Saturday, December 30, 2006

IEEE Spectrum - December 2006


Notes on December 2006 edition of IEEE Spectrum:

Engineers and autism:
Simon Baron-Cohen, of Cambridge University, has proposed a theory concerning the link between engineers (and other systemizers, such as mathematicians) and autism spectrum disorders. According to this theory, in the past such people rarely met others like themselves. Now, however, because professions tend to group people by psychological type, the probability that two such people marry and have children has greatly increased. This has led to a 'concentration' of the genes responsible for systemizing behaviour, which in turn raises the chances of a child showing extreme instances of such traits, i.e. autism. He cites some striking statistics that appear to support the theory: engineers are twice as likely to have autistic children, and the relatives of autistic people display higher than average levels of systemizing. Although this sounds like bleak news, these 'autistic traits' may not be all bad. There is no doubting the severe disability that autism can entail, but taking Asperger's syndrome as an example: while the condition often leads to isolation, it is also often accompanied by very positive mental capacities, sometimes even cited as genius (Einstein and Newton are two widely quoted possible examples).
From IEEE Spectrum, December 2006 issue, p6

Robot waiters:
A restaurant in Hong Kong has attempted to dispense with human waiters and replace them with automated robots. Built by Cyber Robotics Technology (and costing US$5000 each), they were designed to seat customers, take orders (via touchscreen), avoid customers, deliver the food, and even respond to single-word commands. A great idea, one would think. However, in reality, they needed direct human supervision, thereby defeating their practical purpose and leaving them as novelty items. I think this story is fairly indicative (from my limited experience anyway) of other supposedly autonomous robots - requiring the helping hand of human control to be able to operate to specification.
From IEEE Spectrum, December 2006 issue, p17

Other stories:
- The emergence of the London Stock Exchange as the location of choice for technology start-up companies, over the more traditional NASDAQ - p10
- India's space aspirations, including their own GPS-style system and rocket systems - p12
- The possible disappearance of traditional film cameras, with the ever increasing forays of electronics companies into the realm of photography - p13
- Smart, wireless parking systems - p14
- A review of Sony's PS3 system and its 'monster' 8-core processor, the Cell - p18
- Social entrepreneurship: Benetech and the human rights project - p25
- The story of Jack Morton: the brilliant man who pioneered the transistor at Bell Labs, but who never appreciated its potential for microchips, which ultimately led to AT&T falling behind in the microchip race - p31
- The present and potential of the digital cinema setup and experience, with an insert on the past and future of 3D cinema - p37
- A short exposé on taking risks with your career - p44
- The benefits, and drawbacks, of virtual private networks: a case study of Relakks (www.relakks.com) - p45
- The Faraday cage wallet, designed to protect personal information from RFID-bearing identity thieves (especially with the newly proposed RFID passports fast becoming a reality) - p46
- The 'unobvious' rule in patent applications: put to the test in the US Court of Appeals. I think this one has the potential to change the way in which research is commercialised, depending on the outcome of the challenge - p47
- The Wiki world: the rise of the wiki, and the way it's changing the world, especially the language - p52

Monday, December 11, 2006

An uncanny resemblance... (a bit of fun)

Apologies for the lack of posting over the last few weeks - I blame illness (probably the dreaded man-'flu) and a lot of other stuff to do.

Trawling through past issues of the Piled Higher and Deeper cartoon archive in my spare time, I came across one which described my working environment with uncanny accuracy... This cartoon describes (tongue-in-cheek) the environment that a 'typical' PhD student has to work in. The first three pictures are an almost exact replication of the room I work in and even my desk! I was wondering if this really is a 'universal constant' of postgrad research, or whether I am fitting myself into a well accepted stereotype?

Thursday, November 30, 2006

Scientific Impact of Nations


This figure is from a BBC news story on the new European Commission Framework 7 research programme. It shows how the wealth intensity of various nations compares with their scientific impact (as measured by citations). A couple of nations surprised me (through my ignorance more than anything else): the overwhelming advantage that Switzerland has, and the relative modesty of the US's citation intensity, for starters.

Tuesday, November 28, 2006

Comprehensive Working Memory Activation in SOAR

Notes on Nuxoll, Laird and James (2004), "Comprehensive working memory activation in SOAR", Proc. of the 6th Int. Conf. on Cognitive Modelling, pp. 226-230

The SOAR cognitive architecture uses exclusively symbolic representation and processing. A recent addition to the basic working memory/procedural memory components of the SOAR architecture is an episodic memory component. An essential part of its functioning is the retrieval of the most relevant episode. This is a non-trivial problem due to the amount of irrelevant detail in each episode. Biasing potential matches is one possible solution: this paper explores the use of activation as the biasing factor.

Working memory in SOAR is made up of a collection of attribute-value pairs, such as the current state (including external sensory information and internal inferences). If the contents of working memory match the condition of any production rule in procedural memory (of the form condition->action), then that rule fires (or is executed) - even if multiple rules are activated. Each rule has the capability of modifying the contents of working memory, resulting in a possible action, or the firing of another rule. The characteristics of the SOAR architecture which are relevant to this particular study are as follows: (1) simultaneous rule firing (in the case where the contents of working memory match the conditions of multiple productions); (2) operators (in the case where multiple rules fire, they each propose an operator; only one of these operators can be selected for application to the contents of working memory); (3) the decision cycle (propose, select and apply); (4) persistent working memory (two types: o-supported, indicating that the contents of working memory remain intact until explicitly modified or cleared, and i-supported, which are cleared from working memory when the rule which instantiated the particular element ceases to fire, i.e. match working memory).
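To make the propose/select/apply cycle concrete, here is a minimal toy sketch in Python. It is not the actual SOAR implementation: the rule format, the selection policy and the attribute names are all illustrative assumptions of mine.

```python
# A toy sketch (not the actual SOAR implementation) of the propose/select/apply
# cycle described above: working memory as attribute-value pairs, productions
# as condition->action rules, and a single operator chosen from all proposals.

working_memory = {"colour-ahead": "blue", "hunger": "high"}

# Hypothetical productions: each fires if all of its conditions match WM,
# and proposes an operator (here just a labelled action).
productions = [
    {"name": "propose-eat",  "conditions": {"hunger": "high"},       "operator": "eat"},
    {"name": "propose-move", "conditions": {"colour-ahead": "blue"}, "operator": "move-forward"},
]

def decision_cycle(wm, rules):
    # Propose: every rule whose conditions match WM fires simultaneously.
    proposed = [r["operator"] for r in rules
                if all(wm.get(attr) == val for attr, val in r["conditions"].items())]
    if not proposed:
        return None
    # Select: only one operator is applied per cycle (the selection policy here
    # is a placeholder; SOAR uses preferences).
    chosen = proposed[0]
    # Apply: the operator may modify working memory, possibly enabling other rules.
    wm["last-operator"] = chosen
    return chosen

print(decision_cycle(working_memory, productions))  # e.g. 'eat'
```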

The original study to employ activation levels in the SOAR architecture was Chong (2003). That study proposed that part of the contents of working memory were subject to the effect of activation levels, where an item in working memory with a low activation would eventually be removed, even if it was an o-supported working memory element. The paper under review here changed this to apply to the entire contents of working memory (apart from i-supported elements, which are automatically removed anyway), although the authors kept the underlying principle of activation alteration: the more an element is used, the higher its activation level, and vice versa for little used elements (a logarithmic decay is used). The activation assigned to a newly created production is based upon the activations of those elements which contribute to its creation - a different approach from methods which automatically assign newly created rules a relatively high initial activation level. One final general point on activation levels is that each reference to an element in working memory has its own activation level, with the overall element activation being the sum over its references. Thus, each reference to an element can decay independently. For further details please refer to the paper.
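Purely as an illustration (these are not the paper's exact equations), the per-reference decay and summation idea might be sketched like this in Python, with the decay constant and reference ages invented for the example:

```python
import math

# Illustrative sketch only: each reference to a working-memory element decays
# independently with age, and the element's activation is the sum over its
# references, so recently and frequently used elements stay 'hot'.

def reference_activation(age, decay=0.5):
    # Activation of a single reference falls off with the log of its age.
    return -decay * math.log(age)

def element_activation(reference_ages):
    return sum(reference_activation(a) for a in reference_ages)

# An element referenced recently and often beats one referenced once, long ago.
print(element_activation([1, 2, 3]))   # frequently/recently used element
print(element_activation([50]))        # little-used element
```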

This updated version of SOAR is then tested in a 16x16 gridworld, in which the SOAR architecture controls the movements of an agent (or 'eater'), in order to examine the role activation has in aiding learning, specifically in retrieving past episodic experiences, as mentioned previously. In this case, retrieved past experiences were used as a comparison to the action the agent was currently considering, thus illustrating the importance of selecting the 'correct' past experience. When activation was enabled, the eater showed an approximate 30% improvement in performance over an agent which did not use activation to select an episode.
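The episode-selection step could be sketched roughly as below; the scoring rule, attribute names and activation values are my own assumptions rather than the paper's implementation:

```python
# Hedged sketch of activation-biased episode retrieval: candidate episodes are
# scored by how many cue elements they share with the current situation, each
# match weighted by the cue element's current activation.

def score_episode(episode, cue, activations):
    return sum(activations.get(attr, 0.0)
               for attr, val in cue.items()
               if episode.get(attr) == val)

def retrieve(episodes, cue, activations):
    return max(episodes, key=lambda ep: score_episode(ep, cue, activations))

episodes = [{"colour": "blue", "food": "present"}, {"colour": "red", "food": "absent"}]
cue = {"colour": "blue", "food": "present"}
activations_now = {"colour": 0.2, "food": 0.9}
print(retrieve(episodes, cue, activations_now))  # the first episode wins
```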

Because i-supported working memory elements do not have an activation level, there is the possibility of i-support masking, which occurs when the o-supported elements supporting an i-element are removed, thus removing the i-element, even though the i-element may have been referenced (refer to figure 5). To rectify this problem, a 'pay it backward' system was implemented, whereby the activation levels of supporting o-elements may be increased if a dependent i-element is referenced. At this stage, the 'pay it forward' scheme - basing the activation level of newly created productions on the activations of those productions from which they were created - was also introduced. The overall results of experimentation (still using the eater simulation scheme) were clear: the pay-it-backward addition improved performance over activation alone, and the addition of the pay-it-forward scheme showed still further improvement.
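A hedged sketch of the 'pay it backward' idea, with an invented dependency structure and boost value purely for illustration:

```python
# Sketch only: when an i-supported element is referenced, some activation is
# passed back to the o-supported elements it depends on, so that they (and
# therefore the i-element itself) are not prematurely removed.

activations = {"o-elem-1": 0.4, "o-elem-2": 0.1}     # o-supported elements
depends_on = {"i-elem-A": ["o-elem-1", "o-elem-2"]}  # i-element -> supporting o-elements

def reference_i_element(i_elem, boost=0.2):
    # Referencing the i-element 'pays backward' activation to its supports.
    for o_elem in depends_on.get(i_elem, []):
        activations[o_elem] += boost

reference_i_element("i-elem-A")
print(activations)  # both supporting o-elements gain activation
```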

Saturday, November 25, 2006

The Hippocampus, Memory and Place Cells

Notes on "The hippocampus, memory and place cells: is it spatial memory or a memory space?", Eichenbaum et al., Neuron, vol 23, pp 209-226, 1999

I found this paper at just the right time in terms of my research. When looking at navigation tasks for robots, and when trying to derive inspiration and lessons from natural systems, one cannot get far without coming across place cells. Place cells are neurons located in the hippocampus (although there are possibly also cells in other brain regions with somewhat similar behaviours?), and are so named because they can be observed to fire when the animal under observation is in a specific location in its environment. This observation is very robust in rats in particular (although other species have been used), and has led to the now pervasive view that place cells form the basis of a cognitive map - i.e. a map of the environment upon which cognitive operations may be performed (such as planning, for example). These maps may essentially be seen as allocentric 2-D coordinate representations of the environment - a property which every computational model of place cell functionality has inherited. It is interesting to note, however, that while this has become generally accepted theory in the animal sciences, neuropsychologists who study memory in primates and humans have been very slow to take on the theory. This is mainly due to the contention of the former that the hippocampus is dedicated to spatial memory in rodents (the aforementioned cognitive map), whereas in humans the global memory deficits caused by hippocampal lesions are well known, casting doubt on the solely spatial memory proposition.

The aim of the paper is to review the evidence and put forward a theory of place cell function which is consistent with both camps. The classic literature on place cells in rats is first reviewed (mainly work by O'Keefe et al). Single neuron (extracellular) recordings are the most common means of measuring place cell activity. As mentioned, it has been well documented that these fire when the rat is in specific locations in the environment. However, as was noted in the initial experiments concerning the existence of these cells, but largely ignored in subsequent studies, place cells also fire for non-spatial cues, which clearly isn't consistent with the cognitive map theory of place cell functionality. Additionally, subsequent studies looking for further evidence in support of the cognitive map theory, namely homogeneity and continuity of spatial representations and the binding of spatial representations in a cohesive framework, found evidence against cognitive maps. First, evidence points to the fact that the place fields of place cells do not provide a continuous map: clustering of place fields occurs, indicating the over-representation of some spatial areas with respect to others. Secondly, it appears as though place cells involve a collection of independent representations, each encoding the spatial relations between some subset of cues. These three points against the cognitive map theory, along with some others, indicate the need for a rethink on this subject.

The authors duly oblige with an alternative account: instead of a cognitive map solely tied to spatial stimuli, there exists a memory space: individual hippocampal cells encode regularities present in the animal's every experience, including spatial and non-spatial cues and behavioural actions. On a side note, this appears to me to be consistent with the idea of "fast" or "one-shot" learning in the hippocampus, and "slow" learning in the neocortex. It also seems to be not inconsistent with Fuster's theory of overlapping cortical hierarchies and distributed neural representations, upon which I have written in previous posts.

According to this theory, different specialities of neurons arise in the hippocampus as a direct result of the different possible combinations of inputs and input weights, the history of coactive inputs (essentially Hebbian learning, I believe...), and where these inputs are derived from (from which part of the cortex, or brain). This new theory thus leads to an alternative explanation of the activity of place cells: instead of forming a cognitive map as described previously, they may be characterised as a set of activations along a temporal sequence as the rat moves in different trial episodes. This would also apply to the learning and performance of other non-spatial tasks, which would thus account for place cell activity for non-spatial cues. Given this account, the hippocampus is thus perceived to be central to the functioning of episodic memory.

The paper concludes with the observation that this model can be distinguished from spatial mapping theories in that it proposes a set of mechanisms which account for both spatial and non-spatial memory dependent on the hippocampus. From the point of view of the first two sentences of this post, and my observation on this model's relation to Fuster's cortical hierarchies, it seems possible (or not-impossible as I'd prefer to say) that navigation can be described in terms of associative links, instead of having to rely on an internally generated cognitive map.

Monday, November 20, 2006

"Unified theories of Cognition" - Chapter 1

Mind and Theories: Notes on chapter 1 of "Unified theories of Cognition", Allen Newell, 1990

What is a theory? (p13) "Let there be some body of explicit knowledge, from which answers can be obtained to questions by enquiries. Some answers might be predictions, some might be explanations, some might be prescriptions of control. If this body of knowledge yields answers to those questions for you, you can call it a theory." "There is little sense in worrying that one body of knowledge is 'just a collection of facts' (such as a database of people's heights and ages) and another is 'the fundamental axioms of a subject matter' (such as Newton's three laws plus supporting explication). The difference is important, but it is clear that they answer different sorts of questions, have different scope, and have different potential for further development: i.e. different bodies of knowledge."

Theories are to be nurtured and changed and built up. One ought to be happy to change them in order to make them more useful: almost any body of knowledge can enter into a theory if it works. An especially powerful form of theory is one which describes a body of underlying mechanisms, whose interactions and compositions provide the answers to all the questions we have.

The word 'cognition' emerged in part to indicate the central processes that were ignored by peripheral perception and motor behaviour. Yet one problem with cognitive psychology has been its persistence in thinking about cognition without bringing in perceptual and motor processes.

Language should be approached with caution and circumspection. A unified theory of cognition must deal with it, but I [Newell] will take it as something to be approached later rather than sooner. [This is a view I personally agree with - perception and movement evolved before language, so I take that to be an indication of what I will call, perhaps unjustly, more fundamental processes, which are thus of more interest - to me anyway :-)]. We cannot face the entire list [...of human cognitive capabilities...] all at once, so let us consider it to be a priority list, and work our way down from the top. What I mean by a unified theory of cognition is a cognitive theory that gets significantly further down the list than we have ever done before.

According to Newell, the constraints that shaped the human mind are (in the order he listed them): (1) Behave flexibly as a function of the environment; (2) Exhibit adaptive (rational, goal oriented) behaviour; (3) Operate in real time; (4) Operate in a rich, complex, detailed environment, capable of perceiving an immense amount of changing detail, using vast amounts of knowledge, and controlling a motor system of many degrees of freedom; (5) Use symbols and abstractions (known from introspection); (6) Use language, both natural and artificial; (7) Learn from the environment and from experience; (8) Acquire capabilities through development; (9) Operate autonomously, but within a social community; (10) Be self-aware and have a sense of self (meta-cognition?); (11) Be realisable as a neural system; (12) Be constructable by an embryological growth process; (13) Arise through evolution.

Concerning point 5, I believe that one must be very careful when using introspection to determine constraints of the human mind. For example, introspection tells us that thoughts occur serially; however, it is well known that the brain is a massively parallel system. This seemingly fundamental difference is explainable by theories of consciousness (e.g. Baars's Global Workspace Theory), but serves as an example of why introspection should be used with caution.

Continuing with the notes... There is a production system called Grapes (Sauers and Farrell, 1982) that embodies the basic production action and automatic creation mechanisms. Finally, there is a book on "Induction" (Holland, Holyoak, Nisbett and Thagard, 1987), which is an interdisciplinary effort using different system vehicles: Holland's classifier systems and the more classical problem-solving systems.

Saturday, November 18, 2006

Sleep is good...

Just as I was slowly drifting off to the land of nod, I came across this post by Deric Bownds (MindBlog). As I'm tired, and since it concisely says what needs to be said, there's a copy below. Apologies.

Memory enhancement during your sleep...just wear an electric head strap?
I'm wondering how long it is going to be before we start seeing advertisements for "effortless memory enhancement" devices inspired by the work of Marshall et al reported in Nature. (For example, a tiara that places button electrodes bilaterally over the mastoids and frontolateral cortex and generates a low oscillating current around 0.75 cycles per second during non-REM sleep). Although I'm tempted to cook down their description to make it a bit more palatable, their abstract does do the job: "There is compelling evidence that sleep contributes to the long-term consolidation of new memories. This function of sleep has been linked to slow [...] per se is unclear, but can easily be investigated by inducing the extracellular oscillating potential fields of interest. Here we show that inducing slow oscillation-like potential fields by transcranial application of oscillating potentials (0.75 Hz) during early nocturnal non-rapid-eye-movement sleep, that is, a period of emerging slow wave sleep, enhances the retention of hippocampus-dependent declarative memories in healthy humans. The slowly oscillating potential stimulation induced an immediate increase in slow wave sleep, endogenous cortical slow oscillations and slow spindle activity in the frontal cortex. Brain stimulation with oscillations at 5 Hz—another frequency band that normally predominates during rapid-eye-movement sleep—decreased slow oscillations and left declarative memory unchanged. Our findings indicate that endogenous slow potential oscillations have a causal role in the sleep-associated consolidation of memory, and that this role is enhanced by field effects in cortical extracellular space."
posted by Deric Bownds at 5:42 AM

Thursday, November 16, 2006

Network Memory

In this paper on network memory, Prof. Fuster presents the view that memories are held in the brain not in localised regions, but in distributed networks that span the entire neocortex. There is however some organisation to this: the overlapping cortical hierarchies I mentioned in a previous post.

"The capacity to store information about oneself and ones environment is present throughout the nervous system. Thus, almost all regions of the brain store memory of one kind or another. In primates, the memory of past experience is stored largely in the neocortex - the phylogenetically newest part of the cerebral cortex." These are the first few sentences of the paper, and they describe quite concisely the basis of the paper. He goes on to briefly overview the history of memory research, using it as the basis of the assertion that memories are "essentially associative; the information they contain is defined by neural relationships" (based on Hebb's famous "if they fire together, they wire together" principle of neural connection formation). Through these associative processes cells can become interconnected into functional units of memory - which may represent simple sensory memories (or images). Using this as the basis of complex memories, personal memories must therefore be made up of vast numbers of these relatively simples 'units' of memory interconnected in complex networks

"It is reasonable to assume, as Hayek did, that memory and perception share, to a large extent, the same cortical networks, neurons and connections." Thus, new memories or perceptions are expansions of previously existing ones - additional associations to pre-existing networks. Due to this, any individual neuron, or small group of neurons, may be part of numerous networks and thus memories.If I may make use of another quote which I think is one with important implications (even if it is somewhat fuzzy from certain points of view): "Memory networks are most likely to develop, at least partly, by self-organisation, from the bottom up, that is, from sensory or motor cortical areas towards areas of association. They also probably develop in part from the top down, guided by attention and prior memory stored in the association cortex; here the synchronous convergence would be between new inputs and inputs from old reactivated networks. In any case, networks grow on a substrate made of lateral as well as feedforward and feedback connections." The result of this type of organisation would be hierarchical memories, which is emergent from the interactions of multiple memory units. However, this self organisation does not start from scratch with a newborn: phyletic memory (or memory of the species - that which we are born with) is postulated to determine what the structure of the sensory and motor cortices is at birth - in other words, they already contain the necessary information to begin interacting with the environment from information which is genetically defined. Neural plasticity of these regions (possible remaining to adulthood) allows these memories to be refined before becoming relatively fixed for life. This phyletic memory thus serves as the foundation upon which personal memories may 'grow' - fusing so that there is no way to distinguish between the two.

Perceptual memory is memory acquired through the senses. A hierarchy exists, from "sensorially concrete" (memories of elemental sensations of all modalities) at the bottom of the hierarchy, to "conceptually general" (abstract concepts which, although originally acquired through sensory experience, have become separable for cognitive operations) at the top. As one rises in the hierarchy, the networks representing memories at each level become more distributed and widespread in the cortex. Perceptual memory with this organisation does not, however, persist independently of motor memory (described below) - there are numerous reciprocal connections and associations between the two hierarchies, which naturally has important implications for sensory-motor integration and working memory.

Motor memory is the representation of motor acts and behaviours, including much, if not all, of what has traditionally been defined as procedural memory. Much of the motor memory in the lower levels of the hierarchy is essentially phyletic (i.e. the fulfillment of basic drives, such as defensive reactions); however, it is conditionable and subject to control from 'higher' levels. The prefrontal cortex is the highest level of the motor hierarchy, which indicates a role not only in the representation of complex actions, but also in the operations for their enactment, including working memory.

The next part of the paper is concerned with the dynamics of memory. At any given time, most long-term memory is out of consciousness, presumably equating to the relevant neural networks being relatively inactive. Reactivation occurs through associative processes of recall or recognition, which may be due to either internal or external stimuli. The hippocampus is implicated strongly in this reactivation process. Network dynamics can best be observed using electrical stimulation or neuroimaging. Using the monkey task described in my other post on Fuster's cortical hierarchies, the reactivation of networks can be observed in both posterior regions (sensory recognition/recall) and frontal regions (maintenance/planning/action). Please watch the short video clips showing this spread of activation (links from this post) - they really are enlightening. By this view, working memory is active memory, and is thus distinguished by the state of the network, and not the cortical location of activations (although this is dependent on the two distinct cortical hierarchies just reviewed - thereby adding a location requirement on working memory).

In the concluding comments, Fuster notes that even though a hierarchical structure is in place, this does not mean that they are rigid and stored in separate cortical domains. Instead, different types of memories are made up of numerous 'elements' distributed across the different levels of the hierarchy (or both hierarchies as the case may be). The final note that the paper makes is concerning working memory and its relationship to long-term memory. From this hierarchy setup, the two are one and the same, although as I noted in the previous paragraph, merely saying that working memory is 'activated' long-term memory isn't sufficient - from my understanding of how working memory would fit within this hierarchical network theory, working memory would be made up of those activated networks across both the perceptual and motor hierarchies (and at multiple levels within those hierarchies) such that some form of coordination (planning?) is required between the two 'types' of memory. In other words, executing a motor action or recognising an object in the visual field in themselves do not require working memory (even though by this theory they would result in activated cortical networks), but identifying an object to be manipulated in some way would require working memory (coordination between the activated networks across both sensory and motor hierarchies).

Apologies for the somewhat disorganised nature of this post. It was written across a couple of periods, and was just an organisation of my thoughts...

Jumping Spiders: an enigma of the natural world

Thoughts and notes on the Jumping spider - implications for the development of "cognitive robotics"?

The jumping spider engages in planning, and other 'intelligent' behaviour which seems inexplicable given that its brain is the size of a pinhead, i.e. very small. They do however have independently moving eyes (what is the structure of these?), each of which on its own is larger than the brain. Does the eye in itself provide an additional means of 'processing' information for the brain, enabling more complex behaviour? Does the spider, through development from egg to adult, exhibit developmental learning (learning from experience), or does behaviour appear to be 'hard-wired' as with other insects (cognitive development occurring across multiple generations rather than within a single generation)?

BBC documentary, approx 2pm, Sunday 12th November 2006, "Spiders from Mars" (not sure about this reference, sorry, can't find the BBC page on it...)

Link: Wikipedia article and Jumping Spiders of the World

Tuesday, November 07, 2006

The notion of "chunking" for cognitive models

These notes are taken largely from Allen Newell's observations as described in his William James Lectures (and the subsequent book, "Unified Theories of Cognition", 1990).

Chunking has a long history in psychology - the classic study into the capacity of short-term memory by Miller (1956) introduced the term in the most commonly used context. It is a unit of memory organisation, formed by bringing together a set of already formed chunks to form a larger unit. It implies the ability to build up structures recursively, thus leading to the hierarchical organisation of memory, and it appears to be a ubiquitous feature of human memory (sits reasonably well with Joaquin Fuster's view of memory being based upon associations at all levels).

Using the example of the effect of practice on task completion time, this memory organisation leads to three general propositions. Firstly, chunking occurs constantly and at a (fairly?) constant rate: more experience results in more chunking. Secondly, performance of a task is faster if there are more chunks relevant to the task. This presents a direct conflict with processing in current digital computers: the more rules there are, the greater the computational overhead, and thus the slower the processing. Thirdly, the hierarchical structure predicts that higher-level chunks will apply to fewer situations than lower-level chunks: the higher the chunk in the hierarchy, the more sub-patterns it has, and thus the less likely it is to match the current situation exactly.
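As an illustration of the third proposition, here is a tiny sketch (with made-up chunk names) showing that a higher-level chunk, being composed of more sub-patterns, matches fewer situations than its constituents:

```python
# Chunks are recursive groupings: a primitive chunk is a string, a higher-level
# chunk is a tuple of sub-chunks. A chunk matches a situation only if all of
# its sub-chunks (recursively) are present, so higher chunks are more specific.

def chunk_matches(chunk, situation):
    if isinstance(chunk, str):
        return chunk in situation
    return all(chunk_matches(sub, situation) for sub in chunk)

low_level = "red-light"                          # a primitive chunk
mid_level = ("red-light", "car-ahead")           # chunk of chunks
high_level = (mid_level, "pedestrian-crossing")  # higher still

situation = {"red-light", "car-ahead"}
print(chunk_matches(low_level, situation))   # True: matches many situations
print(chunk_matches(mid_level, situation))   # True
print(chunk_matches(high_level, situation))  # False: more specific, matches fewer
```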

These initial observations in general, and this application as an example, provides the theoretical basis of the SOAR cognitive architecture. Thus, the observation of human behaviour leads to general behavioural traits, which may (or may not) be considered as "laws of operation". Modelling these most general "laws" may thus provide further insight into human functional operation.

Friday, November 03, 2006

The PM's views on Science in the UK

Some quotes from Prime Minister Tony Blair on his 'Vision for Science':

The country's future, he says, lies "through science and technology helping us - not just to gain more benefits in terms of material possessions and consumer goods which obviously are very important to people"..."But also things like the environment. We won't solve climate change without the best scientific minds"..."We're not going to be able to treat people for diseases unless we have the best scientific minds."

"I want to enthuse our young people particularly with the prospect of working in science. It's not just about being a boffin in a laboratory - it's actually about practical application and transforming lives, tackling the world's problems and doing so in a very practical way." - Well, I'm glad somebody noticed... :-p

Apparently, science funding has increased hugely over the past few years (which I'm not disputing). I just hope this means that I can get funding!

Wednesday, November 01, 2006

Scientific Reporting/Journalism in the UK?

The topic of yesterday's post, self-awareness in Asian elephants, seemed to be a popular one in the British media. I myself first heard about the paper on the BBC Science website, which gave a concise breakdown of the contents, and included an interview (albeit cut down) with one of the authors and an expert from another research institute. All round a reasonable summary, I felt, which provided links to the original source and which kept the work in context. However, yesterday evening I watched the Channel 5 news, which also had a short piece on the research. It showed the young Asian elephant playing around in front of the mirror, which was fair enough from the 'ahh isn't it cute, let's not turn over from this channel yet' point of view. However, the voice-over description of the work was, in my humble view, bordering on the farcical. OK, they mentioned the phrase 'self-awareness' once (I think it was just the once though...), but then spent the rest of the report making comments along the lines of "scientists have discovered a vanity streak in elephants", and (referring to the cross placed above the elephant's eye, which one of the youngsters kept touching on herself, thus indicating self-awareness) "as you can see, she keeps touching the cross with her trunk: maybe she just doesn't like the style...". These aren't direct quotes, but they indicate the tone of the report. You might of course say that I'm being overly picky and critical, and that the journalist responsible was trying to pitch the story at us, the great unwashed (the general public), but from what I saw, there was no meaningful content other than pictures of cute little elephants. I didn't see the BBC TV news version, but I like to think that they did a better job of it. Yes, I'm a BBC snob... :-p

Tuesday, October 31, 2006

Convergent Cognitive Evolution and Consciousness

Self-awareness is considered to be a faculty of those animals at the top of the 'cognitive hierarchy', and is almost synonymous with the presence of what one could consider intelligence (in animals anyway), not to mention its implications for consciousness. So which animals exhibit self-awareness? In this paper by Plotnik et al (PNAS, current issue), the possibility of elephants (in this case Asian elephants) displaying self-awareness was examined. They used the standard mirror test: a cross (or mark) is placed on the animal in a location where it would be impossible to see without the aid of a mirror (in this case above one eye). When a mirror is present, an animal displaying self-awareness will (eventually) realise that the thing it sees in the mirror is itself, and thus examine the foreign mark on its own body, and not on its reflection. See the paper for further details.

What was found was that a single elephant displayed this self-aware behaviour, out of a group that was tested. Not so impressive at first blush perhaps. But when it is considered that only around half of monkeys tested using this procedure display self-awareness, the results of this study carry more weight. Previous to this study, self awareness was considered to be the domain of humans, monkeys, and dolphins. The behaviour of the elephant in this study was comparable to the behaviour of these other animals.

In the discussion of their results, the authors suggest that self-awareness capabilities may be indicative of a clear distinction between self and other, which would be necessary for social interaction. Hence the suggestion that convergent cognitive evolution has occurred in these species: that the 'property' of self-awareness in an agent may emerge from the need to interact socially. In other words, if these results are taken into account, social interaction is a major driving factor for cognitive evolution. This would have huge implications for the study of consciousness, in both biological and artificial systems. If self-awareness is considered to be a basic type of consciousness, then the possibility arises that the only reason consciousness emerged was the development (perhaps through other evolutionary driving factors) of social interaction networks and hierarchies.

Monday, October 30, 2006

Laughs and Giggles

Not in any way related to research of any form, but just wanted to share a website someone told me about today. I imagine most know of it already, but anyway:

Piled Higher and Deeper: An online graduate student comic strip series. It's been going for years, and it's quite funny. After a brief browse through the archives, there are two which tickle me: Anatomy of a group meeting presentation, and Beyond the Scope of Research. I'm sure there are many more...

Thursday, October 26, 2006

Book: Spiking Neuron Models

Link to an on-line book: Spiking Neuron Models: Single Neurons, Populations, Plasticity
Wulfram Gerstner and Werner M. Kistler, Cambridge University Press (August 2002)

The book is an introduction to spiking neuron models aimed at graduates - in fact, the authors say that they used it as lecture material, one chapter per lecture. It would also then naturally be useful as a general reference book. Although I have not yet had a chance to read it, I do fully intend to, as it seems to be written at the right level, and in a genuinely instructive manner. The introduction lays out all of the concepts necessary for understanding the rest of the book (an introduction to neurons and their models, and an overview of rate and spike codes, for example). The rest of the book is divided into three parts, which are described concisely in the Preface:

In order to understand signal transmission and signal processing in neuronal systems, we need an understanding of their basic elements, i.e., the neurons, which is the topic of Part I. New phenomena emerge when several neurons are coupled. Part II introduces network concepts, in particular pattern formation, collective excitations, and rapid signal transmission between neuronal populations. Learning concepts presented in Part III are based on spike-time dependent synaptic plasticity.

Wednesday, October 25, 2006

Links #1

A couple of links I found today which may be of interest:

- AGIRI: Artificial General Intelligence Research Institute homepage. The institute's aim is to "foster the creation of powerful and ethically positive Artificial General Intelligence". The online discussion forum may be interesting.

- MachinesLikeUs: What seems to be a relatively new website on matters concerning evolutionary thought, cognitive science, and artificial life and intelligence. On the welcome page, it lists four concepts which it aims to promote, the second of which ("Religions and their gods are human constructs, and subject to human foibles") seems to me to be unnecessary and even counterproductive in the context of investigating fundamental cognitive processes and the foundations of intelligence. Having said that, the discussion forum has some interesting topics (on consciousness for example).

- As I mentioned in my last post, The Brain 0 Project webpage contains a number of resources which explain the concepts I attempted to briefly review yesterday: that the human brain may be considered a Universal Learning Machine.

Dynamically Reconfigurable Universal Learning Neurocomputer

In this lecture by Victor Eliashberg, I was introduced to the idea of the brain as a universal learning computer. Not having been familiar with any of the material covered, I found the lecture itself quite confusing, with seemingly quite a few disparate elements 'thrown together'. After a little further reading into the topic, however, the fundamental thesis of the work became quite clear.

The underlying motivation is the assertion that in order to understand the brain, one must study it as a whole, and not decompose it into functional regions for relatively independent study (the major approach used in cognitive psychology). The reason for this decomposition in most studies of the brain is that it is practically impossible to study the human brain in its entirety when fully developed – and it is in this state that it exhibits the behaviours which are of interest to psychologists etc. Given this, it would then seem reasonable to study the brain in its 'starting state', i.e. the state in which learning has not yet begun.

Given that 'W' represents the world, 'B' represents the brain, and 'D' represents the body (of the human), such that a system (W, D, B) is one which we wish to describe (in other words the behaviour of the human in the world), and that B(t) is the formal representation of B at time t (such that B(0) is the brain in its starting state), then four propositions are made:

(1) It is possible to find a relatively small representation of B(0). It is speculated that this amount of information may fit on a single floppy disk (?!).
(2) Any formal representation of B(t) when t is large (on the scale of years perhaps) would be huge (possibly terabytes). This is due to the presence of the person's personal experiences.
(3) It is practically impossible to reverse engineer B(t) (when t is large) without first reverse engineering B(0).
(4) It is practically impossible to model the behaviour of the system (W, D, B) without a representation of B(t).

Based on these four propositions, the project (known as Brain 0) is to reverse engineer the starting state of the brain. Eliashberg provides a possible model: that of a universal learning computer, based on a universal Turing machine.

Tuesday, October 17, 2006

A Note on the Candlestick approach to writing Project Proposals

I was talking to my supervisor today about the structure of the perfect project proposal, in our case for a grant, but it would probably apply to any kind. A fairly standard view is the funnel approach, where you start with the broad background, and then gradually home in on your precise aims, objectives, issues of interest etc. However, this misses something, we mused. What you also need, to finish it off, is to provide the applications for the work, to enable the reader/reviewer to really see why the work is necessary/important/relevant - the sort of stuff you can't put in the scope, as it's a bit vague or ill-defined. Hence the candlestick approach: you start with the scope of the project, narrow down to the aims, objectives, deliverables, issues of interest, etc., before finishing with a summary of the aims, and the broad implications/applications of the work as the base of the candlestick.

Monday, October 16, 2006

Book: Models of Working Memory

In this book, edited by Akira Miyake and Priti Shah, ten contemporary ‘theories’ of working memory (WM) are outlined. I say ‘theories’ because there are a great number of overlaps between the presented chapters, and so whilst each presents a view in its own right, a number are based upon the same principles. The ten chapters cover the following theories/models (with a very brief synopsis of each):

- Baddeley’s tripartite model of working memory: WM as a functionally and structurally separable memory system from long-term memory. The well known phonological loop and visuospatial sketchpad slave systems controlled by the attentional controller, the central executive. More recently, this model has been augmented with the episodic buffer – invoked to help explain the growing evidence of more than just cooperation between the working and long-term memory systems.
- Nelson Cowan’s embedded processes model of working memory: inspired by the Craik and Lockhart levels of processing information processing architecture, this theory states that WM is simply that region of long-term memory which is in an activated state. Furthermore, a focus of attention is a subset of this, and contains the contents of conscious awareness at any given time. Again, a central executive is used as an attention controlling mechanism, among other things.
- Engle et al’s individual differences model of working memory: very closely related to both Cowan and Baddeley’s models, it views WM as long-term memory plus attention. As the name suggests though, it emphasises the differences in capacity between individuals.
- Lovett et al’s working memory in ACT-R: the ACT-R cognitive architecture is essentially made up of declarative memory (networked nodes, each representing a piece of knowledge, and each having an activation level – this level representing attention), and procedural memory (symbolic, of the type condition->action). Given a current goal, working memory would consist of all those production rules (from procedural memory) and those declarative nodes activated by those rules which are relevant to the current goal. It is a computational model, with development tools freely available on the internet. A minimal sketch of this goal-plus-activation picture appears after this list.
- Kieras et al’s working memory in the EPIC architecture: a symbolic computational architecture with a well developed interface with perceptual and motor processes. Though not explicitly a model of WM, as with ACT-R and SOAR, it does nevertheless operate in certain domains in a way functionally analogous to WM.
- Young et al’s working memory in SOAR: SOAR is a purely symbolic computation model of human cognition. Its working memory space contains the current goal (broken down into subgoals if necessary), and those pieces of information relevant to it/them. Production rules relevant to the goal at the top of the ‘goal stack’ all fire at once, and new rules may be created if no relevant rules are found. Processing, as with ACT-R, is thus goal directed.
- Ericsson and Delaney’s long-term working memory: this theory distinguishes between the short-term working memory discussed by most other theories, and long-term working memory, which is made up of those knowledge structures that enable fast recall from nonactivated long-term memory. These structures are studied by examination of ‘memory experts’, for example people who can remember long series of numbers which theoretically exceed the limits of standard ‘short-term’ working memory.
- Barnard’s working memory in ICS: The interacting cognitive systems model is another general cognitive architecture, however, it began life as a model of Baddeley’s phonological loop. Baddeley’s tripartite structure maps very well on top of it, although each system is described as the interaction of a number of other, more fundamental, modality specific systems.
- Schneider’s working memory in CAP2: The control and automatic processing model is a connectionist model that started life as specifically a model of working memory, but became a general cognitive architecture. In this architecture, a central executive is presented as a hierarchically structured series of neural networks.
- O’Reilly et al’s biologically based computational model of working memory: this model is based on the assumption that the effect of working memory is an emergent property of the interaction of three specialised brain systems: the prefrontal cortex, the hippocampus, and the perceptual and motor cortices. I cannot do justice to the theory here; I would like to review it in a later post. On a quick note: it forms the basis of a computational model of the prefrontal cortex, which has been successfully implemented in robotic setups (presented at COGRIC recently). I also intend to post on this subject in the near future.
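As promised in the Lovett et al entry above, here is a much-simplified, hedged sketch of that goal-plus-activation picture of working memory in ACT-R; the chunk names, activation values and threshold are illustrative assumptions of mine, not part of ACT-R itself:

```python
# Sketch only: declarative chunks carry activation levels, and 'working memory'
# at any moment is the set of productions relevant to the current goal plus the
# declarative chunks those productions activate above some threshold.

declarative_memory = {
    "paris-capital-of-france": 0.9,   # chunk name -> activation level
    "7-plus-5-is-12": 0.2,
    "own-phone-number": 0.6,
}

productions = [
    {"goal": "answer-geography-question", "retrieves": ["paris-capital-of-france"]},
    {"goal": "do-arithmetic",             "retrieves": ["7-plus-5-is-12"]},
]

def working_memory(current_goal, threshold=0.1):
    relevant_rules = [p for p in productions if p["goal"] == current_goal]
    active_chunks = {c for p in relevant_rules for c in p["retrieves"]
                     if declarative_memory.get(c, 0.0) > threshold}
    return relevant_rules, active_chunks

print(working_memory("answer-geography-question"))
```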

When I first approached the subject of WM for my research, Baddeley and Hitch’s tripartite model seemed by far the most influential. This is indeed the case – however, a multitude of models were always present even if just at the fringes. One thing I found most disconcerting was the lack of a standard definition of working memory: it seemed to be used to cover a wide range of things. More recently, however, this prevalence of Baddeley’s model seems to be declining, in favour of what may be described as more functionally general and neurally more explicit models. This book I found extremely useful as a starting point in trying to resolve the issue of definitions in particular, and to provide an overview of the majority of contemporary working memory theories. I highly recommend it. It even provided me with a brief introduction to the cognitive architectures of ACT-R and SOAR. Partly as a result of reading this book, and in conjunction with other research, I settled on a general definition of working memory upon which my current research is aimed: working memory is the interface between cognition on the one hand, and memory on the other.

REF: A. Miyake and P. Shah, "Models of Working Memory: mechanisms of active maintenance and executive control." Cambridge: Cambridge University Press. 1999.

Wednesday, October 11, 2006

Language and Personality - idle thoughts

Having read a number of posts on Developing Intelligence about Language and Cognitive development in general (which, if I might add, I must recommend), I was reminded of something that I was interested in a few years back – the influences of language on what would be best described as personality, especially on those lucky enough to be described as genuinely bilingual. These are just a few thoughts of mine, in no way scientifically verified, simply conjectures based on personal experience and conversations with others.

My mother tongue is English. I can also speak Dutch and French – although not in any way trilingual, I am perfectly capable of a lot more than basic communication with natives of the two languages. One thing I have noticed when speaking Dutch over a period of a few days is a slight difference in behaviour, even attitudes, in myself. Now this is of course pure speculation, though speculation which friends of mine agree with when it is pointed out to them. This 'phenomenon' seems at first glance rather trivial, and indeed I have treated it as such, until I read “Language and Thinking-For-Speaking” on Developing Intelligence. The paper that this post describes is concerned with the classification of nouns with respect to gender-specific qualities, tested on people whose languages have grammatical gender (German and Spanish). For example, subjects were asked to give adjectives describing a series of words, for instance 'key' (masculine in German; feminine in Spanish). If I might borrow the same quote that Chris at Developing Intelligence used to illustrate the results:

"There were also observable qualitative differences between the kinds of adjectives Spanish and German speakers produced. For example, the word for "key" is masculine in German and feminine in Spanish. German speakers described keys as hard, heavy, jagged, metal, serrated, and useful, while Spanish speakers said they were golden, intricate, little, lovely, shiny and tiny. The word for "bridge," on the other hand, is feminine in German and masculine in Spanish. German speakers described bridges as beautiful, elegant, fragile, peaceful, pretty and slender, while Spanish speakers said they were big, dangerous, long, strong, sturdy and towering."

Further discussion of this paper suggests that language may have a profound effect on non-linguistic thought. Given my personal experience, and more general observations, my query is whether truly bilingual or trilingual persons might display slightly different personality characteristics when in a particular 'language mode'. Based on this paper, the answer seems to be a resounding maybe – or rather, it leaves this phenomenon as a possibility rather than an impossibility. As I've mentioned, these are thoughts of mine, not empirically proven theories. However, I do think it would be interesting to see studies conducted with multilingual people to assess this. Quite how personality traits would be 'measured', I am not sure. Perhaps like/dislike judgements in combination with other standard psychological tests. These people would naturally have to be 'immersed' in the testing language before any testing could be useful, and of course the multitude of other factors, such as differences in contemporary culture between the language groups, would have to be accounted for – but I maintain that it would be interesting at the very least if conducted properly. As a tongue-in-cheek final question: are multilinguists all suffering from a latent multiple personality disorder?

Tuesday, October 10, 2006

Cortical Dynamics of Working Memory

Notes on a lecture given by Joaquin Fuster (2006), at the Almaden Institute Conference on Cognitive Computing - http://video.google.com/videoplay?docid=-3002336180397686566&q=almaden+cognitive+computing.

This fascinating lecture provides a summary of Prof. Fuster's work on cortical dynamics, in this case focusing on working memory. He defines working memory at the start as the active (online) retention of information for a prospective action to solve a problem or to reach a goal (with the emphasis on prospective action). This is what I consider to be a fairly generic definition of the purpose of working memory - although naturally, over the course of my research, the systems/dynamics underlying this statement vary quite significantly. From this basic definition, two further qualifications are presented. Firstly, the actively held information is unique to the present context, but is inseparable from past context - in essence, it is long-term memory updated for prospective use. Secondly, working memory and long-term memory share much of the same cortical structure. This distinction I will dwell on for a little. The implication of this statement is, first, that long-term and working memory are functionally distinct systems; secondly, that significant common ground between the two memory systems exists, showing that while they may be functionally separable, they are not neurally separable. If I may mention the 'classic' working memory (WM) theory of Baddeley and Hitch, one can see that the functional separability is very clearly defined in Baddeley's WM (due to it being based on behavioural studies), whereas the second point seems to have been largely neglected, until recently at least (with the introduction of the episodic buffer). On a final note, Prof. Fuster notes that working memory may also be described as attention towards an internal cue - an interesting note in itself.

One thing that I am still slightly bewildered by, and something on which I would be glad of comments, is the difference between working memory on the one hand, and short-term memory on the other. The way I currently understand it, short-term memory may be seen as somewhat of a special case of working memory. Together with the basic sensory stores (the persistence of sensory 'representations' for a short period of time after the occurrence of the stimulus), working memory replaced the Atkinson and Shiffrin view of a short-term memory store. Corrections and other points of view gladly accepted on this point.

Points of note in the presentation (and approximate time in minutes):

- 20mins: when describing the overlapping cortical hierarchies of the frontal lobe and posterior sensory regions, Fuster makes reference to the fact that Semantic memory is a result of the abstraction over time of individual experienced instances (episodic memory). Given that Endel Tulving defined episodic memory as a subset of semantic memory, there seems to be a large discrepancy between the now classical view of the structure of declarative memory, and the theories presented by Prof. Fuster. On the other hand, more recent evidence has suggested that episodic and semantic memory essentially work in parallel (see here - http://www.cus.cam.ac.uk/~jss30/pubs/Graham2000%20Neuropsygia.pdf).

- 31mins: a description of the Cortical Perception-Action Cycle - a fundamental principle of neural operation.

- 48mins: the description of constraints on the modelling of the memory network. Two types of constraint are described: structural and dynamic. In structural terms, the model would have to have a network structure, would have to deal in a relational code (resulting from the fact that memories are defined by associations, at all levels), it must be both hierarchical and heterarchical (in other words, the interaction of multiple hierarchies in parallel - the overlapping cortical hierarchies), and finally it must be capable of plasticity (the networks must not be static, they must be able to change). In dynamic terms, the memory must be content addressable (addressable by content or by association), it must accommodate variability (be stochastic), the system must be capable of reentry (that is, it must be able to update its own 'knowledge', or long-term memory - hence plasticity), there is the obvious necessity for parallel processing (he gives the example of perceptual processing), although having said that, there is a necessity for serial processing when it comes to conscious attention (the Global Workspace Theory of Bernard Baars springs to mind here...), and finally, the model must be able to accommodate category, in both perception and action (another spring to mind: Krichmar and Edelman's brain-based device work http://cercor.oxfordjournals.org/cgi/content/abstract/12/8/818). Although Prof. Fuster notes that these are his personal views on constraints, in my humble opinion they deserve at least acknowledgement when it comes to such model building, due to his vast experience and research credentials.

- Animations of brain activations of patients performing delayed matching-to-sample tasks (the basic working memory task for humans) in various modalities: visual (54mins), spatial (56mins), verbal/auditory (57mins). These are particularly impressive, and display beautifully what he said previously concerning the overlapping cortical networks - I must recommend them.

- 70mins: a question regarding the relationship between perception and memory - essentially, the answer was that perception is shaped by memory, aligning with theories of active perception, or expectation-driven perception/attention.
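Picking up the forward reference above: below is a toy Python sketch (my own, not Fuster's model) of two of those constraints - a relational, associative code and content-addressable recall - using a simple Hopfield-style network. The network size, pattern count and function names are arbitrary choices, purely for illustration.

import numpy as np

# A Hopfield-style associative memory: patterns are stored purely as
# associations between units (a relational code), and recall is content
# addressable - a degraded cue settles back onto the nearest stored pattern.

rng = np.random.default_rng(0)
n_units = 100
patterns = rng.choice([-1, 1], size=(3, n_units))   # three random +/-1 patterns

# Hebbian storage: strengthen connections between co-active units
weights = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(weights, 0)

def recall(cue, steps=20):
    # iterate the recurrent update until the state settles onto an attractor
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1
    return state

# Content-addressable recall: corrupt 20% of a stored pattern and use it as a cue
cue = patterns[0].copy()
flipped = rng.choice(n_units, size=20, replace=False)
cue[flipped] *= -1

recovered = recall(cue)
print("overlap with stored pattern:", (recovered == patterns[0]).mean())

The printed overlap should come out at, or very close to, 1.0: the corrupted units are pulled back by their associations with the rest of the pattern, which is the sense in which recall here is by content rather than by address.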

As a final note, I would just like to mention his conclusions. Firstly, that memories are defined by associations at all levels. Secondly, that hierarchical networks for perceptual memory reside in posterior regions, whereas executive memory networks reside in the frontal cortex. Thirdly, that the prefrontal cortex (at the top of the perception/action cycle hierarchy) mediates cross-temporal contingencies (as eloquently argued in his first book "The Prefrontal Cortex"). And finally, that working memory is maintained by recurrent activity between the prefrontal cortex and the perceptual posterior regions - as shown in those animations.
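As a small aside of my own (not from the talk): a toy Python sketch of that last point - two reciprocally connected rate units, standing in very loosely for a prefrontal and a posterior population, where mutual excitation keeps activity elevated across a delay after a brief cue. All parameters are arbitrary and purely illustrative.

import numpy as np

# Two reciprocally connected firing-rate units; a transient cue drives the
# 'posterior' unit, and recurrent excitation then sustains activity in both
# units after the cue is removed.

def f(x):
    return np.tanh(x)                 # saturating rate nonlinearity

dt, tau, w = 0.01, 0.1, 1.5           # time step, time constant, reciprocal coupling
pfc, post = 0.0, 0.0

for t in np.arange(0.0, 3.0, dt):
    cue = 1.0 if t < 0.5 else 0.0     # sensory cue present only for the first 0.5s
    d_pfc = (-pfc + w * f(post)) / tau
    d_post = (-post + w * f(pfc) + cue) / tau
    pfc += dt * d_pfc
    post += dt * d_post

print("activity 2.5s after cue offset:", pfc, post)   # both remain well above zero

With the reciprocal coupling greater than 1, the quiescent state is unstable once the cue has pushed the units away from it, so the activity reverberates rather than decaying - a crude stand-in for the recurrent prefrontal-posterior activity visible in the animations.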

Tuesday, October 03, 2006

The beauty of "Biologically Non-Implausible"

The expression "biologically non-implausible" is a phrase I first heard used by Murray Shanahan at COGRIC (http://www.cogric.reading.ac.uk), in describing his work on cognitive architectures. For the work of cognitive roboticists such as Shanahan, and many others, I think the term is perfectly suited.

The phrase "biologically inspired" is well known, and is used extensively to describe a wide range of work which carries out some activity, or process, in a way which takes some idea from a natural biological system - although the final instantiation often bears no resemblance to the system from which it took inspiration. And of course, there is no necessity (or obligation) for this: in this way, a task may be completed where it previously would have been overly complex. For example, a number of search and optimisation algorithms are inspired by the workings of an ant colony when searching for food or an alternative nesting site.
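To make the ant colony example a little more concrete, here is a toy Python sketch of the basic mechanism (my own minimal version, loosely echoing the classic 'double bridge' setup rather than any particular published algorithm): pheromone deposit plus evaporation is enough for choices to converge on the shorter of two paths, with each choice made only from the current pheromone levels.

import random

# Toy 'double bridge': ants repeatedly choose between a short and a long
# path to food. Pheromone evaporates everywhere and is reinforced on the
# path just used, in inverse proportion to its length, so choices gradually
# concentrate on the shorter route.

lengths = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}
evaporation = 0.1

def choose_path():
    # probabilistic choice weighted by current pheromone levels
    total = sum(pheromone.values())
    return "short" if random.uniform(0, total) < pheromone["short"] else "long"

for _ in range(200):
    path = choose_path()
    for p in pheromone:
        pheromone[p] *= (1 - evaporation)      # evaporation on both paths
    pheromone[path] += 1.0 / lengths[path]     # deposit on the path just taken

print(pheromone)   # the 'short' entry should end up dominating

The final instantiation bears little resemblance to real ant behaviour - which is rather the point of calling such work 'inspired' rather than 'plausible'.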

Onto the phrase "biologically plausible", and what you get is something with an entirely different connotation. This expression implies not just that the system in question could possibly be instantiated in an actual biological system, but also that it is likely to be instantiated in the actual biological system of interest. I've come across this statement in numerous articles and papers, although not necessarily so explicitly stated. So, for example, based on the activation of such and such a brain region when a patient performs a certain task, it is biologically plausible that this brain region performs function x. The use of the term in this context is, I think, perfectly justified. However, when it comes to building cognitive models - perhaps a model of how a certain task is performed within the brain - I get a little uncomfortable with the term biologically plausible. This, to me at least (and I'm sure many may disagree), seems to overstate the significance of the created model. Such a model may well describe in the greatest detail how a specific task is completed, but to then say that it is biologically plausible, and perhaps go searching for corroborating neural analogues for each processing step, seems to overstate it. In my view, it would surely be unwise to claim biological plausibility for a model of a certain task in a specific domain until it is known how this task fits in the context of the entire human information-processing brain; there may be numerous ways in which such a task could be completed, none of which will be apparent until other aspects of brain function are illuminated.

And so to the expression "biologically non-implausible". This, to me, implies that the presented statement, or system, lies within the realm of possible explanations for the functioning of the biological system, rather than claiming to be the most likely candidate. Basically, this argument boils down to emphasis - or more precisely, to what point of view I perceive these two expressions to emphasise. Largely pedantic perhaps, but in my view an important qualitative distinction. I'm really just justifying why I like the expression "biologically non-implausible".

Saturday, September 30, 2006

The mysteries of memory...

This is a republish of my post on the 19th September:

"All these doth that great receptacle of memory, with its many and indescribable departments, receive, to be recalled and brought forth when required; each, entering by its own door, is hid up in it. And yet the things themselves do not enter it, but only the images of the things perceived are there ready at hand for thought to recall." St. Augustine, 398 A.D., "The Confessions of St. Augustine", Book 10, Chapter 8, paragraph 13.

I came across this quote when looking for a broad definition of memory that I could refer to. The whole chapter is good, but these two sentences sum up for me what memory does. Although this was written more than 16 centuries ago, it is still very relevant to the current state of the art in memory research, at least from my point of view. As a quote on memory, I haven't found better yet, and it was the perfect way to start my first year report!

Hello (again...)

My name is Paul, and I am a PhD student at the University of Reading, UK, and am part of the Cybernetics Intelligence Research Group (CIRG). Very briefly, my research is concerned with the examination of memory, or more specifically, working memory as the interface between memory and cognition. My extended interests cover far more than this though. I hope to more completely describe my work and interests over the next few posts.

I started this blog on the 18th of September this year, with the intention of making notes on my work, and comments on the work of others that I come across - essentially a means of throwing ideas at myself, and a way of making my notes accessible wherever I go. If anyone else happens across them and takes an interest (and provides comments!), then that would be a welcome bonus.

However, I started on a bad note - whining about instant tea. So, this is my attempt to get back on track. For those unlucky few who have come across this blog before: apologies, all the rubbish that was here before has now been deleted. One thing I must mention first of all: my main motivation in reverting to my original intention for this blog is a blog called "Developing Intelligence" (http://develintel.blogspot.com/). This I would have to recommend to anyone - the paper reviews are interesting, and it provides good links to other sources. That is the aim for my blog - albeit with more personal thoughts in relation to my research from time to time.