Scientific Creatures

In which I try to create a bigger picture of general intelligence by combining a number of different concepts. These concepts are:

  • Ashby: W. Ross Ashby’s concept of ‘Intelligence Amplification’.
  • Brooks: Rodney Brooks’s concept of the ‘Subsumption Architecture’.
  • Clark: Andy Clark’s theory of ‘The Extension of Mind’.
  • Dennett: Daniel Dennett’s theory of the ‘Tower of Generate and Test’.
  • Exocortex: the concept of the ‘Exocortex’.
  • Friston: Karl Friston and ‘The Free Energy Principle – A Unified Brain Theory?’
  • Gendler: Tamar Gendler’s concept of ‘Alief’.

Concept 1: The Tower of Generate and Test


Darwin’s Dangerous Idea

In ‘Darwin’s Dangerous Idea’ (1995), Daniel Dennett proposes an evolutionary scale of intellectual development from Darwinian creatures up to Scientific creatures (as noted previously in ‘Dennett’s Dangerous Idea’).

Dennett’s scale is as follows:

  1. Darwinian creatures are created by random mutation and thenceforth are fixed. They are then subjected to the ‘survival of the fittest’ test as-is. The creatures themselves are the ‘hypothesis’, both generated and tested in the environment.
  2. Skinnerian creatures can learn by testing actions in the external environment. Favourable actions are reinforced. The advancement over Darwinian creatures is that the generation of hypotheses moves inside the creatures; the creatures are changed as a result. Named after B. F. Skinner.
  3. Popperian creatures can preselect action from a number of options. They have a model of the external environment inside of themselves and so can consider the results of actions internally before engaging in the external environment, to weed out weak hypotheses so that actions have a better chance of success (as Karl Popper said, we “permit our hypotheses to die in our stead”). Their advancement over Skinnerian creatures is that the testing of the hypotheses has moved inside the creature.
  4. Gregorian creatures import tools (physical or linguistic) from the outside world to create an inner environment which improves both the generators and testers. Named after Richard Gregory. Their advancement over Popperian creatures is that they are using the immediate outside world for improved generation and testing.
  5. Scientific creatures are social. Language allows communication between creatures, eventually leading to scientific methods. Their advancement over Gregorian creatures is that testing is conducted socially which helps to reduce errors that a single individual might make (‘making mistakes in public’).
Die Rückseite des Spiegels (‘Behind the Mirror’)

Level 5 is a subset of level 4, which in turn is a subset of level 3, which is a subset of level 2, which is a subset of level 1.

Dennett is not the first to propose a scale like this and he himself acknowledges that the ‘father of ethology’, Konrad Lorenz, proposed a not dissimilar scheme in ‘Behind the Mirror’ (1973). Both are theories in the field of ‘evolutionary epistemology’.
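To make the generate-and-test distinction concrete, here is a minimal sketch of the step from Skinnerian to Popperian behaviour. It is my own toy illustration, not anything from Dennett: the candidate actions, the ‘world’ payoff and the creature’s imperfect internal model are all invented for the example.

```python
import random

def skinnerian_choose(actions, world):
    # Skinnerian: generate a hypothesis (an action) and test it in the
    # real environment; favourable outcomes would then be reinforced.
    action = random.choice(actions)
    return action, world(action)

def popperian_choose(actions, internal_model):
    # Popperian: test candidate actions against an internal model first,
    # letting weak hypotheses 'die in our stead' before any real action.
    return max(actions, key=internal_model)

# Invented example: the internal model only approximates the real payoff.
world = lambda a: -(a - 7) ** 2           # the true (unknown) consequence of acting
internal_model = lambda a: -(a - 6) ** 2  # the creature's imperfect inner environment
actions = list(range(10))

print(skinnerian_choose(actions, world))          # trial-and-error in the world itself
print(popperian_choose(actions, internal_model))  # pre-selected internally: 6
```

The Gregorian and Scientific levels then amount to improving `internal_model` (and the generator of `actions`) with imported tools and, eventually, with other people.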

Concept 2: Subsumption

Dennett also acknowledges the ‘subsumption architecture’ idea of Rodney Brooks. In brief, this is:

  1. Having created a particular ‘good’ organism, evolution cannot make a better one by radically reshaping it via genetic mutation and can only build on the existing DNA blueprint.
  2. A better organism can evolve by building a new layer on top of the existing structure. This higher layer can tap into the lower level.
  3. More and more layers can evolve ‘upwards’.
  4. The lower levels provide fast responses (e.g. ‘reflex’) whereas higher layers can produce more intelligent responses.

Figure 1: Subsumption

As Dennett puts it:

“earlier design decisions come back to haunt – to constrain the designer”.

They:

“put major constraints on the options for design improvement”.

These layers are shown diagrammatically in the figure. Arrow 1 represents perceptions and arrow 2 represents actions. Layer H evolves on top of the original entity, layer G, and layer I then evolves on top of H. The resulting agent then comprises G+H+I. The fastest response to the environment is directly through G but more sophisticated behaviour is possible through utilizing H and I. Decomposing a complex organism into layers is of course a gross simplification, but it is hoped it is one that provides some insight into the behaviour of autonomous agents.
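The layering can be caricatured in a few lines of code. This is only a sketch of the general idea, not Brooks’s actual architecture; the layer names follow the figure and the behaviours (reflex withdrawal, a hunger drive, a deliberate plan) are invented.

```python
class Layer:
    """One level of the stack: it may propose an action or defer (None)."""
    def propose(self, perception):
        return None

class G(Layer):  # the original, lowest layer: fast reflex
    def propose(self, perception):
        return "withdraw" if perception.get("pain") else None

class H(Layer):  # evolved on top of G: simple drives
    def propose(self, perception):
        return "seek_food" if perception.get("hungry") else None

class I(Layer):  # the newest layer: deliberation
    def propose(self, perception):
        return perception.get("plan")

def agent_step(perception, layers):
    # Higher layers can subsume (override) lower ones when they have
    # something to say; otherwise the lower layers' fast defaults stand.
    for layer in layers:            # ordered from highest (I) down to lowest (G)
        action = layer.propose(perception)
        if action is not None:
            return action
    return "idle"

print(agent_step({"pain": True}, [I(), H(), G()]))        # -> 'withdraw'
print(agent_step({"plan": "make_tea"}, [I(), H(), G()]))  # -> 'make_tea'
```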

Note: Subsumption was discussed in more detail in the talk ‘Free-Will/Free-Wont’.

Concept 3: The Extension of Mind


Andy Clark (U. Edinburgh)

Andy Clark has proposed the idea of the ‘Extension of Mind’ in which ‘the mind exploits resources wherever they are’, whether inside or outside of the individual. The idea that an individual’s intelligence could be improved further through evolution is of no consolation to that individual. Given their physical limitations, they can (and will) extend their cognitive abilities by exploiting resources in their immediate environment. A commonly-cited example of this is using one’s own fingers to help count, which allows the individual to concentrate on recalling whatever it is they are counting.

This is shown diagrammatically below. Arrows 1 and 2 are still the perception and action of the individual, I, as before. But I can wrap itself around objects in its environment, J, to create a better-thinking agent. There is no reason why this process could not be repeated by adding more objects, K, which may be less immediately accessible than J. I am uneasy with the idea that it is ‘mind’ that is being extended here – ‘mind’ being a rather ambiguous dualist term; I prefer ‘extension of agency’ but that is less attention-grabbing.

Figure 2: ‘Extension of Mind’

This was discussed in more detail in the talk ‘Extension of Mind’.

Concept 4: ‘IA’ – Intelligence Amplification

The psychiatrist and cyberneticist W. Ross Ashby wrote of ‘amplifying intelligence’, and ‘IA’ (intelligence amplification) has subsequently been contrasted favourably with ‘AI’ (artificial intelligence) as a means of extending our intelligence.

Here is my own illustrative example of IA – making a hot drink (ignoring the fact that this is already a Gregorian action: using tools in the environment to heat up water for some advantage to the agent). At the simplest level, using an electric kettle involves:

  1. Putting water into the kettle.
  2. Turning the switch on to start heating.
  3. Turning the switch off when the water is hot enough.
  4. Pouring water out of the kettle.

A typical kettle will perform action 3 of its own accord. It is a fail-safe action, which can be trusted to such a simple contraption, unlike the other three actions. This is as simple an example of external intelligence as I can think of (even simpler than that ‘simplest’ of machines whose consciousness is debated over – the thermostat). We can amplify intelligence further and further, allowing more and more intelligent acts to be performed on our behalf. At the other end of the continuum is a robot which we ask (through natural language) ‘can you make me a cup of tea?’ and which goes off and performs all the actions – presumably because it is trusted to do so, being sufficiently competent.

Such automated responses are performed to allow the agent to devote attention elsewhere. The automation filters out perceptions, only letting them pass through to place demands on the agent in exceptional circumstances (‘we’ve run out of tea bags!’). But even during normal operation, the agent generally has the opportunity to nullify the automated responses after the responses have been made (see ‘Free-Will/Free-Wont’).
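As a toy illustration of handing just one of those decisions over to the machinery (my own sketch; the temperatures, the cut-off threshold and the override are all invented):

```python
class Kettle:
    """A minimal 'intelligence amplifier': the agent delegates action 3
    (switching off when hot enough) and keeps actions 1, 2 and 4 for itself."""
    def __init__(self, cutoff_c=100.0):
        self.cutoff_c = cutoff_c
        self.on = False

    def switch_on(self):                 # action 2: still performed by the agent
        self.on = True

    def update(self, water_temp_c):
        # Action 3: the kettle's own fail-safe decision, made on the agent's behalf.
        if self.on and water_temp_c >= self.cutoff_c:
            self.on = False
            return "switched off"
        return "heating" if self.on else "off"

kettle = Kettle()
kettle.switch_on()
for temp_c in (20.0, 60.0, 100.0):
    status = kettle.update(temp_c)       # the agent's attention is free to go elsewhere

# The agent can still nullify the automation after the fact, e.g. switch the
# kettle back on for a hotter brew: the 'free won't' of the main text.
```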

It is debatable whether we would classify ‘action amplifiers’ (such as a pneumatic drill) or ‘perception amplifiers’ (such as a microscope) as ‘intelligence amplifiers’ in their own right, but they may contribute to IA along with the ‘intelligent’ command/control responses.

Concept 5: The Exocortex

The ‘exocortex’ is a recently-coined term for machinery that can extend our intelligence invasively, unlike the benign, everyday ‘extension of mind’ mechanisms – for example, an electronic device connected to the prefrontal cortex through a ‘brain-computer interface’ such as a neural implant. The concept currently sits more comfortably in the realm of science fiction than science fact.

Concept 6: Alief

Tamar Gendler coined the term ‘alief’, in contrast to ‘belief’, for a belief-like state held at a lower level that can act against an individual’s conscious (higher-level) wants. For example: a paralysing fear of heights can prevent us from moving across a bridge even though we consciously believe it to be structurally safe.

Alief was discussed in more detail in the talk ‘Free-Will/Free-Wont’.

Combining Subsumption with Extension

So far, a number of ideas involving levels/layers of behaviour have been introduced. Now let’s start combining them, starting with the ideas of subsumption and the ‘Extension of Mind’. The figure below shows this by combining the previous diagrams. G, H and I (subsumption) comprise the conventional limits of the agent, and J, K and E are its conventional understanding of its environment (which the agent has extended into). The agent’s ‘Mind’s I’ ‘sees’ a diminishing level of agency the further we expand through H, G, J and K. Of course, the full environment comprises everything, including the agent. We can understand this as the environment similarly encroaching into the agent, with diminishing influence.

Note that extension adds lower levels to the hierarchy, not higher levels. By extending ourselves, we might have wanted to create a higher level, but we have only inserted an extra layer between ourselves and the environment (which may at least serve to insulate us from the environment). The level with which we associate consciousness (level ‘I’) is still the one furthest removed from the environment.

Figure 3: Subsumption and the ‘Extension of Mind’
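Continuing the earlier subsumption sketch (again my own toy, with invented behaviours), extending the agent just appends another layer at the environment end of the stack rather than adding a new top layer above ‘I’:

```python
class J(Layer):  # an adopted piece of the environment, e.g. the automatic kettle
    def propose(self, perception):
        return "switch_off_kettle" if perception.get("water_boiling") else None

# Extension: J joins at the *bottom* (environment) end of the hierarchy, below G.
extended_agent = [I(), H(), G(), J()]
print(agent_step({"water_boiling": True}, extended_agent))  # -> 'switch_off_kettle'
```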

Some more observational notes:

  • Clark’s ‘loops out in the environment’ are represented by the loop from 2 through J to 1.
  • A reflex reaction is represented by the loop from 1 through G to 2.
  • An example of ‘intelligence amplification’ is an outer layer making decisions on our behalf – such as the kettle switching itself off, as represented by the loop from 9 through J to 10.
  • We can extend the concept of alief outside of ‘skin and skull’ to cases where machines acting on our behalf act in a manner contrary to what we want.
  • There is a gradual boundary between agent and environment – in contrast with the Cartesian dualist’s sharp boundary. There is intersecting interaction between the agent and its environment – a bit like a ‘physicalist dualist’ notion of agency! (Of course, the agent itself is entirely within the environment.)
  • Whilst liking Clark’s Extended Mind thesis, I was sceptical of its usefulness and more inclined to see intelligence (and consciousness) as more localized rather than less (i.e. further to the left in the diagram). This gradual transfer from agent to environment is more subtle and more satisfactory.
  • ‘Extending the mind’ creates an extra layer of interaction between the agent and its environment.
  • As already noted, the level with which we associate consciousness is furthest to the left, furthest from the environment, regardless of whether we are ‘extending’ or ‘subsuming’. The Exocortex potentially builds something that subsumes us rather than extends us – and so would sit to the left of ‘I’ (and I leave you to draw your own conclusion).

Criticism of Dennett

In presenting the ‘scientific creature’ in his scale, I think Dennett was too eager to take the step from individual to social, through language. To be fair to him, the chapter in which his scale is introduced is entitled ‘The Role of Language in Intelligence’. But I would criticise his scheme for being too one-dimensional – literally!

I would propose a two-dimensional model, the two axes being what I would call:

  • ‘micro-science’ : the emergent intelligent behaviour of an individual.
  • ‘macro-science’ : the emergent intelligent behaviour of a society of individuals.

Figure 4: Intelligence and Cooperation

This:

  • allows us to have a more refined understanding of individual intelligence, and
  • acknowledges the importance of ‘the social dimension’.

I’m not entirely comfortable with describing this second axis as ‘social intelligence’. For me, intelligence is still an attribute of an individual (the first axis) and a different word is more appropriate for this second axis – ‘cooperation’.

In this two-dimensional space:

  • Traversing up the left-hand side leads to high intelligence of an individual (along the Darwinian – Skinnerian – Popperian – Gregorian – Scientific scale). An agent/’creature’ may be capable of a high level of intelligence but can easily degenerate to a lower level in particular circumstances.
  • The swarming behaviour of many, many individuals would sit in the lower right corner.
  • James Surowiecki’s ‘wisdom of crowds’ behaviour, in which many individuals combine their disparate knowledge in a way that produces results superior to that of a smaller number of experts, would lie near the top-right corner.

In the diagram, I have not shown ‘cooperation’ to be orthogonal to the individual’s intelligence. It is clear that high levels of cooperation within a society can significantly improve the intelligence of an individual within that society. But further discussion of the ‘social dimension’ and its effect on intelligence must be left for another time.

A Grander Scale of Intelligence

Separating out the social aspect allows us to have a more refined scale of individual intelligence. Steps 1 to 4 remain the same but more steps beyond that are now included:

The scale now becomes:

  1.–4. Darwinian, Skinnerian, Popperian and Gregorian creatures: as described under Concept 1 above.
  5. Scientific-I creatures create objects in the external environment – models of the environment – to help them. The (Corinthian) Antikythera mechanism (http://en.wikipedia.org/wiki/Antikythera_mechanism), which was used to predict eclipses, is an example of this. Their advancement over Gregorian creatures is in getting beyond the cognitive limitations of the individual by placing the model in the agent’s near environment.
  6. Scientific-II creatures create objects in the external environment to automatically stimulate the environment. The agent then observes the outcome. This can be considered as ‘let my slave die in my stead’. A rather advanced example: remote-controlled bomb-disposal ‘robot’.
  7. Scientific-III creatures create objects in the external environment to automatically change the model. The model in the environment is automatically adapted. Examples: simulated annealing and genetic algorithms (see the sketch after this list).
  8. Scientific-IV creatures create objects in the external environment that automatically generate new models of the environment, whether through simulation of the environment or actual engagement with the environment. The agent can learn from this automatic generation. At this stage, these objects may be called ‘autonomous’ but there is still a definite ‘chain of command’ (‘responsibility’?) back to the agent.
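As a sketch of the Scientific-III idea – an object in the environment adapting its own model – here is a generic simulated-annealing loop. It is my own illustration, not tied to any particular system; the single model parameter and the error function are invented.

```python
import math
import random

def anneal(error, start, steps=2000, temp=1.0, cooling=0.995):
    """The external object adjusts its own model parameter to reduce error,
    occasionally accepting worse moves so as not to get stuck in a local minimum."""
    current = start
    for _ in range(steps):
        candidate = current + random.gauss(0.0, 0.1)
        delta = error(candidate) - error(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        temp *= cooling
    return current

# Invented example: the model adapts itself towards an 'environmental' value of 3.7,
# while the agent merely sets the process going and inspects the result.
best = anneal(lambda x: (x - 3.7) ** 2, start=0.0)
print(round(best, 2))
```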

We can present the above scale as an evolution towards the following relationship between an agent and its environment:

Figure 5: Agent and Environment

An agent ‘A’ interacts in an environment ‘E’. Enhancements of the agent include:

  • E’: A ‘representation’ of the environment within the agent.
  • A’: An extension of the agent into the environment.
  • E”: A ‘representation’ of the environment within the environment.

One immediately-discernable problem is that the agent’s extension into the environment (A’, with its internal model E”) looks very similar to ‘someone else’ (another agent, A2, with its internal model E2)!

Concept 7: Friston’s ‘Free Energy’


Karl Friston (UCL)

The various concepts so far have been pieced together to form a model of agents comprising a hierarchy of interactive layers. Each concept shed light on a part of the model. Unified, they present a picture of ourselves as intelligent agents acting/reacting to our environment using internal ‘representations’ to predict what the outer environment will do.

See Figure 3 (again), below, showing the hierarchy of layers, and Figure 5 (above) showing the ‘representations’. A more generalised form of this would be one where there were ‘representations’ at every layer. So, as well as the representations already made explicit:

  • A conscious representation of the world our body inhabits, and
  • A model of a part of the outer environment running on a computer of ours in the lab.

there are other representations, e.g.

  • An internal model of the dynamics of our limbs which helps us to move (and gives us our sense of proprioception) – ‘kinaesthetic intelligence’.

At each level, the internal predictions are used, along with the actual sensations from the environment, to guide our behaviour.

This is beginning to look very similar to Karl Friston’s ‘variational free energy’ principle of the brain which ‘minimizes surprise through perception and action’ (mathematically-speaking, ‘surprise’ is the log of the reciprocal of probability – a very high probability attached to some event yields very little surprise when that event actually happens).
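To spell that out (my own gloss, with invented numbers): the surprise, or ‘surprisal’, of an outcome x with probability p(x) is

surprise(x) = log(1/p(x)) = −log p(x)

so a near-certain event (p close to 1) carries almost no surprise, while an outcome with p = 0.01 carries log(100) ≈ 4.6 nats of surprise.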

‘Free Energy’ is a term that originates from thermodynamics, where it is the maximum amount of useful energy that can be extracted from a system (as opposed to the entropy of the system, which represents the ‘non-free’ energy). Applied to the brain, it is equivalent to the prediction error.

The Free Energy theory provides a universal model of how actions are generated from perceptions. For example, a sudden noise to one side might surprise us. We might imagine any number of possible causes for that noise, to which we could attribute probabilities. But we can minimize the total error by directing our sense organs towards the direction the noise came from. This might result in us determining the source of the noise, or at least narrowing down the options. In short: we will shift our eyes or move our head to see what happened. The result will be a reduction in prediction error.

But the theory also potentially links the observed behaviour with our understanding of what the neurons in our brains are actually doing. Hence the theory is noteworthy in that it provides a single conceptual framework to bring disparate fact-finding enterprises within neuroscience together under a single umbrella, to cement the discipline together. It may not be the right theory but it could be a significant step towards such a theory and shows, by example, what an overarching theory might look like.

Figure 3 again!: This time… A hierarchy of layers minimizing surprise

In the figure above, the ‘outbound’ arrows (6, 4, 2, 10, 8) represent predictions effected onto the outer environment and the ‘inbound’ arrows (7, 9, 1, 3, 5) represent prediction errors. At each level, the prediction error is used to refine the internal model for better predictions in the future, but it is also used more immediately to generate a new action. The layers are therefore like a hierarchy of (not one but many) Popperian creatures.
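A minimal sketch of what such a hierarchy might look like computationally is given below. This is my own toy, not Friston’s actual scheme: each layer holds a single scalar prediction, the learning rate is invented, and action is omitted entirely – only the perception/prediction-error side is shown.

```python
class PredictiveLayer:
    """Each layer predicts the signal arriving from below, passes the
    prediction error upwards, and nudges its prediction to reduce that error."""
    def __init__(self, learning_rate=0.1):
        self.prediction = 0.0
        self.learning_rate = learning_rate

    def step(self, signal_from_below):
        error = signal_from_below - self.prediction      # 'inbound' prediction error
        self.prediction += self.learning_rate * error    # refine the internal model
        return error                                     # what the layer above receives

# Layers G (closest to the environment), then H, then I (furthest from it).
layers = [PredictiveLayer() for _ in range(3)]

for sensation in [1.0, 1.0, 1.0, 4.0, 4.0, 4.0]:  # invented sensory data
    signal = sensation
    for layer in layers:                           # errors propagate inwards
        signal = layer.step(signal)

# After the surprising jump from 1.0 to 4.0, the lowest layer's prediction drifts
# towards the new value and the errors passed upwards (the residual 'surprise') shrink.
```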

A Note about Layers

I’ve represented a person as just three distinct levels. This is purely arbitrary and a gross oversimplification, just a first step to get across the ideas of hierarchy and of perception/action (or prediction/error) interaction. In reality, things are far messier, with interactions going all over the place.

Notwithstanding this, the 3 layers could be mapped to:

  • I = conscious intelligence
  • H = emotional intelligence
  • G = kinaesthetic intelligence.

…which could be present in varying degrees within a single organism.

A Note on Internal Models and External Models

When we talk about ‘internal models’ and ‘representation’, we must be careful what we mean. Dennett makes this point very effectively:

“The inner environment, whatever it is, must contain lots of information about the outer environment”

but

“we must be very careful not to think of this inner environment as simply a replica of the outer world.”

(See also ‘Taxonomy without Representation’)

In contrast, external models that we create, designed by us, can correspond to the outer world much more closely.

 

Further Reading

Some new links for further reading (not referenced elsewhere on this website):


Responses to Scientific Creatures

  1. Very interesting post! You cite an eclectic range of sources and ideas. Your section ‘A Grander Scale of Intelligence’ reminds me of my blog post http://alanwinfield.blogspot.co.uk/2007/04/walterian-creatures.html in which I proposed another level beyond scientific creatures – which I call Walterian, after W Grey Walter.

    I’m sceptical of a theory of intelligence that relies on Dennett’s Tower of Generate and Test, since on that scale the vast majority of animals are ‘Darwinian Creatures’, and I think a Unified Theory of Intelligence should be able to account for the difference in intelligence between say, an e-coli and a cockroach, or – for that matter – a sea slug and an oak tree. All of these organisms would be off the bottom of the scale in the kind of theory that (I think) you’re proposing.

    • headbirths says:

      Thanks for your link to your Walterian creatures piece, Alan. I liked the term ‘left the toolbox’ but less liked the idea of it. The name ‘Kurzweilian’ came to mind rather than Walterian – which will probably annoy you! I like the idea of Intelligence Amplification and your robotics ethics – with a clear chain of responsibility back to us, keeping technology on a chain. (It’s a timely existential fear – today’s article about CSER.org: http://www.bbc.co.uk/news/technology-20501091).

      I fully accept your comment about neglecting the e-coli and sea slugs at the bottom end of the scale. I suppose I am interested in the humans-with-technology top end whereas you are more interested at that bottom end – where robots are currently.

      My current interest in Dennett’s scale is how it seems to be saying something similar to Friston’s Free Energy – how it might be describing a subset of a much grander theory.
      Generally, I’m fitting together things I’ve looked at previously (e.g. the Extended Mind thesis) with more promising areas, where I’m now heading. Whereas Dennett’s tower is purely behaviourist, Friston’s theory maps onto the actual physics (i.e. neuronal structure) of the brain. Just how good a theory Friston’s is I can’t possibly comment. But maybe Friston’s theory will have something useful to say about how we can define intelligence. It might help us quantify just how much of the scale there actually is at the bottom end of the scale. I have some wildly speculative thoughts on this but nothing more so far (and precious little knowledge of current literature).

