In many of the posts on this blogsite, I have been advocating a physicalist worldview in which an agent interacts with its environment. This includes many references to Karl Friston’s ‘variational free energy’ theory of how the brain works (with similar schemes elsewhere, such as the ‘Bayesian Brain’ and the ‘Predictive Brain’). It involves breaking our concept of the brain into smaller processes that form a ‘hierarchy of predictors’, and it is an improvement over the ‘dual-process’ model, which has just two levels (a fast subconscious one and a slow conscious one) and provides no explanation of how, when and why one level will over-ride the other.
The bubble diagram representing a hierarchy of predictors, left, has made a number of appearances on this blogsite, its first being in ‘Hierarchical Message Passing’ which gave a basic explanation.
Here, I’ll look at it again more thoroughly.
Pseudo Closed Loop Feedback
I’ll start by revisiting ‘pseudo-closed-loop feedback’ on the sense of proprioception. I likened this to the problem of controlling a JCB digger. An inexperienced operator will have no sense of which lever, pushed or pulled in which direction and to what extent, will move the digger’s arm to where they want it to be. Only by looking at the arm (and possibly at the levers too) can movements be made, and these will be slow, hesitant and jerky. As experience is gained, a memory is built up of how to get the arm most of the way to where we want it to be, relying on sight only for the fine positioning.
So it is with our own internal sense of how to move our body to where we want to be – without even looking, for the most part. The case of Ian Waterman is an example of how important this learnt memory is. He lost this memory (following a virus), so that even the most basic of tasks takes an extraordinary effort.
The original Pseudo Closed Loop Feedback diagram was:
But redrawing this in a modified, more abstract form:
The mapping from the old to the new diagrams is as follows:
- 1 represents the actions of the higher-level ‘agent’ A.
- The ‘agent’ (‘A’) was formerly called a ‘mini-me’, i.e. something still me (‘I’) but minus the lowest layer (‘B’).
- 4 and 5 represent ‘actions’.
- 6, 7 and 8 represent the environment, E.
- 9 and 10 represent ‘senses’.
- 14 represents the ‘prediction error’ which becomes the sense input for the higher-level ‘agent’ A.
- 11 and 12 represent the ‘model’, E’.

The model contains the memory built up of what has happened before, and it is used to predict what is likely to happen in the future if the agent finds itself in similar circumstances. The model is a coarse replica of the environment – a representation. But as Rodney Brooks says, we can manage without representation since the environment is the best representation of itself! (‘the world is its own model’.)
That is true. But the coarse internal model can be faster. Employing both is best – ‘acting fast and slow’:
- Fast: initially and for much of the time, the system works in an open-loop manner – the memory within the model is used to predict what is going on in the environment without having to look (act and respond). But it operates with the near-accuracy of a closed-loop system (looping from A to E and back to A) – because of the model. It is like a closed-loop system, but it is not; it is ‘pseudo-closed-loop’.
- Slow: later, the system works in a closed-loop manner, which (somewhat perversely) can be seen as ‘pseudo-closed-loop’ using the world as its own model!
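The fast/slow pairing can be sketched in code. This is a minimal sketch, assuming a one-dimensional arm-positioning task; the names (`Model`, `move_arm`, the gains) are my own illustrative choices, not taken from the diagrams:

```python
class Model:
    """Coarse internal replica of the environment E' - the learnt memory."""
    def __init__(self, gain_estimate):
        # learnt estimate of how far each unit of action moves the arm
        self.gain_estimate = gain_estimate

    def predict(self, position, action):
        # 'if I do this... then this may happen'
        return position + self.gain_estimate * action


def move_arm(target, true_gain=1.0, learnt_gain=0.95, tolerance=0.01):
    """Fast phase: act open-loop on the model's prediction alone.
    Slow phase: look at the environment and correct the residual error."""
    model = Model(learnt_gain)
    position = 0.0

    # Fast (open-loop): one ballistic action computed from the model alone.
    action = (target - position) / model.gain_estimate
    predicted = model.predict(position, action)
    position += true_gain * action      # what actually happens in E

    # Prediction error: small if the model behaved well.
    error = position - predicted

    # Slow (closed-loop): fine positioning by looking, step by step.
    steps = 0
    while abs(target - position) > tolerance:
        position += 0.5 * (target - position)   # sight-guided correction
        steps += 1
    return position, error, steps
```

With a well-learnt model the ballistic action lands near the target and only a few sight-guided steps remain; with a perfect model (`learnt_gain` equal to `true_gain`) no looking is needed at all.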
If the prediction error is small then the model has behaved well, so the stimulus to the higher-level agent is reduced: the model does not demand that agent’s attention. Bubble FB reduces the degree to which that agent can act. The feedback loop from the environment through FB and back to the environment is a fast reflex reaction.
If the model does nothing, the diagram degenerates to plain closed-loop feedback, which corresponds to new situations – like first operating a JCB digger. If the model is absent, all action is slowly directed by the higher-level agent – as it is with Ian Waterman.
Imagination, Hypothesis, Deliberation
Rotating the diagram above so that the environment E is at the bottom, and splitting the model of the environment E’ (used for prediction) out from the lower-level agent B, we get the diagram to the right.
However, a subtle change has been made: the action for the model can now be different from the action to the environment:
- The arrow from FB to E still represents an action: ‘I do this…’, but
- The arrow from FB to E’ represents a hypothesis: ‘if I do this…’ and the arrow from E’ to PE represents a prediction: ‘…then this may happen’.
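The split between ‘I do this…’ and ‘if I do this…’ can be sketched as deliberation over the model. A minimal, illustrative sketch – the function names, state representation and candidate actions are all my own assumptions:

```python
def predict(model_state, action):
    """E': returns the predicted next state without touching E.
    Here the model is just additive, purely for illustration."""
    return model_state + action

def deliberate(state, goal, candidate_actions):
    """Send each candidate to the model as a hypothesis
    ('if I do this...'), compare the predictions
    ('...then this may happen'), and commit only the winner to E."""
    return min(candidate_actions,
               key=lambda a: abs(goal - predict(state, a)))

chosen = deliberate(state=0, goal=7, candidate_actions=[-2, 3, 5, 8])
# only now: 'I do this...' - 'chosen' is acted out in the real environment
```

The point is that every candidate except `chosen` was tried only against E’, never against E: imagination costs nothing in the world.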
In the diagrams above, B represents a tiny portion of the brain and A represents everything else. But if A is replaced by an agent comprising both A and B parts and this is done a number of times, we end up with the frequently-shown diagram of a ‘hierarchy of predictors’.
Every time an extra level is inserted, the A process is pushed back, being required to do less and less. A is like a homunculus (marked as ‘mini-me’ in the original pseudo-closed-loop feedback diagram): its workings are mystical but, by inserting more and more levels under it, there is less and less for it to do – until it does nothing. However, a problem remains with the ‘hierarchy of predictors’ as to what happens at the top of the hierarchy. (Perhaps Dennett’s concept of the homunculus as a black box should be called a ‘homunculess’!)
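One way to picture the levels being ‘required to do less and less’ is a stack of simple predictors, each passing only its prediction error upward. This is a minimal sketch under the assumption that each level holds a running-average prediction of its input; everything here is illustrative:

```python
class Level:
    """One predictor in the hierarchy."""
    def __init__(self):
        self.prediction = 0.0

    def step(self, sense, rate=0.5):
        error = sense - self.prediction   # prediction error
        self.prediction += rate * error   # update the internal model
        return error                      # becomes the sense input above

def run_hierarchy(levels, sense):
    """Feed the sense in at the bottom; each level passes its error up.
    When lower levels predict well, little reaches the top."""
    signal = sense
    for level in levels:
        signal = level.step(signal)
    return signal                         # residual error at the top

levels = [Level(), Level(), Level()]
for _ in range(20):
    top_error = run_hierarchy(levels, sense=1.0)
# after repeated exposure the bottom level predicts the input well,
# so almost nothing demands the attention of the higher levels
```

In a familiar, unchanging world the top of this stack receives almost no stimulus – which is exactly the sense in which the homunculus at the top is left with nothing to do.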
The ‘hierarchy of predictors’ model of the brain is a grossly simplified model, but it is at least an improvement over the dual-process theory:
- It generalizes to more than just two levels, and there is some correspondence between the interconnectedness of the processes and the physical connectedness of the brain.
- It includes the concept of a deliberating process using an internal model to predict consequences in order to choose best actions.
- It provides some understanding of why lower-level processes are faster than higher-level ones.
- It provides some understanding of how the resulting action of the agent depends on that of its constituent processes.