This is the second part of the talk ‘Free Will / Free Won’t’. Notes in the text in the form [ABC] can be found by following the above link.
Daniel Wegner’s ‘Conscious Will’ has been presented as a physicalist account of Free Will. It explains the experience of willing, but not how we actually make decisions (how we choose).
5. Free Will = Free + Will
Many people have offered accounts of how we make choices – an account of ‘empirical will’, that causes action, rather than the ‘phenomenal will’ merely experienced [WEG]. Many accounts, including those of William James and Karl Popper, can be described as two-stage models, in which the ‘free’ and ‘will’ parts are separated:
- The first stage generates multiple ‘possible actions’ that are free. Proponents of these models often rely on indeterminacy, particularly quantum-mechanical indeterminacy, for this freedom.
- The second stage simply selects the best one – and there is no problem with this being deterministic (mechanically, the winner is the one with the lowest cost, or equivalently the highest score).
This ‘two-stage model of free will’ is shown in Figure 1. (The squiggles on the left represent an eye; the squiggle on the right is more obviously a hand. Senses such as sight are our inputs; motor functions like hand movement are our outputs.)
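The two-stage structure can be sketched in a few lines of code. This is a toy illustration of the idea, not anyone’s actual model; the cost function and the use of random numbers as stand-ins for actions are my own assumptions:

```python
import random

def generate_options(n=3):
    """Stage 1 ('free'): propose candidate actions.
    Random numbers stand in for possible actions 'A', 'B', 'C'."""
    return [random.uniform(0, 1) for _ in range(n)]

def select(options, cost):
    """Stage 2 ('will'): deterministically pick the lowest-cost candidate.
    Given the same options, it always makes the same choice."""
    return min(options, key=cost)

# Toy cost function (an arbitrary assumption): prefer actions near 0.5.
cost = lambda a: abs(a - 0.5)

options = generate_options()
choice = select(options, cost)
```

Note that all the indeterminacy lives in `generate_options`; swapping `random.uniform` for any deterministic but situation-dependent generator leaves the second stage untouched – which is exactly the point made below.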
This is a reasonable starting model – apart from any insistence on indeterminacy in the creation of the ‘free’ choices.
If we re-draw the boundary of ourselves to externalise the source of randomness, the new ‘I’ becomes deterministic and enslaved to an ‘irrational’ decision-making source. This is virtually the same as making all our decisions by the roll of a die. Humorous aside: watch Sheldon Cooper doing exactly this in “The Big Bang Theory” (from 28 seconds in):
This indeterminacy does not help us assign responsibility to agents. Freedom does not need indeterminacy. If we could roll back time to exactly the same situation, then, in a deterministic world, we would do exactly the same thing again. But if things were ever-so-slightly different, different courses of action would result. We do not need indeterminacy for the beating of a butterfly’s wings in Brazil to affect whether there is a hurricane over Texas soon afterwards [BRA]. Quantum mechanics adds to the unpredictability but isn’t a ‘magic ingredient’.
The other way of letting indeterminacy in is to allow a matter-independent ‘mind’ to help generate the options ‘A’, ‘B’ or ‘C’. That is – our thoughts somehow affect (cause) the physical in contrast to a physicalist stance of the physical causing ‘mind’. But Benjamin Libet’s controversial experiments of the early 1980s suggest the latter causal relationship holds, rather than the former [PHY].
6. Benjamin Libet’s Half-Second
A basic overview of Libet’s experiment is as follows:
- The subject sits in front of a clock which has a single hand that makes one complete revolution about every 2 seconds (i.e. the hand travels fast).
- The subject is told to press a button at the time of their own choosing. They are not to pre-decide, for example ‘I will press it when it’s 3 on the clock’, but just notice and report back what the clock says at the time of pressing. An electromyograph (EMG) detects muscle movement when the button is pressed [BUT].
- An electroencephalograph (EEG) over their head monitors brain activity.
- The clock, EMG and EEG results are all synchronized so that it is possible to tie the timing of the various events together.
- It is found that there is a 500ms delay from the start of EEG activity [REA] to the EMG triggering, but only a 200ms delay from the conscious report of the time on the clock to the EMG triggering. This implies that there is a 300ms delay from the physical evidence (EEG) to the experiential evidence.
So there is a clear time order here: first brain activity, then conscious awareness [LIB]. The experiment remains controversial and the method has been much discussed [DEN]. The main objection: just because brain activity starts 300ms beforehand doesn’t mean that the decision is made then. The experiment has been repeated many times, with modifications to counter objections (for example, the choice is between pressing button ‘A’ or button ‘B’). Fairly recently (2007), John-Dylan Haynes [HAY] repeated the experiment using a Functional MRI (fMRI) scanner rather than EEG, showing large-area brain activity prior to the conscious reporting [MRI].
Regarding this ordering of mental and physical events, a determinist would respond ‘what else would you expect? It’s only a problem for libertarians and dualists’.
But a particular point I’m wanting to emphasize here is just how slowly things happen in the brain. As a point of reference, you can fit a billion clock cycles of a 2GHz Pentium processor into Libet’s half-second! The role of time is an important theme of this talk.
7. Free Won’t
Libet’s original position on this was that most of the 200ms duration from consciousness to the muscle action (omitting the last 50ms to get from the brain to the finger) is available to veto what has been unconsciously decided in the 300ms beforehand. Libet called this the ‘power of veto’. Scott Kim, a columnist for Scientific American at the time, described it instead as ‘free won’t’ [KIM] and this term has subsequently been used by others [HOF]. Our unconscious selves initiate actions which we can consciously decide to abort. See Figure 2. This divides our brain into a (fast) unconscious ‘UNC’ and a (slower) conscious ‘CON’ that can issue the ‘power of veto’.
How does this compare with the ‘Free Will’ model of Figure 1?
- The Free Will model does not include any notion of time.
- The Free Won’t model is an enhancement of the Free Will model. Figure 1 can be viewed as a model of a so-called ‘winner-takes-all’ neural network. As such, it can be considered as a crude model of the mechanisms that go into both the ‘UNC’ and ‘CON’ ‘clouds’ of neurons, rather more crudely for ‘CON’ than for ‘UNC’.
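As a sketch of what ‘winner-takes-all’ means mechanically (a toy illustration, not a model of real neurons – the inhibition strength and step count are arbitrary assumptions): each unit is excited by its own input and inhibited by the activity of all the others, and after a few iterations only the most strongly driven unit remains active.

```python
def winner_takes_all(inputs, inhibition=0.2, steps=50):
    """Toy winner-takes-all dynamics: each unit accumulates its own input
    while being inhibited by the total activity of the other units.
    Weakly driven units are clamped to zero; the strongest survives."""
    acts = list(inputs)
    for _ in range(steps):
        total = sum(acts)
        acts = [max(0.0, a + inp - inhibition * (total - a))
                for a, inp in zip(acts, inputs)]
    return acts.index(max(acts))  # index of the surviving unit

winner_takes_all([0.1, 0.5, 0.3])  # selects option 'B' (index 1)
```

The selection emerges from the mutual inhibition rather than from any explicit comparison – which is what makes it a plausible crude model for a cloud of neurons.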
8. Good and Bad Homunculi
The ‘homunculus argument’ fallacy is where we try to account for what is going on in our heads by ultimately resorting to a homunculus – a ‘little man in our head’. Presumably, our explanation of the behaviour of that homunculus would resort to there being a homunculus inside its head, leading to an infinite regress of Russian-doll-like homunculi. The ‘homunculus argument’ is no argument at all. Except that, as Daniel Dennett points out, if at each stage the homunculus is doing slightly less, then we are making progress towards providing a physicalist account.
The ‘Free Won’t’ model feels a bit like this. A single cloud of neurons with a homunculus lurking inside has been replaced by a purely ‘mechanical’ cloud (‘UNC’) and a cloud (‘CON’) with a homunculus that now only has the power of veto. This ‘Free Won’t’ model is a halfway house: it has introduced the concept of Free Won’t and is going in the right direction, but we need to go further.
9. Rodney Brooks
The basic idea of ‘free won’t’ looks very similar to Rodney Brooks’s concept of the ‘subsumption architecture’, used in robotics. Rodney Brooks is an Australian-born, now-retired professor of robotics at MIT. His approach is a reaction to ‘Good Old-Fashioned Artificial Intelligence’ (GOFAI). Examples of GOFAI include the classic ‘Eliza’ program that fooled unsuspecting students in the mid-1960s and, very recently, IBM’s ‘Watson’ and Apple’s ‘Siri’. His criticism of GOFAI is that it merely mimics intelligence rather than creating it. Brooks looks to biology for inspiration. His basic argument is that animals interact with their environment rather than reason with symbols. He looks at the problem from the opposite direction to GOFAI: building upwards from very little (‘bottom-up’) and ending up with something seemingly intelligent. His method is consistent with evolution (‘naturally!’).
How can intelligent behaviour emerge from simple behaviour? I will provide an example from a recent paper in ethology, concerning how wolves hunt in packs. Our prior assumption is that wolves, social creatures that they are, communicate with one another in their organized pursuit of their prey. We imagine it as if we were hunting. Imagine a bunch of 21st-Century office workers, out hunting for the first time in centuries. It would probably involve lots of gesticulating to tell each other where to go, without alerting the prey. But the paper explains the wolves’ behaviour in terms of two basic rules, in which each wolf independently:
- moves towards the prey until it is a particular minimum distance away;
- then moves away from the other wolves whilst at this minimum distance from the prey.
This results in the prey being encircled. Whether or not this is actually how wolves hunt is irrelevant here. The point is that we can explain complex (seemingly intelligent) behaviour in simple terms, building up layers of simple behaviour, rather than resorting to complex explanations.
Rodney Brooks provides many examples of his layered ‘subsumption architecture’ [SUB] approach. One practical example is a commercial robot vacuum cleaner [ROO]. Its three layers of behaviour are (from the bottom, up):
- Avoid obstacles: don’t get too close.
- Explore the environment (randomly).
- Look for things.
The lowest levels are the simplest and hence fastest to respond. Higher levels are increasingly slower to respond. This example is a successful, practical solution: it is possible to achieve a lot of intelligent behaviour with not very much.
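One way to sketch this priority structure in code – a loose illustration of the idea, not Brooks’s actual architecture; the layer names and sensor fields are my own assumptions – is to have each layer either issue a command or defer, with the lowest, most reflexive layer that fires winning:

```python
# Hypothetical three-layer controller in the spirit of the vacuum cleaner
# above. Each layer returns a command or defers (None); layers are checked
# from the bottom (fastest, most reflexive) upwards.

def avoid_obstacles(sensors):
    if sensors["obstacle_distance"] < 0.2:   # too close: reflex turn
        return "turn_away"
    return None

def explore(sensors):
    if not sensors["target_visible"]:
        return "wander_randomly"
    return None

def seek(sensors):
    return "approach_target"                 # top layer always has an answer

LAYERS = [avoid_obstacles, explore, seek]    # bottom layer first

def control(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

control({"obstacle_distance": 0.1, "target_visible": True})  # "turn_away"
```

Each layer works with no knowledge of the layers above it, which is what lets such systems be built up incrementally, layer by layer.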
In contrast with this is traditional, top-down AI, which builds up an internal representation of the robot’s environment. That internal representation is broken down into detected objects, and then some understanding of the relationships between those objects may be deduced.
Brooks is anti-representational. An often-quoted phrase of his is ‘the world is its own best model’. It is much faster to consult the actual environment than a representation of it, and this has the further advantage that, unlike an internal representation, the world is never out of date. This allows his lower levels to respond with a fast reflex reaction which higher levels can then override to provide more intelligent behaviour.
This sounds a lot like the ‘free won’t’ described earlier, but with more levels. And as it is robots we’re talking about here rather than us, there is no potential trap of accidentally inferring any homunculus at the top level.
Brooks’s thinking is inspired by biology. Consider a biological example: antelopes on the savannah. There is strong evolutionary pressure to select those with the fastest reaction times for evading predators. There is normally a cost function at work, balancing ‘false alarms’ against ‘slow detects’, but here it is better to err on the side of false alarms. There is plenty of time for higher-level reasoning whilst running away from the (potential) lion: it is better to look silly in front of your antelope peers than to be someone else’s meal. This again shows the important role of time – there is plenty of time for the antelope to cancel its running-away action.
Daniel Dennett makes a distinction between ‘ballistic’ and ‘guided’ actions. Example: If I shoot you, events are determined from the moment the bullet leaves the gun. But if I fire a guided missile at you, I can choose to cancel my initial (perhaps reflex) action of launching the missile by steering it away from the initial target. There is a time window of ‘free won’t’ between the low-level reflex up to the point where the action can no longer be cancelled. [BAL] [TEN]
This ‘multiple-level’ free won’t is shown in Figure 3 – with levels 1 up to 5 having been built up by a ‘free-wontification’ of the previous levels. Assigning arbitrary names to the levels: level 1 is ‘reflex’ and levels 2 and 3 are other unconscious levels (‘impulse’ and ‘instinct’?). The top levels are at the level of consciousness (however that arises!). Of course, there is a huge simplification here: the brain is an evolved network of 100 billion neurons and isn’t going to be ordered into neat layers. The layers here are just to help in the explanation of the concept; I’m just arguing that it is a better concept than the crude 2-layer approximation.
10. Stanley Milgram on the Subway
Libet’s notion of the ‘power of veto’ involved the higher level vetoing the action within the 150ms window of opportunity before the lower level action was realised. My take on ‘free won’t’, above, allows the higher level much more time to cancel the action, possibly long after the body has initiated the action.
Do we ever find ourselves doing things before we consciously realise that we don’t actually want to? That is, do we ever fail to veto an action before it is performed? I think so. Here are two examples which reminded me of this – one for entertainment value, the other more serious.
Comedian Michael McIntyre (29 seconds into the Youtube video, above):
“Sometimes you’re on the phone and people will go ‘Have you got a pen?’ to take down their number and you’ll immediately say ‘yes’ even though you don’t have a pen!
“Sometimes you never find a pen and you pretend to take down the number and you never even get the number ‘cause you feel too embarrassed to say ‘I lied about the pen!’”
A more serious example: in his recent TV series ‘The Brain: A Secret History’ [MOS], Michael Mosley tried recreating Stanley Milgram’s New York subway experiment. Milgram asked his students to go down into the subway and ask people ‘Can I have your seat?’. A surprising number of people complied.
In both of these examples, the person responding has said ‘yes’ perhaps before they have really realised what’s going on. Only afterwards might they have thought ‘why the hell did I say that?’ At a lower level, the requests are so innocuous and asked so reasonably, a learnt instinctive reaction would be to say ‘yes’. There might also be an instinctive ‘flight rather than fight’ reaction to comply. Interestingly, on the subway experiment, the more justification that was offered, the less likely people were to stand up [MOS] – the longer you are delayed in responding, the more time a higher level can kick in to veto the instinct. This is an alternative explanation to one invoking higher-level social-pressure explanations.
11. Afterword: An Alternative Feedback Model
Previous diagrams all show information signals going from left to right, from the eye (input) to the hand (output), with the higher levels providing alternative (slower) paths than the lowest, instinctive one. As I say elsewhere, these diagrams are gross simplifications; in reality there is feedback as well as feedforward in the billions of neuronal connections. Indeed, feedback appears to dominate: following David Marr’s layered view of cognition (low-level up to high-level), and David Hubel and Torsten Wiesel’s investigations of the physiology of the visual cortex, there are more connections running from the ‘higher’ levels down towards the lower levels (towards the retina) than there are going up to the higher levels [NOE], and we would expect these descending connections to be inhibitory. An alternative diagram is shown below, in which the ‘FB’ feedback function is drawn larger than the ‘FW’ forward path. It is just another way of representing the same basic principle: a fast path from input to output for the quickest possible response time, with the extra complexity growing on the (inhibitory) feedback path.
12. Afterword: McGilchrist’s Master and Emissary
[This is the end of Part II. Follow the link at the top of this blog entry for the next part.]