Alan Turing’s seminal 1950 paper ‘Computing Machinery and Intelligence’ spawned what has subsequently become known as the ‘Turing Test’ (see also the Wikipedia page). The paper begins by asking ‘can machines think?’
How do we make progress towards answering this question in the affirmative? There are two ways:
- The obvious way is through technological progress. If we feel there has been little progress in the last sixty years it is because we underestimated the task in hand – by a few orders of magnitude. The Loebner prize may not have been awarded yet but we do have some promising practical examples emerging such as IBM’s Watson and Apple’s Siri.
- An alternative approach is that we can redefine what it means to think. I do not mean this second approach as a joke. We should ask ourselves what was Turing actually trying to get at in asking the question in the first place?
Turing reframes the original question to one of a game:
to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
The new form of the problem can be described in terms of a game which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator … The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman.
…but the twist is that a computer takes the place of one of the participants.
(Note: in the following quotes from the paper, the bold emphasis is mine.)
…we only permit digital computers to take part in our game. …we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well. … Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?
Turing acknowledges that the problem is framed such that actually passing the imitation game puts the answer beyond doubt.
The game may perhaps be criticised on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.
Turing then sets out his prediction:
I believe that in about fifty years’ time it will be possible to programme computers … to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
The emphasis above is mine but the words are Turing’s: ‘the use of words … will have altered so much that one will … speak of machines thinking’. In talking only of ‘imaginable computers’, with the judgement being made by an average interrogator after only five minutes, and with the onus of proof on the side of the computer, Turing is making an intellectual argument rather than laying down a technological challenge. As far as answering his question goes, the Loebner prize is irrelevant.
By the year 2000, average people were speaking of machines thinking without being contradicted. Just type ‘my computer thinks’ into Google and it will automatically offer completions to the question such as:
- ‘My computer thinks my iPhone is a camera.’
- ‘My computer thinks I’m in a different country.’
- ‘My computer thinks I have two monitors.’
The test was passed – maybe as long ago as the 1960s, with Joseph Weizenbaum’s ELIZA program.
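ELIZA achieved its effect not through any understanding but through simple keyword matching and pronoun reflection. A minimal sketch of the technique in Python (the rules and responses here are illustrative inventions, not Weizenbaum’s actual DOCTOR script, which had a far larger ranked rule set):

```python
import re

# A few illustrative ELIZA-style rules: keyword pattern -> response template.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

# Swap first/second person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "you": "I", "your": "my", "am": "are"}

def reflect(text):
    """Reflect pronouns word by word: 'my computer' -> 'your computer'."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(sentence):
    """Return a canned response for the first matching rule."""
    for pattern, template in RULES:
        m = pattern.search(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # fallback when no keyword matches

print(respond("I am worried about my computer"))
# -> How long have you been worried about your computer?
```

The trick is that the illusion of attention comes entirely from turning the user’s own words back on them; no representation of meaning exists anywhere in the program.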
The Codebreaker – Alan Turing’s life and legacy exhibition at the Science Museum commemorates 100 years since the birth of Alan Turing (it runs through to July 2013) and is a stark reminder of 1940s (pre-transistor) technology. The exhibits’ particular charm lies in their being (predominantly) mechanical and hence non-magical – their workings have no mystery but the sum of the parts is wonderful.
In asking what Turing was getting at in his 1950 paper, we need to look at the ideas of the 1940s rather than the technology.
In substituting a person with a computer, on the end of a teletype, Turing is taking a behaviourist stance. If the computer behaves in a way indistinguishable from a person then it must also be attributed the observable qualities of that person. This is in contrast with the mind-body dualism prominent in 1950, in which there are two types of ‘substance’ that exist – mind and matter. Matter is physical, located in space. Mind is non-physical, not located in space. Regardless of the precise ideologies of philosophers such as Descartes, the commonplace Western view has been a traditional Christian one in which the mind is the seat of both consciousness and intelligence. Not being located in the physical world, they can survive mortality. Thus the word ‘thinking’ combined the two notions, inseparably. The material world, in contrast, is a cold mechanical one. No amount of machine-building in the material world could ever hope to create something that can only exist in a different realm.
Gilbert Ryle was editor of ‘Mind’ when Turing’s paper was published in it in 1950. Ryle’s attack on dualism in ‘The Concept of Mind’ had only appeared a year earlier, in 1949, and Ryle would presumably have been very supportive of Turing’s paper.
The fundamental problem with dualism lies in the interaction between the two realms of mind and matter.
Ideas have changed over the last 50 years and, nowadays, the dominant philosophical position among academic philosophers is physicalism (also known as materialism) – the doctrine that there is only one essential substance – matter. A study of ‘mind’ is no longer condemned to be a ‘black box’ study of the inaccessible (we can see activity going on inside our heads, through an MRI scanner for example, and make correlations between that and our reported sensations). The fundamental problem is now the so-called ‘hard problem of consciousness’ – how does consciousness arise from matter? We have ‘advanced’ largely by abandoning the nebulous notion of ‘mind’ and replacing it with the nebulous term ‘consciousness’. The progress that has been made is that intelligence has been accepted into the realm of the physical. So there is no problem in machines being intelligent.
And it is clear that Turing is referring to intelligence rather than consciousness in asking the question ‘can machines think?’. If the title of the paper ‘Computing Machinery and Intelligence’ had not already made it obvious, he writes:
Professor Jefferson’s Lister Oration for 1949, from which I quote. “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain – that is, not only write it but know that it had written it.” … It is in fact the solipsist point of view.
I do not wish to give the impression that I think there is no mystery about consciousness. … I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned.
And Turing takes a good physicalist approach to the ‘problem of mind’. For example, below is an argument that predates, and is virtually identical to, Dennett’s ‘homunculus argument’:
The ‘skin-of-an-onion’ analogy is also helpful. In considering the functions of the mind or the brain we find certain operations which we can explain in purely mechanical terms. This we say does not correspond to the real mind: it is a sort of skin which we must strip off if we are to find the real mind. But then in what remains we find a further skin to be stripped off, and so on. Proceeding in this way do we ever come to the ‘real’ mind, or do we eventually come to the skin which has nothing in it? In the latter case the whole mind is mechanical.
The key point I want to make here: there is a danger that, in say fifty years’ time, people will look back at current discussions and be incredulous:
‘How could people think that computers didn’t think?! It is even more incredible than the idea that people used to think the Earth was flat! At least the Earth looks flat when you are on the ground. But computers obviously think – that’s what they do – they’re intelligent!’
The everyday use of words might by then have changed such that ‘thinking’ is virtually synonymous with being intelligent, and completely detached from being conscious.
Picture credits: Science Museum
Thanks to Alan Winfield for his blog posting which persuaded me to read Turing’s paper in the first place.