
Those mental processes that we don’t understand June 23, 2011

Posted by Ezra Resnick in Computer science.

Today is the 99th anniversary of the birth of Alan Turing, father of computer science and artificial intelligence. Turing also led the British code-breaking effort during World War II, which succeeded in deciphering German communications, contributing greatly to the Allied victory.

In a 1951 lecture entitled “Intelligent Machinery, A Heretical Theory,” Turing argued that “machines can be constructed which will simulate the behaviour of the human mind very closely.” At the end of the lecture, Turing speculated about the day when building such computers would be practical:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. To do so would of course meet with great opposition, unless we have advanced greatly in religious toleration from the days of Galileo. There would be great opposition from the intellectuals who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do, trying to understand what the machines were trying to say, i.e. in trying to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control…

But there are those who object in principle to the idea that a machine could ever be truly intelligent, could ever be said to think. In a January 1952 BBC broadcast, Turing explained why trying to define “thinking” in this context is pointless and unnecessary:

I don’t really see that we need to agree on a definition [of thinking] at all. The important thing is to try to draw a line between the properties of a brain, or of a man, that we want to discuss, and those that we don’t. To take an extreme case, we are not interested in the fact that the brain has the consistency of cold porridge. We don’t want to say “This machine’s quite hard, so it isn’t a brain, and so it can’t think.”

Asking whether a computer could ever really think is like asking whether a jet plane can really fly: if you include “flapping wings” in your definition of flying, then the answer is no — a jet plane doesn’t fly using the same methods as a bird — but so what? What matters are competences. If a computer could perform all the tasks we regard as requiring intelligence, then the fact that its internal mechanisms are different from a human’s should be insignificant.

But could a computer ever mimic the most complex aspects of human intelligence, like natural language? When Turing suggested that a computer might be able to learn using methods analogous to those used in the brain, he was asked: “But could a machine really do this? How would it do it?” Turing replied:

I’ve certainly left a great deal to the imagination. If I had given a longer explanation I might have made it seem more certain that what I was describing was feasible, but you would probably feel rather uneasy about it all, and you’d probably exclaim impatiently, “Well, yes. I see that a machine could do all that, but I wouldn’t call it thinking.” As soon as one can see the cause and effect working themselves out in the brain, one regards it as not being thinking, but a sort of unimaginative donkey-work. From this point of view one might be tempted to define thinking as consisting of “those mental processes that we don’t understand.” If this is right then to make a thinking machine is to make one which does interesting things without our really understanding quite how it is done.

It used to be claimed that a machine would never beat a human grandmaster at chess, since good chess requires “insight.” As soon as computers became better at chess than the best human players, the goalposts were immediately moved: Deep Blue wasn’t really thinking — it was merely a very fast machine with lots of memory following a program. (So apparently, playing good chess doesn’t require intelligence after all.) We are assured, however, that a computer will never write a novel — that requires genuine creativity (which only humans have). And when a computer does write a novel…
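The “merely following a program” objection can be made concrete. Game-playing programs are built on exhaustive game-tree search (minimax), which is exactly the kind of mechanical “donkey-work” Turing described — once you see the cause and effect, the insight seems to vanish. Here is a minimal sketch of minimax on a toy take-away game (not chess, and certainly not Deep Blue’s actual system, which added specialized hardware and heuristics on top of this basic idea):

```python
# Minimax on a toy game: players alternately take 1 or 2 stones from a
# pile; whoever takes the last stone wins. Pure exhaustive search --
# no "insight," just enumeration of every line of play.

def minimax(stones, maximizing):
    """Return +1 if the maximizing player wins with best play from this
    position, -1 if the minimizing player does."""
    if stones == 0:
        # The player to move has no stones left: the opponent took the
        # last stone and already won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move (1 or 2 stones) with the best minimax score for
    the player to move (the maximizer)."""
    return max((t for t in (1, 2) if t <= stones),
               key=lambda t: minimax(stones - t, False))
```

In this game a pile of 3 is lost for the player to move, so from 4 the winning move is to take 1 (leaving 3), and from 5 it is to take 2. Chess programs apply the same scheme to a vastly larger tree, cut off at a fixed depth with a heuristic evaluation — speed and memory, not a different kind of thinking.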

In the spring of 1952, Turing was tried and convicted for having homosexual relations, then illegal in Britain. To avoid going to prison, he had to agree to chemical castration via female hormone injections. His security clearance was revoked, barring him from continuing his cryptographic consultancy for the British government.

Alan Turing committed suicide on June 7, 1954, several weeks before his 42nd birthday.