The question of whether machines will be capable of human intelligence is ultimately a matter for philosophers to take up and not something scientists can answer, an inventor and a computer scientist agreed during a debate late last month at the Massachusetts Institute of Technology.
Inventor Ray Kurzweil and Yale University professor David Gelernter spent much of the session debating the definition of consciousness as they addressed the question, “Are we limited to building super-intelligent, robotic ‘zombies,’ or will it be possible for us to build conscious, creative, even ‘spiritual’ machines?” Although they disagreed, even sharply, on various points, they did agree that the question is philosophical rather than scientific.
The debate and a lecture that followed were part of MIT’s celebration of the 70th anniversary of Alan Turing’s paper “On Computable Numbers”, which is widely held to be the theoretical foundation for the development of computers.
In a separate 1950 paper, Turing suggested a test to determine “machine intelligence”. In the Turing Test, a human judge converses with both a human and a machine without knowing which responses come from which. If the judge cannot reliably tell the machine’s responses from the human’s, the machine is said to “pass” the test and exhibit intelligence. The Turing Test itself is the source of ongoing dispute.
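The setup Turing described can be sketched in a few lines of code. This is purely an illustration, not any real benchmark: the names (`run_turing_test`, the toy `echo` respondents, the `naive_judge`) are all hypothetical.

```python
import random

def run_turing_test(judge, human_reply, machine_reply, prompts):
    """Minimal sketch of Turing's imitation game: for each prompt, the
    judge sees two unlabeled replies (one human, one machine) and must
    guess which one came from the machine."""
    correct = 0
    for prompt in prompts:
        replies = [("human", human_reply(prompt)),
                   ("machine", machine_reply(prompt))]
        random.shuffle(replies)  # the judge must not know which is which
        guess = judge(prompt, [text for _, text in replies])
        if replies[guess][0] == "machine":
            correct += 1
    # The machine "passes" if the judge does no better than chance.
    return correct / len(prompts) <= 0.5

# Toy usage: a machine that answers exactly like the "human" is
# indistinguishable in principle, so guessing reduces to chance.
echo = lambda p: f"Re: {p}"
naive_judge = lambda prompt, replies: 0  # always picks the first reply
prompts = ["What is it like to be a bat?", "Tell me a joke."]
print(run_turing_test(naive_judge, echo, echo, prompts))
```

The point of the sketch is that the test says nothing about how the machine produces its answers; it only measures whether a judge can tell them apart, which is part of why the test remains disputed.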
Kurzweil and Gelernter were less interested in that dispute than in how Turing’s test could be applied. Kurzweil’s position was that machines will, in fact, some day pass the Turing Test, because modelling of parts of the brain is already leading to the ability to replicate certain human functions in a machine. “We’ll have systems that have the suppleness of the human brain,” Kurzweil said during the debate, adding that to contemplate how those machines will be developed, it’s important to accept that current software and computing power aren’t yet up to the task and that technological advances are necessary first.
Humans will recognise the intelligence of such machines because “the machines will be very clever and they’ll get mad at us if we don’t”, he joked.
Gelernter smiled at that, but he also shook his head. He wasn’t buying it: logically, any machine programmed to mimic human feelings, which are an aspect of consciousness, is programmed to lie, because a machine cannot feel what a human feels. That holds even if the machine seems able to “think” like a human.
“It’s clear that you don’t just think with your brain,” he said during the debate. “You think with your body.”
Kurzweil noted that a computer recently simulated protein folding, something that was believed to be impossible for a machine to do, suggesting that it’s difficult to predict what machines will be capable of. Gelernter had an answer for that, too: the simulation of the folding was all that happened, and the process stopped there.
“You can simulate a rainstorm and nobody gets wet,” he said, using another example.