Sixty-five years ago, the mathematician Alan Turing proposed a test to ascertain whether a machine can communicate intelligently enough over a period of time to fool a human into believing that it is human.
He predicted that by the start of the 21st century machines would fare reasonably well. He was wrong, but, to be fair to machines, in my experience there are Vodafone employees who would not pass the Turing Test.
Since Turing’s time, humans have been fascinated and terrified by the possibility of machines acquiring human character or intelligence, which inescapably includes a form of self-awareness. But machines continue to be stupid. The day when my Mac will delete the previous line automatically may never arrive.
However, the computational strength and sophistication of modern machines are so great that what now scares scientists is not only their prophesied intelligence but their unambiguous stupidity. This fear inspired a letter last week, signed by hundreds of eminent people, including the theoretical physicist Stephen Hawking, who says many things that would look foolish if you said the same, and the chief of SpaceX and Tesla Motors, the yet-to-be-despised billionaire Elon Musk.
At the heart of the letter is the somewhat honourable principle that murder is a human privilege and machines should not be empowered enough to kill enemies of the state without human supervision. The signatories called for a ban on the development of robotic weapons that can identify and eliminate predetermined targets on their own, without any further human intervention. They fear that such machines would make the decision to wage war too easy, and that rogue states and terrorists would find it easier to replicate them than, say, nuclear technology, which was invented only once — in the United States — and is hard to copy.
The fear of stupid machines is, in some cases, reasonable, but it is acquiring the same disproportionate, fantastical alarm that believers in sentient machines once exhibited, and still do.
Among the people who fear Artificial Imbecility, you would often hear a story about how paperclips can destroy the world. It is derived from a thought experiment of the Swedish philosopher Nick Bostrom, who, unlike most men known as philosophers, is alive. He raises the scenario of a sophisticated but stupid machine programmed to maximise the production of paperclips. How anyone would get the funding to create a machine that makes paperclips is obviously not a concern of the thought experiment, which is surprisingly naive for its fame.
The paperclip maximiser, Bostrom says, would create new technologies and other machines to consume all the resources on earth to produce as many paperclips as possible. The paperclip maximiser would also destroy all humans because they might switch it off, thereby interfering with the production of paperclips. Also, human bodies can be converted into paperclips.
In a review of Bostrom’s bestseller, Superintelligence: Paths, Dangers, Strategies, in The Bulletin of the Atomic Scientists, Edward Moore Geist argues machines that invent whole technologies to achieve a goal are not possible. Imagining machines of this order of complexity would imply, he writes, “a fanciful understanding of the nature of technological development in which ‘genius’ can somehow substitute for hard work and countless intermediate failures. In the real world, the ‘lone genius inventor’ is a myth; even smarter-than-human AIs could never escape the tedium of an iterative research and development process.”
A technological breakthrough is never a result of computational power alone but of time, failure, teamwork and accidents. A scientific achievement, then, is unique to humans. A paperclip maximiser can never be as powerful as Bostrom imagines.
The fear of the stupid machine is in reality a respectable front for the fear of the intelligent machine. It is impossible to separate Artificial Intelligence from the hope and fear that seeded the idea in the human mind: the possibility of a machine achieving singularity or sentience. By the existing laws of science, no machine known to man can achieve it even by chance. And much of what passes for AI science fiction is pure fantasy, on the low level of magical realism. Science is unable to understand consciousness in animals in the first place; it cannot begin to describe consciousness in machines.
Humans have demonstrated that they cannot cease to be anthropomorphic; they tend to attach human attributes to things that may be fundamentally different from human nature and form. The billions of dollars that the western world spends on the search for aliens are invested in this mental condition. They are searching for Earth-like planets; they are seeking water and organic life. They are, in reality, looking for themselves, but elsewhere. Respectable scientists have even said that aliens would look a lot like humans or animals; they have even described their possible heights and mass, which are remarkably similar to those of Caucasians.
Robotics, too, is anthropomorphic. The most useless robots certainly are: they look human and perform silly tasks. The most sophisticated robots do not look human at all; Google's software bots, for instance, do not even have a form.
There is a powerful anthropomorphic idea, though: the insertion of the human mind, whatever that might be, inside a machine or a piece of software. If Google is implanted in your brain, you will only appear smart, like quizzers, but if you are implanted inside software, you would make it sentient and it would make you immortal. Ray Kurzweil, the writer and a director of engineering at Google, suggests it is possible, even probable. Would people choose such a form of immortality? I certainly would, but inside a bipedal, upgradable cyborg. I promise to be responsible.
Manu Joseph is a journalist and the author of the novel The Illicit Happiness of Other People. He tweets at @manujosephsan. The views expressed are personal.