The men trying to save us from the machines
Will computers grow so intelligent they wipe out the human race? Nicole Kobie meets the team guarding against that very threat
Are you more likely to die from cancer or be wiped out by a malevolent computer?
That thought has probably never occurred to you, but it’s been bothering one of Skype’s co-founders so much that he’s teamed up with Oxbridge researchers to predict what machine super-intelligence will mean for the world, and to mitigate the existential threat of new technology – that is, the chance it will destroy humanity.
Technology has long been a source of danger in the fertile imaginations of sci-fi novelists, but the idea is gaining academic support, with researchers at the University of Oxford’s Future of Humanity Institute (FHI) joining those from the newly launched Centre for the Study of Existential Risk (CSER) at the University of Cambridge to look more widely at the possible repercussions of nanotechnology, robotics, artificial intelligence and other innovations.
The concept is simple, but the solutions are anything but. The researchers are trying to avoid a situation in which we outsmart ourselves by creating a system that can in turn invent its own technologies and "steamroll" humanity – not because it’s evil, but simply because we failed to foresee the long-term ramifications of how we programmed it.
Weighing up the risks
This idea has been studied since 2005 by the FHI, which was last year joined by the CSER, founded by Huw Price, Bertrand Russell professor of philosophy at the University of Cambridge; the Astronomer Royal, Lord Martin Rees; and Jaan Tallinn, co-founder of Skype.
The centre was sparked in part by a conversation between Price and Tallinn, during which the latter wondered, "in his pessimistic moments", whether he’s "more likely to die from an AI accident than from cancer or heart disease".
The CSER launch announcement started with what sounds like the premise of a nerdy joke, but quickly became more serious: "A philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge to address developments in human technologies that might pose ‘extinction-level’ risks to our species, from biotechnology to artificial intelligence."
It sounds fantastical, but the trio and their Oxford colleagues are deadly serious. The work has deep roots, too. In 1965, Irving Good – a friend of Alan Turing who worked at Bletchley Park – described the first "ultra-intelligent machine" in a positive light in a New Scientist paper, notes the CSER.
"This machine, he continued, would be the ‘last invention’ that mankind will ever make, leading to an ‘intelligence explosion’ – an exponential increase in self-generating machine intelligence. For Good, who went on to advise Stanley Kubrick on 2001: A Space Odyssey, the ‘survival of man’ depended on the construction of this ultra-intelligent machine," the CSER release notes said.
"While few would deny the benefits humanity has received as a result of its engineering genius – from longer life to global networks – some are starting to question whether the acceleration of human technologies will result in the survival of man, as Good contended, or if in fact this is the very thing that will end us," the CSER stated solemnly.
At the core of this is an idea commonly referred to as the "singularity" – the point at which technology can begin to create its own technology and advance beyond us, making it impossible to predict what comes next.
Philosopher Nick Bostrom, the founder of Oxford’s FHI, hesitates to use that word, since it means "so many things to different people, all rolled up into this bundle of breathless expectation". Instead, as he said at a conference held by The Economist earlier this year, he simply refers to "super-intelligence".