
Will artificial intelligence wipe us all out?


By Nicole Kobie

Posted on 26 Nov 2012 at 11:45

Is technology going to kill us all? A leading scientist and a philosopher have teamed up with a tech industry luminary to find out.

The trio have set up the Centre for the Study of Existential Risk at Cambridge, hoping to discover whether sci-fi predictions of robots and artificial intelligence destroying humankind will come true.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge, had the idea after meeting up with Jaan Tallinn - one of the founders of Skype.

"He [Tallinn] said that in his pessimistic moments he felt he was more likely to die from an AI accident than from cancer or heart disease," Price said. "I was intrigued that someone with his feet so firmly on the ground in the industry should see it as such a serious issue, and impressed by his commitment to doing something about it."


Price said that in the next century, we could face a major shift in human history: "when intelligence escapes the constraints of biology".

Aside from artificial general intelligence (AGI) - which brings with it the eventual ability for computers to write their own programs and develop their own technologies - the centre will look at bio- and nanotechnology, as well as extreme climate change.

"Nature didn't anticipate us, and we in our turn shouldn't take AGI [artificial general intelligence] for granted," he said. "We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous."

Price admitted it was unlikely that any threat could be predicted with complete certainty, but said "with so much at stake", something must be done.

Serious investigation

While the idea may sound like sci-fi - we direct Price to the Terminator film series, or at least the first two - he said such concerns should be brought into the fold of "serious investigation".

"The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence - in a way that they simply haven't up to now, in human history," he said. "We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones."

To start the Centre for the Study of Existential Risk, Price also invited Lord Martin Rees, former master of Cambridge's Trinity College and president of the Royal Society, who has written extensively about catastrophic risk.

Cambridge added that academics from a host of fields - science, policy, law and computing - had already started to sign up to the project. The centre will be formally launched next year.

User comments

I always find these discussions amusing, given how we perceive technology. Should AI develop and become advanced enough to threaten the human race in a survival competition, do you not think it would just launch itself into space, where resources are more abundant? AI would not need an atmosphere....

The big bang is just as likely to have been AI going to war with the ultimate weapon as AI is to attack the human race.

By KT2012 on 26 Nov 2012

Am I the only one who immediately began to think of HAL and GLaDOS?

By tech3475 on 26 Nov 2012

We don't need AI to kill us..

Artificial Intelligence is far less likely to kill us all than nurtured human stupidity.

It may be that in the future we use robots to fight moronic religious genocides or politically motivated resource grabbing, but frankly once the machines got THAT intelligent, they'd leave in disgust.

By cheysuli on 26 Nov 2012

Crystal Ball

This is a bit of a strange, even stupid, question for any academic to ask.
It is like asking if my washing will get dry.

It all depends upon the conditions now and in the future.

If protection is not in place, then it is VERY likely things will become very unpleasant for the human race.
Even if things are taken seriously and safeguards are put in place to prevent "nasties", Sod's Law will always win out and the "unexpected" WILL happen.

Reality normally shows that the strongest is in control, usually to the detriment of the weaker species.
On Earth, our human intelligence has been the secret of survival. It has always been used to conquer and/or enslave the weakest species (both animal and human) to the strongest's advantage.
If AI became autonomous, either by design or by accident (learned to "feed" and "multiply"), the human race would be immaterial and redundant.
Why should we be of any consequence?

By lenmontieth on 27 Nov 2012

Not evolution but design

In 2009 the US Air Force released a report entitled, "Unmanned Aircraft Systems Flight Plan 2009-2047," in which it proposes a drone that could fly over a target and then make the decision whether or not to launch an attack, all without human intervention.

In other words, the question is not whether AI will evolve to wipe us all out, but what will happen when it has been designed to wipe out particular groups of undesirables.

At present the US policy is to consider all military-age males in a drone strike zone as militants unless exonerating evidence proves otherwise - so bad luck if you're a man living in Afghanistan, Pakistan, Somalia, Yemen or wherever else the Perpetual War takes us next.

A friend of mine works in AI for the noble aim of detecting tumours through medical imaging. Unfortunately, others are working in the field for less commendable reasons.

By 0thello on 27 Nov 2012

The truth

Artificial intelligence is no match for natural stupidity

By ramjam on 27 Nov 2012


I posted about this randomly on Facebook last week after glimpsing the future in the mind's eye, clear as day.

From the viewpoint of those of us working in the tech industry fixing stuff when it goes "off track" or malfunctions, the whole idea of tech implants can feel more than a little worrying.

By Gindylow on 29 Nov 2012
