Four ways that technology could destroy mankind

Author

Matthew Sparkes, Deputy Head of Technology

December 4, 2014

Stephen Hawking has warned that artificial intelligence could rise up and destroy mankind. Is he right? We look at four ways that technology could be the end of us.

Stephen Hawking has pushed forward humanity’s understanding of the universe with his theories on gravitational singularities and black holes, so when he speaks up it’s wise to listen. That makes his warning yesterday that artificial intelligence could mean the end of humanity all the more concerning.

“The primitive forms of artificial intelligence we already have, have proved very useful,” he told the BBC. “But I think the development of full artificial intelligence could spell the end of the human race.

“It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Is he right? Did the screenwriters behind The Matrix, 2001: A Space Odyssey and The Terminator all have a point? Will machines rise up and destroy us? We look at four ways that technology could destroy the human race.

Artificial intelligence

Hawking’s argument is that once a machine can truly think for itself, it could learn to make improvements to itself, steadily becoming more capable and intelligent until eventually it is all-powerful. And we may not be able to stop that process, however long and convoluted it proves, because in human terms it could happen in the blink of an eye.

How close are we to a thinking machine? There are already designs produced by simple artificial intelligence that we don’t fully understand. Some electronic circuits designed by genetic algorithms, for example, work better than those conceived by humans – and we aren’t always sure why, because the evolved designs are too complex to unpick.
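To give a flavour of how such algorithms work, here is a minimal sketch in Python. It is illustrative only: the target pattern, population size and mutation rate are made-up values, and the circuit-evolution experiments mentioned above used far richer encodings than a simple bit string.

```python
import random

# Each candidate "design" is a bit string; fitness counts how many bits
# match a (hypothetical) target pattern the algorithm is evolving towards.
TARGET = [random.randint(0, 1) for _ in range(16)]

def fitness(design):
    return sum(bit == goal for bit, goal in zip(design, TARGET))

def mutate(design, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in design]

def crossover(a, b):
    # Splice two parent designs together at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Start from 50 random designs and breed better ones generation by generation.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # the fittest designs survive to breed
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

print("best design found in generation", generation, ":", population[0])
```

The point is that the programmer only specifies the fitness test, not the solution; the design that emerges can be effective in ways its creators never anticipated.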

Combine this software intelligence with robot bodies and you have a science fiction film. But because every aspect of our lives is controlled by computers, this malevolent super-intelligence wouldn’t need arms and legs to make life unpleasant.

You can argue that we could run artificial intelligence experiments on computers isolated from sensitive systems, but we don’t seem to be able to keep human hackers in check, so why assume we could outwit a super-intelligent thinking machine? You can also argue that AI may prove to be friendly, but if it treats us the way we treat less intelligent creatures, then we’re in a world of trouble.

Scientific disaster

There were fears that the first atomic bomb tests could ignite the atmosphere, burning everyone on Earth alive. Some believed that the Large Hadron Collider would create a black hole when first switched on that would consume the Earth. We got away with it, thanks only to the fact that both suggestions were hysterical nonsense. But who’s to say that one day we won’t attempt an experiment with apocalyptic results?

Grey goo

A decade ago it seemed like distant sci-fi, but we’re all familiar with 3D printers now: you can buy them on Amazon. We’re also creating 3D printers that can replicate themselves by printing the parts for a second machine.

But imagine a machine capable of doing this that is not just microscopically small but nanoscopically small: so small that it can stack atoms together to make molecules. This could lead to all sorts of advances in manufacturing and medicine.

But what if we get it wrong? A single typo in the source code and, instead of removing a cancerous lump from a patient, these medi-bots could begin churning out copies of themselves over and over until the patient is nothing but a grey goo composed of billions of machines. Then the hospital, too, and the city it’s in. Finally the whole planet. This is the ‘grey goo’ scenario.

If a single machine doubled itself every 1,000 seconds, and each new machine did the same, then after ten hours you’d have roughly 68 billion of them. Prince Charles famously warned the Royal Society to consider this risk in 2003 and was mocked for it.
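You can check that arithmetic yourself: ten hours is 36,000 seconds, or 36 doublings of 1,000 seconds each (the 1,000-second replication time is purely a hypothetical figure).

```python
# Ten hours of doubling every 1,000 seconds: 36 generations in total.
doublings = (10 * 60 * 60) // 1000
print(doublings, 2 ** doublings)  # prints: 36 68719476736 (roughly 68 billion)
```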

The well-respected nanotechnologist Chris Phoenix discredits the idea, saying that ‘grey goo’ could not happen by accident but only as the “product of a deliberate and difficult engineering process”. It’s lucky, then, that nobody has ever carried out a difficult engineering project with the sole intention of harming millions of people. Oh, wait…

Climate change

By far the most likely doomsday scenario is also the least dramatic: our materialism and lack of care for the environment continue to alter the climate until we reach the point where we can no longer survive in it.
