A New Zealand AI company is creating an extremely angry robot. But is this a cause for concern or an exciting computing development?
It sounds like the beginning of an apocalyptic sci-fi film. A New Zealand artificial intelligence company is building the angriest robot in the world in the hope of helping companies understand and placate angry customers. The technology firm, Touchpoint Group, has spent more than £230,000 on the project, which is expected to go live by the end of the year.
So, is it time to start preparing for the robotic revolution? Not quite – though the long-term future of artificial intelligence is undeniably unnerving.
The robot will only simulate anger
Though it may seem aggressive, Touchpoint’s robot won’t come close to experiencing bona fide rage. Instead, the machine will have hundreds of millions of angry customer interactions uploaded to its database and the robot will be programmed to mimic and repeat these conversations. Dr Stuart Armstrong, a research fellow at the Future of Humanity Institute at the Oxford Martin School, Oxford University, says that this is a relatively easy emotion to seemingly replicate in robots.
“There’s not much variety in human anger. If someone’s angry they’ll just hurl insults at you, there’s not much subtlety of interaction so you don’t have to code anything complicated. Anger is easy to imitate without having to go into depth,” he says.
Fake robot anger is very basic – for now
Touchpoint’s angry robot will only be programmed to show basic signs of rage, and will behave in a markedly different way from a genuinely angry human. Dr Armstrong explains:
Why would we be afraid of a human who’s angry? Well, because they might do something stupid and lash out. Robots are not going to start punching the person at the other end of the phone or spreading angry messages on Twitter. They’re not going to do a whole host of things that you would expect a genuinely angry person to do – unless they had been programmed to do that. And that’s how you can tell that their anger is purely situational. A sign that a robot has feeling is if it acts in a way that we would expect a human to, but wasn’t programmed for.
Truly scary robots don’t show any emotion
Theoretically, we might one day be able to build robots that exhibit all human signs of anger. There is a complicated philosophical debate about the point at which mimicked emotion and consciousness is indistinguishable from actual emotion and consciousness. If a robot can exactly mimic human consciousness, and react with the same emotional responses to the same events, then are we really justified in calling it unconscious?
But such an advanced computer is a hypothetical creation that we might see by the end of the century – and certainly won’t see in the next decade. And even if we do create an angry, human-esque robot, it would be a far smaller concern to humans than an emotionless one.
Computer scientists are far more likely to make an error with emotionless robots, which they might be less wary of, than robots that exhibit anger. “If we can create genuine anger as an emotion in robots, everything in our background tells us that this is dangerous and this is not something that should be placed in a position of power,” says Dr Armstrong.
And programmers with sinister intentions would intentionally avoid angry robots. “If you want to cause harm, create a murderous robot but don’t make it angry. If you want to cause harm then creating the thing that signals danger to all humans is exactly what you want to avoid,” says Dr Armstrong.
Instead, robots that have no human evidence of emotion could (hypothetically, far in the future) create a far greater threat. Dr Armstrong says:
Everything in our evolutionary background prepares us to deal with angry entities, to know how to handle them and whether or not to trust them. If we get a robot that’s angry in the classically human sense, we know so much more about how to deal with it than a robot that does not exhibit anger of any sort but may have goals that are very dangerous. The dangerous ones are the ones that do not correspond to anything that we can classify on a human scale – the ones that are indifferent to some crucial aspect of the world. If AIs are indifferent to humans, it’s obvious how that could go wrong. If they’re indifferent to some aspect of humans and they get great power, well, that aspect of humanity may vanish.
Should we worry about AI at all?
Though angry robots may not be such a threat, Dr Armstrong says that it’s extremely difficult to predict whether or not Artificial Intelligence will eventually cause harm to humans.
“Intelligence itself has allowed us to dominate the planet, so potentially higher intelligence might lead to much higher power,” he says. “AIs could become extremely powerful and then their preferences would influence the direction of the future. If these preferences are indifferent to some human element then things could end up quite badly for us.”
But though it seems certain that AIs will become more powerful in the coming decades, it’s far from certain that they’ll reach a significant level of power. “There’s great uncertainty here,” says Dr Armstrong.
How to program robots to avoid harm
Dr Armstrong says that the threat posed by Artificial Intelligence is a “technical problem” being addressed by computer scientists.
One option is to specify and code human values – which is an extremely difficult task. “The disagreements among human values are almost unimportant in comparison with the difficulty of specifying a value sufficiently so that it can be coded. You have to solve all of moral ethics in computer code,” says Dr Armstrong.
Dr Armstrong is working on an alternative solution, investigating whether robots can be made safe by programming them for “reduced impact.” He explains:
“If you programmed a robot to remove a tumour then, once the tumour is gone, it might then immediately cut off someone’s leg. Its motivation to remove the tumour is not safe. But if you programmed it to remove the tumour and have a small impact on the person, then it will remove the tumour without then doing something so drastic. So we’ve made an unsafe goal safe by adding reduced impact to it. Many values become a lot safer if you program the robot so that it won’t make a huge change.”
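The idea behind Dr Armstrong’s example can be illustrated with a toy sketch (this is not Touchpoint’s or Dr Armstrong’s actual code, and the action names and numbers are invented for illustration): an agent scores each candidate action by how well it achieves the goal, minus a penalty for how much else it changes in the world.

```python
# Toy illustration of a "reduced impact" objective (hypothetical numbers):
# each candidate action is (name, goal_achieved, side_effect_magnitude).
ACTIONS = [
    ("remove tumour precisely", 1.0, 0.1),
    ("remove tumour and the leg", 1.0, 5.0),  # achieves the goal, huge impact
    ("do nothing", 0.0, 0.0),
]

IMPACT_WEIGHT = 1.0  # how strongly side effects are penalised

def score(goal_achieved, side_effect):
    """Goal value minus a reduced-impact penalty."""
    return goal_achieved - IMPACT_WEIGHT * side_effect

def choose_action(actions):
    """Pick the action with the best goal-minus-impact score."""
    return max(actions, key=lambda a: score(a[1], a[2]))[0]

print(choose_action(ACTIONS))  # the precise, low-impact option wins
```

With the impact penalty in place, the drastic action that also achieves the goal scores worse than the careful one – which is the sense in which adding reduced impact makes an unsafe goal safer.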
The future of Artificial Intelligence is extremely uncertain, and computer scientists like Dr Armstrong are working to make sure we’ll be safe from dangerous robots. But for now, the angry robot in New Zealand poses no serious threat. Sci-fi horror stories haven’t become reality yet.
This article was written by Olivia Goldhill from The Daily Telegraph and was legally licensed through the NewsCred publisher network.