The Truth About An AI Apocalypse
Illustration by Mr Giordano Poloni
Everything you need to know about the robots taking over.
Read this quickly because, if you believe the hype, we haven’t got long. According to luminaries such as Tesla’s Mr Elon Musk and Professor Stephen Hawking, the ongoing revolution in artificial intelligence (AI) heralds the inevitable end of humankind. The AI explosion is, no doubt, remarkable. In just a few years, AIs have learnt to beat the best human at the world’s hardest board game, drive cars, fly drones and play the stock market. They can compose music, fake voices, even write screenplays. There are AI lawyers, AI radiographers and AI scientists. In fact, this very article might have been written by a machine. (It wasn’t.) So should we be worried about Black Mirror-style scenarios, such as a paperclip robot going rogue, or AI making us all jobless? Here, two leading experts discuss some of the most common theories.
AI will take all our jobs
“The layperson is right to worry about AI taking jobs,” says Mr Toby Walsh, professor of artificial intelligence at the University of New South Wales and author of Android Dreams. “If you earn your living driving a taxi or a truck, you have to ask yourself what other skill you have that people will pay for besides driving. In 20 years’ time, very few people will earn their living driving. It will be far, far safer and cheaper to have a robot do this.” Similarly, jobs that rely on pattern recognition, such as radiology, or large-scale data, such as accountancy, will go or change beyond recognition.
“Equally, some of the claims are overblown,” says Mr Walsh. “I am a scientist, so I looked at the data in the well-known Frey and Osborne report predicting 47 per cent of jobs are at risk of automation in the next two decades. Some of the predictions are clearly wrong. For example, they predict with 98 per cent certainty that models will be automated. We don’t care about what robots look like in clothes. This is a job that will remain with humans.”
Strawberry fields forever
“Let’s say you create a self-improving AI to pick strawberries,” Mr Elon Musk told Vanity Fair in April last year, “and it gets better and better at picking strawberries… so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields.” (A similar example, by the AI ethicist Mr Nick Bostrom, involves paperclips.) “There is evidence things can go wrong with technology and get out of hand,” says Mr Joshua Gans, a professor at the University of Toronto’s Rotman School of Management and co-author of Prediction Machines (out 17 April). “Bitcoin was one person’s idea and implemented with very limited resources. It now consumes the energy of several small countries. The paperclip thought experiment is an example of this. [But] there are reasons to suppose that the risk of that is lower than some think. This is because it takes a whole lot of unlikely things to occur in a row for it to happen.” It would require an AI that is superintelligent yet too stupid to interpret the nuance in its instructions, for a start. “What is more likely is that AI causes problems when the people controlling it have less innocuous motives.”
Artificial intelligence will seem human
From Terminator onwards, we imagine AI in our own image. But a strong body of work suggests that human intelligence has evolved in the way it has because of our survival instincts. We need to eat, so we developed the ability to co-operate. Silicon-based AI won’t feel hungry, or get cold. It won’t have the fickle influences of hormones and genes. “We have no idea what an AGI [artificial general intelligence] would be like,” says Mr Walsh. “Would it be conscious? Would it have or need to have emotions like us? This is why AI is such an interesting scientific challenge. Will it be like biological intelligence? Or is intelligence created in silicon something different – less emotional, more rational?”
One thing is certain, however, says Mr Walsh. “We will find out in the next century.”