Should We Worry About Artificial Intelligence?
Reasons to be cheerful about humankind’s replacement by computers – if we don’t blow ourselves up first.
The inevitable rise of artificial intelligence may be just one of many impending crises humankind is considering, but it’s also, arguably, the most unfathomable. If we are able to develop autonomous, reflexive combinations of software and hardware – such as self-driving cars, or chatbots, or computers that can create their own software solutions to problems – will we be freed to focus on other things? Or will we find ourselves unemployed? The moment that true AI arrives, will it help us or supersede us? Is true artificial intelligence – that is, a computer with consciousness – possible? If so, when exactly will it arrive?
There are no real answers to these questions. At best, we have a bundle of educated guesses, many of them contradictory. Few people are better placed to hypothesise about the future of AI and what it means for us than Professor Max Tegmark, a Swedish-American cosmologist, professor of physics at the Massachusetts Institute of Technology and co-founder of the Future of Life Institute, a volunteer-run organisation that exists, says its website, “to catalyse and support research and initiatives for safeguarding life and developing optimistic and positive ways for humanity to steer its own course considering new technologies and challenges”.
In his book Life 3.0, Professor Tegmark considers what AI will mean not just for the near future but for the next billion years of humanity, covering topics as diverse as lawmaking, employment, the meaning of the word “consciousness” and the physical future of our expanding universe. Some of the scenarios he conjures up are, as you might expect, terrifying. At one point, he considers the possibility that AI may have very little impact, because we might, after all, destroy ourselves first by accidentally triggering a major nuclear war. But the greater mission of the book is to encourage us to take charge of the future by thinking about the ways in which, given the choice, we’d like to see AI develop. How, Professor Tegmark asks, can we define goals for machines in a way that means they won’t try to eliminate us? And, on a more positive train of thought, what incredible things might we achieve if we set artificial intelligences on the right tasks?
There’s a lot of doom and gloom flying about at the moment, and in many ways Professor Tegmark’s book supplies a refreshing antidote to the pessimism. As a taster, we’ve highlighted below three of his (many) hypotheses about what might come next.
Three reasons to be cheerful
Machines could be benign descendants
The idea of humanity being superseded by machines is a horrifying one. But, says Professor Tegmark, there is a scenario in which we could pass on the mantle of existence to inorganic (or partly organic) beings in a more hopeful and less confrontational way. He describes this as the “descendants” scenario, in which “AIs replace humans, but give us a graceful exit, making us view them as our worthy descendants, much as parents feel happy and proud to have a child who’s smarter than them, who learns from them and then accomplishes what they could only dream of – even if they can’t live to see it all”.
The billion-year future
This brings us neatly to the next point. As a species, we’re well aware that the lifespan of our planet is finite: eventually our sun will use up all its energy, just like any other star, and Earth will become uninhabitable. But, suggests Professor Tegmark, this is only inevitable if we remain in our current form, with our current technologies. It’s possible to imagine a future in which superintelligences figure out alternative energy sources, or ways of working around the problems of a dissipating universe, so that life continues to flourish despite the apparently adverse conditions. In this light, AI is not just unstoppable, but completely necessary. “If we don’t keep improving our technology, the question isn’t whether humanity will go extinct, but how,” writes Professor Tegmark. “What a wasted opportunity that would be.”
AI is limited by physics
Movies such as The Terminator and The Matrix have spooked us with the idea of an omnipotent, omnipresent AI establishing a digital dictatorship across the globe. But, Professor Tegmark notes, the laws of physics mean the “megabrain” idea is not entirely practical. Even a controlling AI might have to organise itself a little more like a human society, in a hierarchical fashion. In short, there may be no limits on how fast an AI can think, but there are limits on how quickly it can communicate. “The round-trip travel time [at the speed of light] for a message crossing the Earth is about 0.1 seconds (about the timescale on which we humans think),” writes Professor Tegmark. “So an Earth-sized AI brain could have truly global thoughts only about as fast as a human one… This physics-imposed speed limit on information transfer therefore poses an obvious challenge for any AI wishing to take over our world, let alone our universe.” Phew, right?
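Tegmark’s “about 0.1 seconds” figure is easy to sanity-check for yourself. A minimal back-of-envelope sketch, assuming the message travels along the Earth’s surface at the vacuum speed of light (real signals in optical fibre are roughly a third slower, so the true delay is longer):

```python
# Sanity check of the "about 0.1 seconds" round-trip figure:
# how long does a light-speed signal take to reach the far side
# of the Earth and come back?
C_LIGHT = 299_792_458             # speed of light in a vacuum, m/s
EARTH_CIRCUMFERENCE = 40_075_000  # equatorial circumference, metres

# The far side of the planet is half the circumference away along the
# surface, so a round trip covers the full circumference.
round_trip_seconds = EARTH_CIRCUMFERENCE / C_LIGHT
print(f"Round-trip time: {round_trip_seconds:.3f} s")  # prints "Round-trip time: 0.134 s"
```

About a tenth of a second, as quoted – and comparable to the timescale of a human thought, which is the point: an Earth-sized brain could only synchronise its “global thoughts” about as fast as we do.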