In Plato’s “Phaedrus,” Socrates tells the legend of King Thamus, who is offered the gift of writing by Theuth, the Egyptian god of knowledge. Writing, Theuth argues, will serve as a remedy for forgetfulness, allowing information to be more easily stored and recalled. No longer will people have to rely on oral tradition to learn and pass on information. King Thamus is unconvinced and argues writing will in fact have the opposite effect — that writing will lead to laziness, not enlightenment. Instead of internalizing information, younger generations would rely on notes, books, reminders and otherwise externalized forms of knowledge to create the illusion of wisdom. To King Thamus, the technology of writing is a threat to the prosperity of human civilization.
Millennia have passed, and it seems King Thamus was wrong: It will not be writing that degrades or destroys humanity after all. The Egyptian king could not have foreseen hydrogen bombs capable of escalating nuclear warfare to the point of eradicating every life form on Earth, or industrial levels of carbon emissions predicted to drastically alter the planet’s climate. He didn’t know about biological agents that, in the wrong hands, could wipe out entire countries or populations at a time. While Thamus was worrying about writing, the rest of humanity was busy cooking up newer, deadlier technologies capable of existential destruction far beyond the ancient ruler’s comprehension.
The real threats
If not writing, then what? In most contemporary academic and political circles, the two threats discussed most are climate change and nuclear warfare. With the current White House administration demonstrating unprecedented hostility toward global climate agreements, matched only by its apparent willingness to use nuclear weapons if necessary, both of these threats are timely and worthy of concern. There is another threat, however, that has received disproportionately little attention and contemplation. The idea that artificial intelligence, whether in the form of superintelligence or autonomous robots, could spiral out of control and drastically alter the world as we know it was typically left to the likes of H.G. Wells and Michael Crichton. Many researchers, scientists and philosophers, however, are convinced the threat of AI is more than science fiction.
On Aug. 20, a group of AI innovators and robotics experts, including Tesla’s Elon Musk and Mustafa Suleyman of Google DeepMind, penned an open letter to the United Nations calling for a complete ban on autonomous weapons.
“As companies building the technologies in Artificial Intelligence and robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm,” the letter reads. “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
But autonomous killer robots are just the tip of the technological iceberg that is artificial intelligence. Automated weapons, while terrifying to think about in the hands of unstable regimes, still rely on human-coded algorithms. Like Deep Blue, the IBM-developed computer that beat world chess champion Garry Kasparov in 1997, these weapons, which are being developed today in the form of pilotless planes and military combat drones, are subordinate to human programming and development.
The real worry of AI critics is a runaway superintelligence that spirals beyond human comprehension or control. In academic circles, it is known as an intelligence explosion or “the singularity,” a term coined in 1983 by sci-fi writer Vernor Vinge. Based on the observation that computer processing power has doubled at regular intervals, it is easy to imagine machine capability compounding exponentially until it outpaces human understanding. The argument goes like this: One day, be it next year, next decade or 50 years from now, researchers will develop a machine smarter than any human. As a machine of superior intelligence, it will be better at programming itself than its creators were. As it improves itself, it will get smarter and eventually create a machine smarter than itself. Then, that machine improves itself and creates something even smarter. Then the next machine, then the next. Then, intelligence explosion.
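The force of the argument is the arithmetic of compounding, which can be sketched in a few lines of code. Everything here is illustrative: the function name, the 10 percent gain per cycle and the 1,000x threshold are arbitrary choices, not figures from any researcher.

```python
# Toy model of the "intelligence explosion" argument: each generation of
# machine designs a successor slightly smarter than itself, so capability
# compounds like interest. All names and numbers are illustrative only.

def generations_to_outpace(start=1.0, human_level=1.0, gain=0.1, target=1000.0):
    """Count self-improvement cycles until capability exceeds `target`
    (measured in multiples of human-level intelligence)."""
    capability = start
    cycles = 0
    while capability < target * human_level:
        capability *= 1 + gain  # each machine builds a slightly better one
        cycles += 1
    return cycles

# Even a modest 10 percent gain per cycle crosses 1,000x human level in
# 73 cycles -- the compounding, not the per-step gain, does the work.
print(generations_to_outpace())
```

The point of the sketch is that no single step needs to be dramatic; the runaway comes from repetition, which is what makes the scenario hard to intuit.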
While the singularity argument is internally consistent, it is little more than a thought experiment resting on crucial philosophical assumptions, such as the assumption that sentient, humanlike intelligence can even be programmed into a machine. Sure, there are calculators that can compute numbers and functions faster than the human brain, but can a computer program decide whether it is ethical to engage in war? Or to topple an oppressive dictator? Not everyone is convinced.
“It’s not obvious that a computer can capture what a human brain does,” said Thomas C. Henderson, a professor in the University of Utah School of Computing. There are hypotheses that such a code could be developed down the line, but “nobody knows if they’re really true.”
Henderson’s research is aimed at understanding robot cognition and developing computer programs that understand and act in the real world: walking, driving, making motions. His lab houses a number of automated machines, including unmanned aerial vehicles and small wheeled rovers. He is hopeful about developments in AI technology, but believes popular perceptions of the technology are based largely on film and media.
“There is always this notion that we can create robots that can even walk, that can somehow resemble human-type capabilities,” Henderson said. “It’s much tougher than it seems, building mechanisms that work well and are robust. Nobody’s really figured that out yet.”
Whether AI technology is good or bad depends on who uses it, Henderson says. He believes there is “a lot of straightforward benefit to most people,” but admits there is also “good potential for abuse.”
“I think a lot of good can come from it, because AI techniques could be used to help figure out how to keep the power grid from going down, so that’s good,” Henderson said. “But it can also be used to take down the power grid.”
Henderson hasn’t given much thought to the notion of regulating AI, but he believes laws will start being implemented on a case-by-case basis. He uses speed limits as an analogy. As a society, no one really thought about regulating highway speeds until cars were advanced enough to justify doing so.
“You only impose speed limits once you build cars that can go faster than is safe,” he said. “I think regulations will follow the implementations of technology.”
As improvements are made to self-driving cars, like Google’s Waymo or Tesla’s Autopilot, Henderson said governmental agencies are going to have to start finding ways to regulate the technology.
In conversation, it quickly becomes obvious that Henderson does not view AI in the immediately threatening way that people like Musk or Nick Bostrom do. Bostrom is an Oxford philosopher and researcher of superintelligence who has been at the forefront of AI criticism.
“There seems to always be an issue of how theory and technology are applied,” Henderson said. “[For example], there are good and bad applications of nuclear technology. A lot of things [that] come out of that are quite useful, and then some things are kind of dangerous.”
In the end, it all depends on “how people are exploiting it,” according to Henderson.
With its sensational visions of autonomous war machines and malevolent supercomputers, it is easy to leave criticism of AI to comic books, graphic novels and purely conceptual philosophical discussions. Looking at the history of human development, however, defined by agricultural, industrial and technological revolutions, it seems implausible that developments in artificial intelligence won’t drastically alter the world.
Superintelligence has the potential to save the world by supplying enough physical and intellectual labor to allow humans to live freely. It also has the potential to spiral into something drastically beyond our comprehension or control, and unless its interests perfectly coincide with our own, we should be worried. The greatest takeaway from the threat of runaway technology comes from science fiction itself.
In Crichton’s “Jurassic Park,” Dr. Ian Malcolm chastises the park’s creators for arrogantly thinking they can exert control over their scientific creation. The fact is, we don’t know what artificial intelligence will look like or what it will be capable of, and that uncertainty alone is reason enough to worry.