The Future of Life Institute, an organization founded to “catalyze and support research and initiatives for safeguarding life,” claims that recent breakthroughs have made the achievement of “superintelligence” possible in our lifetime. While superintelligence, a thinly veiled term for self-improving artificial intelligence, is no longer strictly science fiction, it is unclear what effect it will have on humanity. Very often we are presented with a dystopian vision of this aftermath. In “The Matrix,” humans are farmed by machines for energy; in “2001: A Space Odyssey,” the AI unit HAL attempts to kill the human crew; in “Ex Machina,” the protagonist attempts to release an imprisoned AI (to spoiler-ridden effect). While these portraits of AI as deviant machines inevitably cause anxiety in our increasingly technologized world, there is less to fear about artificial intelligence than you may think.
The “Technological Completion Conjecture,” proposed by Nick Bostrom, director of the Future of Humanity Institute at Oxford University, holds that so long as humans retain the capability to develop technology, “all important basic capabilities that could be obtained through some possible technology will be obtained.” In other words, all technology that can be developed by humans will be developed. So if we assume that AI is on the horizon of human achievement, then we must also grapple with the moral conundrum of whether AI poses a real threat to humanity.
Technology, if used effectively and appropriately, can be of real benefit to humankind. Taken to its zenith, this means that artificial intelligence, if developed responsibly and with an eye toward its effect on human life, happiness and security, will inevitably be one of humanity’s greatest and most beneficial achievements.
Pseudo-artificial intelligence has already seen positive application in an array of areas. Semi-autonomous and self-driving vehicles have proven safer than their human-driven counterparts. Many automated systems are far more efficient than humans could ever be. Dangerous jobs, like handling hazardous materials or working in places with little oxygen, are increasingly being done by machines. While it’s true that these machines are not yet self-aware, the advent of true AI will only boost efficiency in these areas.
While artificially intelligent machines may replace their human counterparts, we should see this not as displacement but as freeing those people to pursue what they would rather be doing. It’s hard to imagine a future in which this works economically, politically and socially, but in theory, leaving menial tasks to artificially intelligent workers, which are undoubtedly more capable and efficient, will free the mass of humanity to focus on work that makes a positive difference. Alongside the exponential growth technology will see in the coming century, we could make equal strides in humanitarianism, peace and conflict resolution, global political stabilization, poverty reduction and the elimination of plutocratic class-based systems, race relations, health and disease research, the responsible use of earthly resources, renewable energy, space exploration, deep-earth exploration and the longevity of human life.
It may sound utopian, and it’s supposed to. While in literature and film the future has a way of becoming dystopian, fiction isn’t truth. I’m not saying it will be simple to effectively and holistically merge AI technology with the rest of humanity. The usual questions will be raised: How, if they are artificially intelligent, can we justify enslaving a race of beings to do our most menial tasks? How, if they really are superintelligent, are we expected to stop them if they decide to enslave us? Could we even remain the dominant species once machines reach peaks of intelligence we can hardly theorize?
Many moral futurists, working in a large and growing field of research and conjecture, ponder these questions every day at forums and conferences and in college classrooms and boardrooms around the world. A dedicated portion of these futurists, some of them transhumanists, make it a point to trouble the waters for anyone who thinks AI technology is unequivocally a great idea. Many of these same futurists, however, make the case for a morally sound and ethically observant path down which AI research and development can continue without significant danger.
This is the path I believe we will travel. Moreover, I hold to the old adage: “I’m a cynic by intellect, but an optimist at heart.” While I enjoy dystopian AI novels and films artistically, I can’t philosophically subscribe to them. I believe that with a little luck, a lot of moral reasoning and some truly extra-human hope, artificial intelligence just might save humanity as we know it.