
Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
Oxford, 352 pages, $29.95

Since cofounding the World Transhumanist Association in 1998, the Swedish-born Oxford philosopher Nick Bostrom has attempted to give a serious academic mien to the movement known as transhumanism. Transhumanists aim to use advanced computing, engineering, and biological manipulation to improve and extend human life. First associated with twentieth-century eugenics, transhumanist ideas are increasingly popular among the people who design the products that affect our lives.

Rapid advances in computing have fueled transhumanist hopes that human and computer intelligence will one day merge. The idea of the “singularity”—the moment at which advanced computers become so powerful that they can keep upgrading themselves absent human help—has gained prominence through the efforts of people like engineer and futurologist Ray Kurzweil, whose 2005 book The Singularity Is Near was a bestseller. Google’s appointment of Kurzweil as its director of engineering in December 2012 marked transhumanism’s entrance into the mainstream.

Bostrom’s latest volume, Superintelligence, describes the advanced forms of artificial intelligence likely to appear in coming decades. When machine superintelligence appears it will, says Bostrom, “greatly exceed . . . the cognitive performance of humans in virtually all domains of interest.” Machines will achieve natural language ability and surpass human perception and intelligence, making the world a very different place. The arrival of superintelligence could be a great boon—and a great threat.

The advantages of superintelligence are clear. A mechanical “substrate” is more durable than a brain, its software is more easily modified, and advanced computers can network with greater speed and ease. All that remains is for a superintelligent machine to undergo “recursive self-improvement” and increase its intelligence without human involvement.

That prospect has its dangers. If one machine gains a decisive strategic advantage, Bostrom warns, it could become a singleton—“a world order in which there is at the global level a single decision-making agency.” What would this superpowerful superintelligence want to do with its advantages?

A superintelligence might wish to take over the world, and not in a way friendly to human values. A weak AI might initially inspire human confidence, but once it becomes superintelligent, Bostrom suggests, “it strikes, forms a singleton, and begins directly to optimize the world according to the criteria implied by its final values.” Even if a superintelligence had agreeable goals, like making human beings smile, it could bring them about in disagreeable ways—“manipulating facial nerves” to make us smile all the time. A superintelligence with advanced manufacturing technology might fulfill its production goals by covering the globe with factories.

These “existential risk” scenarios are dismissed by many computer and materials scientists, but Bostrom proposes them in order to direct human planning well in advance. Unlike “bioconservatives” (as transhumanists call their critics) who simply oppose this future, he encourages a symbiosis of mankind and artificial intelligence. However circumspect his recommendations are, the possibility that such a symbiosis might go horribly wrong should inspire caution. The modern project of controlling nature is founded on the belief that technological innovation improves the human condition and should be encouraged rather than controlled. Yet the very technologies Bostrom and other transhumanists propose to overcome human limits pose catastrophic risks.

Human beings, Bostrom complains, often lack the foresight to follow through on their own strategic advantages. But the limitations he cites—our bounded utility functions, our risk aversion, our “non-maximizing decision rules”—might help us avoid the sort of catastrophe superintelligence portends. Bostrom’s effort is to supply artificial limitation to artificial intelligence by giving AI methods of decision-making that limit catastrophic risk. In contrast to question-answering machines, a “sovereign” intelligence designed to operate autonomously requires careful planning.

Bostrom suggests programming AI with a self-determining ethical system called “coherent extrapolated volition.” This is Eliezer Yudkowsky’s term for teaching AI to do what we would all want under some ideal circumstance. Bostrom admits that superintelligence may end up being very different from our own, so we cannot know exactly what its volition will be. Perhaps, he says, it will be a selfish superintelligence that quickly strays from our best interests. Or perhaps the superintelligence will return us to a pastoral life, whether we like it or not. Anyone who has watched C-SPAN or attended an academic conference will be less than inspired by Bostrom’s call for an international effort to decide upon a value-creation system for a future superintelligence.

Bostrom believes we are headed to a world of omnicompetent artificial intelligence, but he wants to preserve something human as we make the transition. He foresees “the far longer and richer lives that a friendly superintelligence could bestow.” AI properly programmed could provide us “the opportunity to develop into beatific posthuman spirits.”

These posthuman spirits, Bostrom writes in an earlier essay, “Letter from Utopia,” achieve every pleasure of human life and the additional pleasure of understanding “the complex relationships” among all things. Posthuman pleasure captures the peak of human happiness, the moment just before “the softly-falling soot of ordinary life” returns to cover our joy. “Your body is a deathtrap.”

This Gnostic sentiment underlies the crucial ambivalence of transhumanism: the tension between a humanist vision of enhancing humanity and a rationalist goal of creating an all-powerful superintelligence. The human values we might add to a burgeoning superintelligence are the same values that should cause us to question whether its perfect rationality would really be beneficial to us.

Will superintelligence arrive? Leading neuroscientists such as Duke’s Miguel Nicolelis reject the transhumanist assertion that the brain could be modeled by computers. And materials scientists view transhumanist hopes about all-powerful nanotechnology with serious doubts. Right or wrong in their predictions, though, transhumanists shouldn’t be dismissed. They articulate a vision that shapes the way we think about human nature.

Given the possibility of a superintelligent machine, we face an essentially political decision about its invention. Bostrom calmly recommends “arranging matters in such a manner that somebody or something could intervene to set things right if the default trajectory should happen to veer toward dystopia.” If only it were so easy. Once we leave human values for the benefits of artificial intelligence, we’ve already lost the human standard by which we might set it right.

Gladden J. Pappin is a fellow of the Potenziani Program in Constitutional Studies and the Center for Ethics and Culture at the University of Notre Dame.
