
“Any two AI designs might be less similar to one another than you are to a petunia.” - Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk.

John Schwenkler was kind enough to point me towards this post by Tyler Cowen at Marginal Revolution in which Cowen discusses his new paper on the Turing Test.

The paper is fun, enlightening, and makes the excellent point that Turing never considered passing his test to be a necessary condition for the presence of intelligence (though I would argue that Turing probably thought it a sufficient one — barring gimmicks like infinitely long lookup-tables).
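
To see why a lookup-table counts as a gimmick, consider a minimal sketch (my own illustration, not anything from Turing's or Cowen's papers): a program that "converses" purely by matching each prompt against a canned table of replies. Extend the table far enough and it might fool a judge for a while, yet nothing resembling thought occurs anywhere inside it.

```python
# A toy lookup-table "interlocutor" (hypothetical replies, for illustration only).
# An actual table covering every possible conversation would be astronomically
# large; hence "infinitely long" in the loose sense above.
CANNED_REPLIES = {
    "Are you a machine?": "What an odd question. Are you?",
    "What is your favorite poem?": "Ode to a Nightingale, though I couldn't tell you why.",
}

def reply(prompt: str) -> str:
    # Every apparent flash of wit is just a key lookup in a big dictionary.
    return CANNED_REPLIES.get(prompt, "Hmm, tell me more.")

print(reply("Are you a machine?"))  # -> "What an odd question. Are you?"
```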

This leads, in a roundabout way, to my astonishment at the hubris of researchers who come up with things like “The Analysis and Design of Benevolent Goal Architectures.” The space of all possible minds, as Yudkowsky points out in the quotation with which this post opens, is inconceivably huge. Indeed, as he goes on to say:

The term “Artificial Intelligence” refers to a vastly greater space of possibilities than does the term “Homo sapiens”. When we talk about “AIs” we are really talking about minds-in-general, or optimization processes in general. Imagine a map of mind design space. In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds-in-general. The entire map floats in a still vaster space, the space of optimization processes.

The naive beliefs that we could reliably predict where in this space an artificial intelligence will end up, or that any such intelligence could pass a Turing Test, stem from our tremendously circumscribed experience with the universe of optimization processes. I’ve heard proposals for “ensuring” that an AI is friendly as naive as feeding a neural network positive reinforcement whenever it perceives a smiling human face. But as Yudkowsky points out:

More than one model can load the same data. Suppose we trained a neural network to recognize smiling human faces and distinguish them from frowning human faces. Would the network classify a tiny picture of a smiley-face into the same attractor as a smiling human face? If an AI “hard-wired” to such code possessed the power - and Hibbard (2001) spoke of superintelligence - would the galaxy end up tiled with tiny molecular pictures of smiley-faces?
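
Yudkowsky’s worry can be reproduced in miniature. Below is a toy sketch of my own (assuming numpy and scikit-learn; it is not Hibbard’s proposal, nor an experiment from either paper): a linear classifier is trained to separate smiling from frowning “faces,” but since the mouth pixels alone carry all the signal, a bare smiley glyph with no eyes and no face at all is scored as a smiling human with near-total confidence.

```python
# A toy version of the smiley-face failure (my construction, not Hibbard's):
# the training data never forces the model to care about anything but the mouth.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def face(smiling: bool) -> np.ndarray:
    """A noisy 5x5 'face': eyes appear in both classes; mouth curvature encodes the label."""
    img = rng.normal(0.0, 0.1, (5, 5))
    img[1, 1] = img[1, 3] = 1.0                    # eyes (identical in both classes)
    if smiling:
        img[3, 1] = img[4, 2] = img[3, 3] = 1.0    # corners raised, center dips: a smile
    else:
        img[4, 1] = img[3, 2] = img[4, 3] = 1.0    # corners drop, center rises: a frown
    return img.ravel()

# 400 labeled training faces, alternating smiles and frowns.
X = np.stack([face(i % 2 == 0) for i in range(400)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(400)])  # 1 = smiling
clf = LogisticRegression().fit(X, y)

# A "tiny molecular picture of a smiley-face": the mouth curve alone, no face at all.
smiley = np.zeros((5, 5))
smiley[3, 1] = smiley[4, 2] = smiley[3, 3] = 1.0
print(clf.predict_proba(smiley.reshape(1, -1))[0, 1])  # near 1.0: "a smiling human"
```

The classifier “loaded” a different model of the data than its designers intended, and nothing in training ever penalized that reading. Scale the stakes up from a toy script to a superintelligence and you have Yudkowsky’s tiled galaxy.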

AI researchers need to give up on the idea — a favorite of movie directors and science fiction writers — of machine intelligence as being “like people but different” as opposed to “unfathomably alien”. Successful artificial intelligence research is as likely to be mankind-destroying as it is to be singularity-ushering. The distance from an amoeba to Einstein is but a speck on the map of total mindspace. Cowen’s and Yudkowsky’s papers are both excellent reads that make this point — the former subtly and the latter emphatically.

P.S. Back in the Dark Ages when Pomocon was hosted at Typepad, James and I had a back-and-forth in which we ended up agreeing that one of the most important questions raised by AI was whether we would be able to preserve a belief in ourselves as incarnated creatures. I can’t find the exchange for the life of me, but I’d be much obliged to anyone who can.
