
Jeffrey Satinover has written an audacious book. He believes that he has found, in two great breakthrough ideas, the keys to understanding the human mind. He is not the originator of these ideas, which are the result of the work of many researchers in neuroscience, computer science, and physics; he is their herald. Although a psychoanalyst by profession, he writes with an impressive level of knowledge and sophistication about all of these highly technical disciplines.

The first of the two key ideas of The Quantum Brain is “bottom-up computation.” In the book’s first half, Satinover guides the reader through the penetralia of that field: neural networks, spin glasses, Hopfield nets, cellular automata, Belousov-Zhabotinsky reactions, and much, much more. The nontechnically inclined should be prepared for some hard spelunking; but these are fascinating ideas, and worth the effort.
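For readers curious what “bottom-up computation” looks like in practice, here is a minimal sketch of my own (not an example drawn from the book): an elementary cellular automaton, in which each cell follows a purely local rule and complex global patterns nonetheless emerge. The particular rule number, grid size, and seed are merely illustrative choices.

```python
# A minimal illustration of "bottom-up computation": an elementary cellular
# automaton. Each cell updates from a purely local rule, yet complex global
# patterns emerge. Rule 110 and the grid size are illustrative choices.

RULE = 110  # Wolfram's rule 110, known for complex emergent behavior

def step(cells):
    """Apply the local update rule to every cell simultaneously."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        new.append((RULE >> neighborhood) & 1)              # look up the rule bit
    return new

if __name__ == "__main__":
    cells = [0] * 40
    cells[20] = 1  # a single "on" cell as the seed
    for _ in range(20):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
```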

Satinover believes that in this part of the book he has succeeded in exposing the general principles on which the human brain is built. He is convinced that artificial thinking machines based on these principles are just around the corner and will far outstrip human mental powers. He likes to think, he says, that these artificial superbrains will “in their kindness . . . be willing to keep us [around], flawed racehorses turned out to pasture.” With this view, of course, goes the idea that we ourselves are machines. “Looking back at the territory we’ve covered,” he says near the end of the book’s first part, “we . . . arrive at the following conclusion: Man is a machine.” And yet he finds this idea disquieting. He doesn’t mind being a machine, even an obsolete machine, but he does not want to be just a machine. He wants to have some semblance of free will. He wants at least to keep that shred of dignity. This is where the second key idea of the book, “quantum indeterminacy,” comes into play. Quantum theory, Satinover suggests, will rescue us from mere mechanism.

As he is aware, the problem of mechanism did not begin with the modern computer. It began with Newton. The great problem that Newton left philosophers was to reconcile human freedom with the rigid determinism of physical law. With the advent of quantum theory, which showed that the laws of physics predict only the probabilities of future events, many philosophers and physicists have come to think that such a reconciliation might be possible. I am inclined to believe that they are right.

However, there are several difficulties with the idea that quantum indeterminacy provides an opening for free will. The most formidable is the fact that quantum indeterminacy usually only plays a significant role in events at the atomic scale, whereas neurons, generally thought to be the basic building blocks of the brain, are a great deal larger. Consequently, neurons should behave in a “classical” or Newtonian manner. The quantum indeterminacy should get averaged away, washed out. It is this well-known argument that Satinover attempts to overcome in the second half of his book. He uses various facts about the internal structure of neurons and chaos theory to argue that quantum indeterminacy on the subcellular level can get amplified, rather than washed out, at the macroscopic level.
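To see in miniature what such amplification would require, consider a toy chaotic system, the logistic map. The sketch below is my own illustration, not Satinover’s model of the neuron; the starting values and the size of the perturbation are arbitrary. It shows only that a chaotic dynamics can blow a difference of one part in a trillion up to an order-one difference within a few dozen steps, rather than averaging it away.

```python
# A toy illustration (not Satinover's model) of chaotic amplification: two
# logistic-map trajectories that start 1e-12 apart diverge to order-one
# differences within roughly forty iterations.

def logistic(x, r=4.0):
    """One step of the logistic map, x -> r*x*(1-x); r = 4 is fully chaotic."""
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12  # identical except for a vanishingly small nudge
for step in range(60):
    x, y = logistic(x), logistic(y)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |difference| = {abs(x - y):.3e}")
```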

Quantum indeterminacy, of course, can have noticeable effects at the macroscopic level; otherwise, we couldn’t know that it exists. The reason that physicists can see quantum indeterminacy in action is that they are able to build special devices, such as Geiger counters, that amplify the effects of quantum indeterminacy at atomic scales to produce results that they can observe with their (macroscopic) sense organs: audible clicks, for example. Satinover uses this fact to construct a very ingenious and highly original argument that quantum indeterminacy must play a role in the workings of the human brain.

His argument goes like this. The only examples in nature (he thinks) of “quantum amplification” to the macroscopic level occur in laboratory situations, which are after all contrived and brought about by human brains. Therefore, one may say that human brains do quantum amplification. Or, to put it another way, human brains are quantum amplifiers. Q.E.D. This is such an ingenious argument that it is a pity to have to point out that it is totally fallacious. A brain may make use of telephones, say, without being a telephone. A brain may build devices that employ the principle of the lever, or the principle of the laser, without its own internal workings depending on those principles. So, too, with quantum indeterminacy.

It is not clear why Satinover feels that he has to resort to such legerdemain, when much more solid arguments are available. Why not simply start with the evident fact of free will? Since free will requires a breakdown of physical determinism, and quantum theory gives us just such a breakdown, one would seem to have grounds enough for suspecting a role for quantum theory in explaining the human mind. The reason Satinover does not argue in this fashion seems to be that he is not quite sure that we do have free will. The only freedom he is sure exists is the “freedom” of quantum systems. In Satinover’s view atoms are definitely “free,” but it remains to be seen whether we are. (His understanding of freedom is evidently not the traditional one, according to which only rational beings can be free.) He believes that if people succeed in building artificial “quantum computers” that can think, it would “open the possibility that nature has likewise learned how to employ quantum effects in our own evolution . . . [and that we] no longer need consider ourselves mere machines but rather creatures who . . . embody . . . some of the freedom that quantum systems alone appear to possess.”

In fact, it appears that Satinover is not quite sure that human beings are even conscious: “Unlike lightning, say, consciousness cannot even be shown to exist.” How strange. Satinover is a psychiatrist, so I presume that he believes in the existence of the Unconscious. It is the existence of the Conscious he appears to doubt. I suppose that (reversing the famous “Turing test”) he will come to believe in his own consciousness (and free will) only when some computer of the future interrogates him and certifies them to be genuine.

Will we ever build machines that are smarter than we, as Satinover believes? The fundamental question is not one of technology or resources. It has to do with whether any machine that was only a machine, quantum or otherwise, could ever really have “mind” in the sense that we do. As we have seen, Satinover “arrive[s] at the . . . conclusion” that man is a machine. However, he does not arrive at it by way of any argument. Nowhere in his book is a single word of argumentation to be found for or against the proposition that man is a machine. There is plenty of discussion about the brain. But none of it touches the central question: Is the human mind explicable just in terms of the brain?

But even leaving aside this all-important issue, the brain builders of the twenty-first century must still face another huge question: What kind of machine is the human brain? For decades the prevailing view in cognitive science and artificial intelligence was that our brains work somewhat in the manner of the conventional Turing type of computer, which uses programs. Satinover, however, rejects this “top-down” model of the brain in favor of a “bottom-up” model based on “neural networks.” A neural network is a system of nodes and connections that begins in a quite undifferentiated state (a tabula rasa) but which “learns by doing” its appointed task. Through a training process involving trial and error, in which those “neural” connections involved in successful trials are strengthened while those involved in failures are weakened, the network evolves, adapts, and takes on definite shape. Since no one “programs” the network, by the time it is fully trained no one can really tell how it works.
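The flavor of such trial-and-error training can be conveyed by a toy example. The sketch below is my own, not drawn from the book: a single artificial “neuron” whose connection weights are strengthened or weakened after each trial until it reliably computes a simple logical function. The task (logical AND) and the learning rate are illustrative choices, not anything Satinover specifies.

```python
# A minimal sketch of trial-and-error training: connection weights are
# strengthened or weakened according to each trial's error, with no explicit
# "program" for the task. The task (logical AND) and rate are illustrative.

import random

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]  # the AND function
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias    = random.uniform(-0.5, 0.5)
rate    = 0.1

for epoch in range(50):
    for (a, b), target in zip(inputs, targets):
        output = 1 if (weights[0] * a + weights[1] * b + bias) > 0 else 0
        error = target - output
        # Strengthen or weaken each connection according to its role in the error.
        weights[0] += rate * error * a
        weights[1] += rate * error * b
        bias       += rate * error

print("learned weights:", weights, "bias:", bias)
for (a, b), target in zip(inputs, targets):
    output = 1 if (weights[0] * a + weights[1] * b + bias) > 0 else 0
    print(f"{a} AND {b} -> {output} (target {target})")
```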

Satinover likes the idea that the human brain spontaneously self-organizes like this, and that intelligence (and will, if we have it) wells up from below, from the undirected local interactions of matter. It has a pleasant Darwinian feel. Moreover, since no one knows how the human brain does most of what it does, such bottom-up evolutionary methods seem like the best bet for artificially making human-like or even superhuman brains. However, there is more to the human mind than Satinover seems to appreciate.

It is true that a lot of the things we pick up from experience, such as riding a bicycle or recognizing someone’s voice, we seem to learn in the manner of a neural network. In the end, we can do the thing, but can’t explain how, and certainly can’t boil it down to a set of instructions or a “program.” But there are also a lot of things that we do according to quite definite procedures, rules, and methods that are easily formulated, such as logical reasoning, putting together grammatical sentences, doing long division, filling out tax forms, and playing games of chess. At these things computer programs are vastly superior to neural networks.
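Long division is a convenient example of the contrast, since the entire procedure can be written out as an explicit, rule-following program in a few lines. The sketch below is my own illustration of the schoolbook digit-by-digit method:

```python
# Long division as an easily formulated procedure: the schoolbook
# digit-by-digit algorithm, stated as explicit rules. (Skills like voice
# recognition resist this kind of rule-by-rule statement.)

def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Return (quotient, remainder) by processing the dividend digit by digit."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down the next digit
        quotient_digits.append(remainder // divisor)
        remainder = remainder % divisor           # carry the remainder forward
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(long_division(7391, 12))  # (615, 11), since 12 * 615 + 11 == 7391
```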

Of course, viewing the human mind as merely a conventional Turing kind of computer is also grossly inadequate. There are many things the human mind can do that computer programs fail miserably at. An important example is the human capacity for global or “abductive” thinking. Humans can understand context, relevance, and significance; in other words, we can see the big picture. In his recent book The Mind Doesn’t Work That Way, the eminent cognitive scientist Jerry Fodor observes that “the failure of artificial intelligence to produce successful simulations of routine commonsense cognitive competences is notorious, not to say scandalous. We still don’t have the fabled machine that can make breakfast without burning down the house.” He notes that many have suggested neural networks as the answer, but argues convincingly that they are not. “[Neural networks] notoriously can’t do what Turing architectures can, namely, provide a plausible account of the causal consequences of logical form. But they also can’t do what Turing architectures can’t, namely provide a plausible account of abductive inference. It must be the sheer magnitude of their incompetence that makes them so popular.”

What then is the answer to the abductive thinking riddle? In Fodor’s view, “Nobody has the slightest idea.” “Personally,” he writes, “I am not inclined to celebrate how much we have learned about how our minds work . . . . The current situation in cognitive science is light years from being satisfactory. Perhaps somebody will fix it eventually; but not, I should think, in the foreseeable future, and not with the tools we currently have in hand.”

There is a great deal in Dr. Satinover’s book that I think wrong, implausible, or unpersuasive. However, he is dealing with deep and perhaps humanly insoluble puzzles. That he does not have all the answers, and that many of the answers he does have are wrong, is only to be expected. And yet on one issue of great importance I think that he is right and most of the experts wrong. I, too, believe that quantum theory has something profound to tell us about the human mind and the problem of free will. So also thought Hermann Weyl, one of the great mathematicians and physicists of the twentieth century, who wrote in 1931:

We may say that there exists a world, causally closed and determined by precise laws, but . . . the new insight which modern [quantum] physics affords opens several ways of reconciling personal freedom with natural law. It would be premature, however, to propose a definite and complete solution of this problem. One of the great differences between the scientist and the impatient philosopher is that the scientist bides his time. We must await the further development of science, perhaps for centuries, perhaps for thousands of years, before we can design a true and detailed picture of the interwoven texture of Matter, Life, and Soul. But the old classical determinism of Hobbes and Laplace need not oppress us longer.

Stephen M. Barr is a theoretical particle physicist at the Bartol Research Institute of the University of Delaware.