Why Only Us: Language and Evolution
by Robert C. Berwick and Noam Chomsky
MIT Press, 224 pages, $22.95

Perhaps the most sensitive point of contact between religion and science is the issue of human distinctiveness. Christian teaching affirms that there is an “ontological discontinuity” between humans and other animals. Only humans are made in the image of God and have immortal souls endowed with the spiritual powers of rationality and freedom. This does not admit of degrees: One either has an immortal soul or one does not. The discontinuity must therefore be historical as well as ontological. In our lineage there must have been a first creature or set of creatures who were human in the theological sense, but whose immediate progenitors were not.

This seems to fly in the face of evolutionary biology. Evolution occurs gradually, by the accumulation of genetic changes that spread through populations. New species do not appear at a single stroke, in one generation; there was not a “first cat” whose parents were non-cats. There is no contradiction with theology, however. Biological speciation is indeed a gradual process, but in the traditional Christian view, the conferring of a spiritual soul upon human beings is not a biological process at all. It is quite consistent to suppose that a long, slow evolutionary development led to the emergence of an interbreeding population of “anatomically modern humans,” as paleo-archeologists call them, and that when the time was ripe, God chose to raise one, several, or all members of that population to the spiritual level of rationality and freedom.

Many authors, including C. S. Lewis, have proposed such a view of human origins. It is suggested also in the Vatican document “Communion and Stewardship: Human Persons Created in the Image of God,” issued in 2004 with the authorization of then Cardinal Ratzinger:

Catholic theology affirms that the emergence of the first members of the human species (whether as individuals or in populations) represents an event that is not susceptible of a purely natural explanation and which can appropriately be attributed to divine intervention. Acting indirectly through causal chains operating from the beginning of cosmic history, God prepared the way for what Pope John Paul II has called “an ontological leap . . . the moment of transition to the spiritual.”

The idea of human exceptionalism goes against the grain of much of modern thought. Since the time of Copernicus, science has eschewed anthropocentrism in any form. And several major breakthroughs in science have been seen as contributing to what Stephen Jay Gould called “the dethronement of man.” Not surprisingly, therefore, a great deal of recent research has been devoted to finding animal analogues of human mental abilities.

It has been found, for instance, that some species are remarkably clever problem-solvers, including animals as simple and distant from us evolutionarily as crows. The “mirror self-recognition test” suggests that some kinds of animals have some self-awareness. Several species use tools, such as rocks to crack nuts, and some even “make” tools; for example, elephants tear branches from trees to use as flyswatters, and monkeys strip leaves off sticks and use them to draw insects out of holes. In 1999, an article in Nature went so far as to claim that chimpanzees have “cultures,” because different bands of them exhibit numerous differences in “technology and social customs.”

None of this adds up to rationality, of course, which most people would agree emerged only in the genus Homo. When it emerged and how, whether slowly or suddenly, cannot be studied directly, unless one has a time machine. Therefore arguments in this area are highly inferential. One approach is to look for signs of creative activities that require symbolic thought, such as art, body decoration, finely wrought tools of novel materials, “grave goods” (possibly indicating belief in an afterlife), and so on—what one author calls “the five behavioral B’s: blades, beads, burials, bone tool-making, and beauty.” These seem to appear only with Homo sapiens and proliferate rather suddenly (in evolutionary terms) roughly 50,000 years ago. The earliest symbolic artifact found so far is a piece of ochre with a cross-hatch design carved in it, discovered in the Blombos Cave in South Africa and dated to about 80,000 years ago.

A more promising approach to finding the beginnings of human rationality may lie with the study of language. This is paradoxical, perhaps, in that spoken language leaves no fossils or artifacts. One can, however, investigate the neural machinery of language, the genetic basis of that machinery, and the deep underlying structures of language itself. This is the avenue pursued in the remarkable new book Why Only Us by Robert C. Berwick and Noam Chomsky. It is a breathtaking intellectual synthesis. Using an array of sophisticated arguments based on discoveries in linguistics, neuroscience, genetics, computer science, evolutionary theory, and studies of animal communication, they develop a set of hypotheses about the nature and origins of human language, which will (if they hold up) have far-reaching implications. As the title of their book implies, Berwick and Chomsky argue that only human beings have language. It is not that there are other animals possessing it in germ or to a slight degree; no other animals, they insist, possess it at all. The language capacity arose very suddenly, they say, likely in a single member of the species Homo sapiens, as a consequence of a very few fortuitous and unlikely genetic mutations.

It would be impossible to do justice to Berwick and Chomsky’s arguments in a short space; they are numerous, technical, and involved. But the main outlines and a few key points can be sketched.

The starting point is a radical dissimilarity between all animal communication systems and human language. The former are based entirely on “linear order,” whereas the latter is based on hierarchical syntax. In particular, human language involves the capacity to generate, by a recursive procedure, an unlimited number of hierarchically structured sentences. A trivial example of such a sentence is this: “How many cars did you tell your friends that they should tell their friends . . . that they should tell the mechanics to fix?” (The ellipses indicate that the number of levels in the hierarchy can be extended without limit.) Notice that the word “fix” goes with “cars,” rather than with “friends” or “mechanics,” even though “cars” is farther from “fix” in linear distance. The mind recognizes the connection because “cars” and “fix” are at the same level in the sentence’s hierarchy. A more interesting example given in the book is the sentence “Birds that fly instinctively swim.” The adverb “instinctively” can modify either “fly” or “swim.” But there is no ambiguity in the sentence “Instinctively birds that fly swim.” Here “instinctively” must modify “swim,” despite its greater linear distance.
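The point can be made concrete with a rough sketch (my own notation, not the book’s): if hierarchical structure is represented as nested groupings, the ambiguity of the first sentence and the unambiguity of the second fall out of where the adverb can attach, regardless of linear distance.

```python
# Hierarchical structure sketched as nested tuples. What matters for
# interpretation is position in the hierarchy, not linear distance.

# "Birds that fly instinctively swim" admits two structures, hence two readings:
reading_1 = (("birds", ("that", "fly")), ("instinctively", "swim"))   # birds swim instinctively
reading_2 = (("birds", ("that", ("fly", "instinctively"))), "swim")   # birds fly instinctively

# "Instinctively birds that fly swim" admits only one structure: the adverb
# sits at the top of the hierarchy, where it can only modify the main verb
# "swim," even though "fly" is linearly closer to it.
only_reading = ("instinctively", (("birds", ("that", "fly")), "swim"))
```

A machine restricted to linear order would see nearly the same word sequence in both sentences and could not represent the difference.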

Animal communication can be quite intricate. For example, some species of “vocal-learning” songbirds, notably Bengalese finches and European starlings, compose songs that are long and complex. But in every case, animal communication has been found to be based on rules of linear order. Attempts to teach Bengalese finches songs with hierarchical syntax have failed. The same is true of attempts to teach sign language to apes. Though the famous chimp Nim Chimpsky was able to learn 125 signs of American Sign Language, careful study of the data has shown that his “language” was purely associative and never got beyond memorized two-word combinations with no hierarchical structure.

To be sure, linear order plays an important role in human language as well, but only in its externalization, according to Berwick and Chomsky. It obviously must do so there, since words can only be spoken in a temporal sequence. Interestingly, there are similarities between the kinds of linear ordering found in the externalization of human language and in animal communication. For example, the rules governing them can be generated by the same kinds of algorithms (called in the jargon of computer science “finite-state transition networks”). There are affinities also at the level of brain structure and genetics, which suggests an evolutionary connection. The same is not true, however, of hierarchical syntax, which is unique to humans and can only be generated and parsed by more sophisticated algorithms.

The question of how the human brain processes hierarchical syntax has preoccupied linguists for more than six decades. Chomsky and others expended enormous efforts trying to find adequate “generative grammars” for specific human languages and a “universal grammar” underlying them all. But the results were at first of byzantine complexity. Berwick and Chomsky write, “A glance at appendix II of Chomsky’s Syntactic Structures (1957) with its twenty-six highly detailed rules for a fragment of English immediately reveals this intricacy.”

Presumably, the basic computational procedures underlying all human languages must be “hardwired” into our brains and a common genetic inheritance of all human beings. This would explain not only certain invariant features of human language, but also the astonishing capacity of small children to “quickly acquire” linguistic knowledge that “vastly exceeds evidence available to [them].” But all the generative procedures proposed were so complicated that it was clearly impossible for them to have evolved in the first place. Here is where Berwick and Chomsky believe they have made a crucial breakthrough. They have identified an extremely simple procedure, which they call “Merge,” that can generate the hierarchies found in human language. Merge takes two linguistic units, call them X and Y, and combines them into an unordered pair {X,Y}. By successive application, or recursion, hierarchical sentences of unlimited length and complexity can be built up.
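The operation is simple enough to state in one line of code. The sketch below is my own toy rendering, not Berwick and Chomsky’s formalism: it models a syntactic object as either a word or an unordered pair of syntactic objects, using Python’s `frozenset` so that {X,Y} and {Y,X} come out identical.

```python
# A toy model of Merge: combine two syntactic objects into an unordered pair.
# Recursion over Merge yields hierarchical structures of unbounded depth.

def merge(x, y):
    """Form the unordered set {x, y} of two syntactic objects."""
    return frozenset([x, y])

# The pair is unordered: Merge imposes hierarchy, not linear order.
assert merge("eating", "what") == merge("what", "eating")

# Building up the book's example "guess what John is eating" in stages
# (linear word order is imposed only later, at externalization):
inner = merge("what", merge("John", merge("is", merge("eating", "what"))))
clause = merge("guess", inner)
```

Note that the result is a nested set, not a string; which of its elements gets pronounced first, and which redundant copies get dropped, are matters for the externalization step the authors describe next.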

Berwick and Chomsky give an instructive example in the short hierarchical sentence “Guess what John is eating.” The mind builds this in stages. First, the word “what” and the phrase “John is eating what” are Merged to form “what John is eating what.” This is then Merged with the word “guess” to form “Guess what John is eating what.” (A logician might express the meaning of this as “Guess an X, such that John is eating X.”) But to externalize language, the brain has to do a lot of work, so it cuts corners by dropping the redundant second “what” (which is nevertheless still understood) to produce the spoken sentence “Guess what John is eating.”

Notice how the word “what” ended up getting displaced from “eating,” just as in the earlier examples “cars” got displaced from “fix” and “instinctively” from “swim.” The phenomenon of displacement, which is a ubiquitous feature of human languages, seems puzzling, but is naturally explained by the “Merge” hypothesis.

So how and why did the brain acquire this Merge procedure, and with it the possibility of human language? Berwick and Chomsky argue that Merge is simple enough that it could have arisen by a “slight re-wiring of the brain,” which may in turn have required only a few genetic mutations; so it is evolvable. Even so, this is quite a big change for evolution to make, and would have been impossible through a succession of “numerous,” “slight” changes, which is how Darwin himself thought evolution had to work.

But evolutionary theory and evolutionary data have come a long way since Darwin. It is now thought that a number of evolutionary developments may have involved fairly large qualitative jumps, including the first appearance of DNA, of cells with nuclei (eukaryotes), of multicellular organisms, and of sexual reproduction. Such jumps are very rare “one-off” events. So it must have been, argue Berwick and Chomsky, with Merge. It presumably had to happen in a single individual:

Such a change takes place in an individual—and perhaps, if fortunate, in all of [his or her] siblings too, passed on from one or (less likely) both parents. Individuals so endowed would have advantages, and the capacity might proliferate through a small breeding group over generations.

What advantage did Merge and hierarchical language confer? Here Berwick and Chomsky make one of their most important claims: Merge and syntactically hierarchical language were not, to begin with, an instrument of communication at all, but of thought. This makes sense, as it would have been valueless for communication when only one person possessed it. Externalization developed later and more gradually.

This brings us to a deep puzzle, which Berwick and Chomsky are brave enough to point out. The Merge procedure requires something “to work on,” namely the “word-like atomic elements,” which they also call “conceptual atoms of thought,” “lexical items,” “atoms of computation,” “symbols of human language and thought,” and simply “human concepts.” Where did these originate? They write,

The atomic elements pose deep mysteries. The minimal meaning-bearing elements of human languages—word-like, but not words—are radically different from anything known in animal communication systems. Their origin is entirely obscure, posing a very serious problem for the evolution of human cognitive capacities, language in particular. There are insights about these topics tracing back to the pre-Socratics, developed further by prominent philosophers of the early modern scientific revolution and the Enlightenment . . . though they remain insufficiently explored.

Is there an ontological discontinuity between humans and other animals? Berwick and Chomsky arrive, on purely empirical grounds, at the conclusion that there is. All animals communicate, but only humans are rational; and for Berwick and Chomsky, human language is primarily an instrument of rationality. They present powerful arguments that this astonishing instrument arose just once and quite suddenly in evolutionary history—indeed, most likely in just one member of Homo sapiens, or at most a few. At the biological level, this involved a sudden upgrade of our mental machinery, and Berwick and Chomsky’s theories of this are both more plausible than competing theories and more consistent with data from a variety of disciplines. But they recognize that more than machinery is involved. The basic contents and meanings, the deep-lying elements of human thought—“word-like but not words”—were somehow there, mysteriously, in the beginning.

Stephen M. Barr is professor of physics at the University of Delaware and author of The Believing Scientist: Essays on Science and Religion.
