A venerable rule of predication is that certain words—or, at least, certain homonymous terms—admit of univocal, equivocal, and analogical acceptations. That is to say, there are times when a term has precisely the same meaning in two or more discrete instances of its use: say, “blue” as applied to two different visible objects situated in the same range of the chromatic spectrum. Then there are times when a single term has altogether different meanings: say, perhaps, “blue” as applied both to the color of an object and also to a private mood (assuming there is no actual affective connection there). And then there are times when a term retains some sort of proportional relation of meaning—though not a synonymy—across discrete usages: say, “blues” as describing both a subjective mood and also the objective structure of a particular chord progression (assuming there is an affective connection there, whether neurological or cultural). And, of course, there are times when it is not immediately clear precisely which of these relations obtains.

For instance, a word in great currency these days—in the “hard” sciences, philosophy, computational theory, and so on—is “information,” and many of its uses are remarkably nebulous in meaning. But at least two senses of the word can easily be isolated: sometimes, “information” means simply “data,” objective facts given “out there,” things, processes, brute events; at other times, however, it means the cognitive contents of subjective knowledge “in here” about things, processes, events, and so on. And it is not entirely clear whether these two uses should be viewed as analogous or merely equivocal. Perhaps the former: mental information being the word’s primary reference and empirical “information” its analogical extension. Or perhaps the latter: given the total qualitative difference between the active, directed mental “intentionality” exhibited in conscious cognition (that is to say, the “aboutness” of thought and perception, the “meaningfulness” of reality as apprehended under finite phenomenal, conceptual, and semiotic aspects) and the passive, undirected indeterminacy of any reality that might exist independent of mental acts. But, analogous or equivocal, it is beyond question that the two usages are not univocal. Objective “information” and subjective “information” are distinct realities whose nature, structure, and logic are radically dissimilar. Yet, curiously, a vast quantity of contemporary philosophy of mind and cognitive scientific theory depends almost entirely on a failure to observe this elementary and obvious distinction.
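(It may help to fix the objective sense with some precision. The canonical formalization of “information” as mere data is Claude Shannon’s entropy, which measures nothing but the statistical improbability of a signal; Shannon himself stipulated that the “semantic aspects” of a message are irrelevant to the engineering problem. In the usual notation,

$$H(X) = -\sum_{x} p(x)\,\log_2 p(x),$$

where p(x) is the probability of the symbol x. Nothing in the measure makes any reference to meaning, intention, or a mind; it is “information” in the first sense only.)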

I was reminded of this with particular poignancy a few days ago, when I read that the Japan Science and Technology Agency had awarded a grant of $3.4 million to a group of Japanese and American researchers in “evolutionary science and technology” for a project to be conducted at Monash University, the ultimate aim of which is to determine—based on models provided by Integrated Information Theory (IIT)—whether it is possible to create “artificial consciousness.” It seemed to me, to be frank, a rather exorbitant amount of money to be squandered on a simple category error occasioned by a beguiling homonymy. Then again, the business of academic research endowments, especially in fields as conceptually confused as artificial intelligence or cognitive science, really is all about exploiting the credulity of wealthy foundations and corporations and private donors (sometimes with devious cynicism, sometimes in deluded innocence). If, however, the JST would like to save some of that money for other equally plausible endeavors—communicating with extraterrestrials by way of séances, discovering a mathematical theorem that can spontaneously generate new universes, proving that the color blue has opinions, or what have you—I would be quite happy, on a very modest retainer, to provide them the results of the Monash project in advance.

IIT, I should note, is quite in vogue these days. Invented by the physician and accidental mystic Giulio Tononi, it has won the support of a great many prominent scientists. Max Tegmark has become one of its most robust promoters. It has even made a convert out of the famed neuroscientist Christof Koch, and persuaded him to abandon his long arduous quest for the origins of consciousness through the ganglial forests of the cerebral cortex. At the heart of IIT is the radical conjecture that “information” and hence “consciousness” is ubiquitous and that, therefore, consciousness can be measured quantitatively by the variety and degree of information that exists in particular integrated systems of interdependent functions. The name that Tononi has given the quantum of consciousness he claims can be measured is “Phi,” and he has proposed ways of objectively determining how much “Phi” any composite system possesses. Moreover, it is his argument that any truly integrated system—a brain, a computer, the Internet, but also a barometer, a photodiode, a geranium, a sheet of paper—has some calculable Phi value; consciousness is qualitatively the same in all things, but in terms of intensity and capacity it increases along with the complexity, “synergy,” and ordering of cognitive information in organized wholes, and along with the richness of the information it integrates in “holographic” or “crystallized” conceptual structures. This means also, perhaps, that many integrated systems are modular or concrescent totalities, and hence a mind (for instance) might comprise smaller integral unities that are, so to speak, smaller minds.
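(For readers curious about the formal apparatus: in Tononi’s published formulations, which have varied considerably from one version of the theory to the next, Phi is, roughly, the “effective information” a system generates over and above what is generated by its parts taken severally, minimized across all the ways of partitioning the system, the so-called minimum information partition. Schematically, and only schematically,

$$\Phi(X) = \operatorname{ei}\bigl(X \to P^{\mathrm{MIP}}\bigr), \qquad P^{\mathrm{MIP}} = \arg\min_{P} \frac{\operatorname{ei}(X \to P)}{K(P)},$$

where ei denotes effective information across a candidate partition P and K(P) is a normalization factor. The notation is my own shorthand rather than Tononi’s, and the precise definitions of both terms have been revised more than once.)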

Actually, the various indices of Phi that Tononi and others have advanced—what constitutes a particular integrated unit, what the principle of unity is, what the mathematical description of this supposedly objective quantum is, what the calculus of information values or integration is, and so forth—are uniformly gibberish, and the quasi-mathematical technical jargon that has sprung up around IIT, when subjected to any serious analysis of premises and applications, soon dissolves into a gauzy mist of empty assertions. But that is not the true scandal. What is most amusing about IIT (and the JST funders ought to have noticed this before reaching for the checkbook) is that it is not a theory of consciousness at all. Rather, it openly presumes the metaphysical fantasy of material “panpsychism,” the notion that consciousness is a universal “property,” existing from the subatomic level upward, not produced by matter but rather constituting the subjective side of every material reality. Consciousness does not emerge from matter, because everything is conscious already; what is emergent is merely the complexity of systems of interaction.

I have to say, while there are various idealist forms of panpsychism that make sense to me, my every encounter with the materialist version of the idea makes me feel rather as if I have stepped through the looking-glass and am listening to Kant’s second and third Critiques being expounded by Humpty Dumpty. For one thing, the basic paradox of the presence of mind within a mechanistic universe remains unchanged. It has simply been relocated to a more fundamental level: One and the same atom possesses two utterly contradictory aspects, the nomological and the pathological (to misuse the Kantian terms)—the one objective, deterministic, mechanical, and empirical, the other subjective, intentional, teleological, and transcendental. And the interaction between them is no less mysterious for having been atomized.

More to the point, the very notion that consciousness can be conceived as a cumulative quantity, whose smaller units can combine into larger unified aggregates, is self-evidently absurd. Consciousness is not, contrary to what Tononi and Koch claim, a “property,” at least not in the way that, say, mass, color, or any other predicable attribute is. It is an act, one with a specific phenomenological shape: an indivisible apperceptive unity and intentionality, a logically prior and transcendental simplicity that organizes the many into one, a subjective vantage formally constitutive of the totality it perceives. It is not the effect of material integration, but the power that integrates experience from an irreducibly simple perspective. And it is bizarre that Koch, having recognized that unified intentional consciousness is irreducible to neurological complexity, fails to see that it is also irreducible to minute particles of “information.” But he has failed to see it. Somehow he has confused the objective and subjective senses of the word “information,” collapsed an equivocity (or analogy) into a univocity, and conflated the two poles of an indissoluble antinomy. Yet it is that intractable difference of meaning that, in a sense, is the problem of consciousness. Simply said, no information theory can also be a theory of consciousness.

In any event, there really is no question where the Monash project will end. Consciousness cannot be quantified by any measure, Phi included. Intelligence, neurological intricacy, states of responsiveness or awareness—yes; consciousness—never. So, inevitably, the researchers at Monash will produce just another large body of comparative cognitive studies—humans are better than cephalopods at crossword puzzles, no photodiodes can play chess but many Belgians can . . . that sort of thing—all meretriciously tricked out in the useless patois of Integrated Information Theory. After a decade or so, the laughter will begin; heads will fondly shake at the naivete of the project, at least among the few who have any recollection of it; and computers will remain blissfully unconscious of the whole silly episode. But by then, of course, the money will all have been spent long ago—which, I suspect, is the real point of this study.