
Copyright (c) 1999 First Things 96 (October 1999): 25-31.


For two hundred years materialist philosophers have argued that man is some sort of machine. The claim began with French materialists of the Enlightenment such as Pierre Cabanis, Julien La Mettrie, and Baron d’Holbach (La Mettrie even wrote a book titled Man the Machine). Likewise contemporary materialists like Marvin Minsky, Daniel Dennett, and Patricia Churchland claim that the motions and modifications of matter are sufficient to account for all human experiences, even our interior and cognitive ones. Whereas the Enlightenment philosophes might have thought of humans in terms of gear mechanisms and fluid flows, contemporary materialists think of humans in terms of neurological systems and computational devices. The idiom has been updated, but the underlying impulse to reduce mind to matter remains unchanged.


Materialism remains unsatisfying, however; it seems inadequate to explain our deeper selves. People have aspirations. We long for freedom, immortality, and the beatific vision. We are restless until we find our rest in God. And these longings cannot be satisfied by matter. Our aspirations are, after all, spiritual (the words are even cognates). We need to transcend ourselves to find ourselves, but the motions and modifications of matter offer no opportunity for transcendence. Materialists in times past admitted as much. Freud saw belief in God as wish-fulfillment. Marx saw religion as an opiate. Nietzsche saw Christianity as a pathetic excuse for weakness. Each regarded the hope for transcendence as a delusion.


This hope, however, is not easily excised from the human heart. Even the most hardened materialist shudders at Bertrand Russell’s vision of human destiny: “Man is the product of causes which had no prevision of the end they were achieving” and which predestine him “to extinction in the vast death of the solar system.” The human heart longs for more. And in an age when having it all has become de rigueur, some step forward with a proposal for enjoying the benefits of religion without its ontological burdens. The erstwhile impossible marriage between materialism and spirituality is now consummated, they tell us. Screwtape’s “materialist magician” who combines the skepticism of the materialist with the cosmic consciousness of the mystic is here at last.


For the tough-minded materialists of the past, human aspirations were strictly finite and terminated with the death of the individual. Whatever its inadequacies, that materialism was strong, stark, and courageous. It embraced the void and disdained any yearning for pie in the sky.


Not so for the tender-minded materialists of our age. Though firmly committed to materialism, they are just as firmly committed to not missing out on the benefits ascribed to religious experience. They believe spiritual materialism is now possible, from which it follows that we are spiritual machines. The juxtaposition of spirit and mechanism, which previously would have been regarded as an oxymoron, is now said to constitute a profound insight.


Consider Ray Kurzweil’s recent The Age of Spiritual Machines: When Computers Exceed Human Intelligence (Viking, 1999). Kurzweil is a leader in artificial intelligence, specifically in the field of voice-recognition software. Ten years ago he published the more modestly titled The Age of Intelligent Machines, where he gave the standard strong artificial intelligence position about machine and human intelligence being functionally equivalent. In The Age of Spiritual Machines, however, Kurzweil’s aim is no longer to show that machines are merely capable of human capacities. Rather, his aim is to show that machines are capable of vastly outstripping human capacities and will do so within the next thirty years.


According to The Age of Spiritual Machines, machine intelligence is the next great step in the evolution of intelligence. That man is the most intelligent being at the moment is simply an accident of natural history. Human beings need to be transcended, not by going beyond matter, but by reinstantiating themselves in more efficient forms of matter, to wit, the computer. Kurzweil claims that in the next thirty or so years we shall be able to scan our brains, upload them onto a computer, and thereafter continue our lives as virtual persons running as programs on machines. Since the storage and processing capacities of these virtual persons will far exceed that of the human brain, they will quickly take the lead in all aspects of society. Those humans who refuse to upload themselves will be left in the dust, becoming “pets,” as Kurzweil puts it, of the newly evolved computer intelligences. What’s more, these computer intelligences will be conditionally immortal, depending for their continued existence only on the ability of hardware to run the relevant software.


Although Kurzweil is at pains to shock his readers with the imminence of a computer takeover, he is hardly alone in seeking immortality through computation. Frank Tipler’s The Physics of Immortality (1994) is devoted entirely to this topic. Freeman Dyson has pondered it as well. Alan Turing, one of the founders of modern computation, was fascinated with how the distinction between software and hardware illuminated immortality. Turing’s friend Christopher Morcom had died when they were teenagers. If Morcom’s continued existence depended on his particular embodiment, then he was gone for good. But if he could be instantiated as a computer program (software), Morcom’s particular embodiment (hardware) would be largely irrelevant. Identifying personal identity with computer software thus ensured that people were immortal: hardware could be destroyed, but software resided in a realm of mathematical abstraction and was immune to destruction.


Curiously, the impulse to render us spiritual machines comes not just from materialists, but also from theists. Nancey Murphy, a professor of theology at Fuller Seminary, has surprised the Christian community with the news that humans do not have immortal souls capable of existing apart from the body (see her Whatever Happened to the Soul?, Fortress, 1998). Murphy is less a fan of Kurzweil’s computational reductionism than of Patricia Churchland’s neurological reductionism. Humans, according to Murphy, are purely physical beings, though as a believer she holds they are also creatures made by God. Human immortality, therefore, consists not in humans having some feature that transcends their physical bodies, but in the fact that God will resurrect them in the coming age. For Murphy, human identity coincides with bodily identity.


Murphy realizes that her view of the human person is at odds with much of the Christian tradition. She therefore tries to bridge the gap by claiming that the traditional Christian dualism of body and soul stems from the Greeks and is not properly part of the Hebraic view of human identity as found in the Old Testament. She also recounts the failure of Descartes’ substance dualism to connect body and soul. But to clinch her case she points to recent advances in neuroscience, which to her leave no doubt that we are strictly physical beings. The view of human identity that emerges from her writings is in the end no different from that of the hard-core neuroscientists. To be sure, God is thrown into the mix, religious experience is affirmed, and we have a promise of being resurrected somewhere down the line; but ultimately the only real knowledge about ourselves is what can be extracted from our physical composition and physical circumstances. If Kurzweil spiritualizes the material, then Murphy materializes the spiritual. In both cases we end up with humans as spiritual machines.


A strong case can be made that humans are not machines, period––a case I shall make in due course. Assuming that I am right, it follows that humans are not spiritual machines. Even so, it is interesting to ask what it would mean for a machine to be spiritual. My immediate aim, therefore, is not to refute the claim that humans are spiritual machines, but to show that any spirituality of machines could only be an impoverished spirituality. It’s rather like talking about “free prisoners.” Whatever else freedom might mean here, it doesn’t mean freedom to leave the prison.


By a machine we normally mean an integrated system of parts that function together to accomplish some purpose. To avoid the troubled waters of teleology, let us bracket the question of purpose. In that case we can define a machine as any integrated system of parts whose motions and modifications entirely characterize the system. Implicit in this definition is that all the parts are physical. Consequently a machine is fully determined by the constitution, dynamics, and interrelationships of its physical parts.


This definition is very general. It incorporates artifacts as well as organisms. Because the nineteenth-century Romanticism that separates organisms from machines is still with us, many people shy away from calling organisms machines. But organisms are as much integrated systems of physical parts as are artifacts. Perhaps “integrated physical systems” would be more precise, but “machines” emphasizes the strict absence of extra-material factors from such systems, and it is that absence which is the point of controversy.


Because machines are integrated systems of parts, they are subject to what I call the replacement principle. This means that physically indistinguishable parts of a machine can be exchanged without altering the machine. At the subatomic level, particles in the same quantum state can be exchanged without altering the subatomic system. At the biochemical level, polynucleotides with the same length and sequence specificity can be exchanged without altering the biochemical system. At the organismal level, identical organs can be exchanged without altering the biological system. At the level of human contrivances, identical components can be exchanged without altering the contrivance.


The replacement principle is relevant here because it implies that machines have no substantive history. As Hilaire Belloc put it, “To comprehend the history of a thing is to unlock the mysteries of its present, and more, to disclose the profundities of its future.” But a machine, properly speaking, has no history. What happened to it yesterday is irrelevant; it could easily have been different without altering the machine. If something is a machine, then according to the replacement principle it and a replica of it are identical. Forgeries of the present become masterpieces of the past if the forgeries are good enough. This may not be a problem for art dealers, but it does become a problem when the machines in question are ourselves.


For a machine, all that it is is what it is at this moment. We typically think of our pasts as either remembered or forgotten, and if forgotten then having the possibility of recovery. But machines do not, properly speaking, remember or forget; they only access or fail to access items in storage. What’s more, if they fail to access an item, it’s either because the retrieval mechanism failed or because the item was erased. Consequently, items that represent past occurrences but were later erased are, as far as the machine is concerned, just as though they never happened. Mutatis mutandis, items that represent counterfactual occurrences (i.e., things that never happened) but which are accessible can be, as far as the machine is concerned, just as though they did happen.


The causal history leading up to a machine is strictly an accidental feature of it. Consequently, any dispositions we ascribe to a machine (e.g., goodness, morality, virtue, and, yes, even spirituality) properly pertain only to its current state and possible future ones, but not to its past. In particular, any defect in a machine relates only to its current state and possible future ones. Correcting that defect is a matter of technology: A machine that was a mass-murderer yesterday may become an angel of mercy today provided we can find a suitable readjustment of its parts. Having at some level come to view ourselves as machines, it is no surprise that we so often make use of technologies like behavior modification, psychotropic drugs, cognitive reprogramming, and genetic engineering, and that we are so sanguine about their effects.


A machine is incapable of sustaining what philosophers call substantial forms. A substantial form is a principle of unity that holds a thing together and maintains its identity over time. Machines lack substantial forms. A machine, though having a past, might just as well not have. A machine, though configured in one way, could just as well be reconfigured in other ways. A machine’s defects can be corrected and its virtues improved through technology. Alternatively, new defects can be introduced and old virtues removed through technology. What a machine is now and what it might end up being in the future are entirely open-ended and discontinuous. Despite the buffeting of history, unified things with substantial forms perdure through time. Machines, on the other hand, are the subject of endless tinkering and need bear no resemblance to past incarnations.


In this light consider the various possible meanings of “spiritual” in combination with “machine.” Since a machine is characterized entirely in terms of its physical parts, “spiritual” cannot refer to some nonphysical aspect of the machine. This is true even for Christian theists like Nancey Murphy, who hold that God created humans and will ultimately resurrect them. For since they also hold that humans are purely physical systems (and thus machines in the sense defined here), it follows that nonphysical factors can provide no insight into human operations. Machines don’t care how or by whom they were created. As long as “spiritual” refers to something nonphysical, tacking “spiritual” in front of “machine” tells us nothing substantive about the machine.


If we therefore restrict “spiritual” to some physical aspect of a machine, what might it refer to? Often when we think of someone as spiritual, we think of that person as exhibiting some moral virtue like self-sacrifice, altruism, or courage. But we attribute such virtues only on the basis of past actions; yet past actions belong to history, which is what machines don’t have, except accidentally. Consider, for instance, a possible-worlds scenario featuring an ax murderer who just prior to his death has a cerebral accident that changes his brain state into that of Mother Teresa at her most charitable. The ax murderer now has the brain state of a saint but the past of a sinner. Given the equation of spiritual with moral virtue, and assuming the ax murderer is a machine, is he now a spiritual machine? Suppose Mother Teresa has a cerebral accident just prior to her death that turns her brain state into that of the ax murderer at his most barbaric. Mother Teresa now has the brain state of a sinner but the past of a saint. Assuming Mother Teresa is a machine, is she no longer a spiritual machine?


Such counterfactuals indicate the futility of attributing spirituality to machines on the basis of past actions. A machine that has functioned badly in the past is not a sinner, and therefore not unspiritual; a machine that has functioned well in the past is not a saint, and therefore not spiritual. Machines that have functioned badly in the past need to be fixed. Machines that have functioned well in the past need to be kept in good working order so that they continue to function well. Once a machine has been fixed, it doesn’t matter how badly it functioned in the past. On the other hand, once a machine goes haywire, it doesn’t matter how well it functioned in the past. Within the Christian tradition the spiritual formation of the human person is an arduous journey whose success depends on human perseverance and divine grace. It is not a technological fix for furnishing our brains with the proper mental state.


Attributing spirituality to machines on the basis of future actions is equally problematic. Clearly, we have access to a machine’s future only through its present. Given its present constitution, can we predict what the machine will do in the future? The best we can do is specify certain behavioral propensities. But even the best machines break and malfunction––it is impossible to predict the full range of stresses that a machine may encounter and that may cause it not to work. For every machine there are circumstances sure to lead to its undoing. Calling a machine “spiritual” in reference to its future can therefore refer only to certain propensities of the machine to function in certain ways. But spirituality of this sort is better left to a bookie than to a priest or guru.


Since the future of a machine is accessed through its present, it follows that attributing spirituality to machines properly refers to some present physical aspect of the machine. But what aspect might this be? What is it about the constitution, dynamics, and interrelationships of a machine’s parts that renders it spiritual? What emergent property of a system of physical parts corresponds to spirituality? Suppose humans are machines. Does an ecstatic religious experience, an LSD drug trip, a Maslow peak experience, or a period of silence, prayer, and meditation count as a spiritual experience? I suppose if we are playing a Wittgensteinian language game, this usage is okay. But however we choose to classify these experiences, it remains that machine spirituality is the spirituality of immediate experience. This is of course consistent with much of contemporary spirituality, which places a premium on religious experience and neglects such traditional aspects of spirituality as revelation, tradition, virtue, morality, and above all communion with a nonphysical God who transcends our physical being.


Machine spirituality neglects much that traditionally has been classified under spirituality. From this alone it would follow that machine spirituality is an impoverished form of spirituality. But the problem is worse. Machine spirituality fails on its own terms as a phenomenology of religious experience. The spiritual experience of a machine is necessarily poorer than the spiritual experience of a being that communes with God. The entire emphasis of Judeo-Christian spirituality is on communion with a free, personal, transcendent God. Moreover, communion with God always presupposes a free act by God to commune with us. Freedom here means that God can refuse to commune with us (to, as the Scriptures say, “hide his face”). Thus, within traditional spirituality we are aware of God’s presence because God has freely chosen to make his presence known to us. Truly spiritual persons, saints, experience a constant, habitual awareness of God’s presence.


But how can a machine be aware of God’s presence? Recall that machines are entirely defined by the constitution, dynamics, and interrelationships among their physical parts. It follows that God cannot make his presence known to a machine by acting upon it and thereby changing its state. Indeed, the moment God acts upon a machine to change its state, it no longer properly is a machine, for an aspect of the machine now transcends its physical constituents. If we are machines, then, we cannot say that God reveals his presence to us, and any awareness we have of God’s presence must be explained in some other way. Which means that the awareness must be self-induced, must come from us rather than from God. Machine spirituality is the spirituality of self-realization, not the spirituality of an active God who freely gives himself in self-revelation and thereby transforms the beings with which he is in communion. I therefore maintain that modifying “machine” with “spiritual” entails an impoverished view of spirituality.


The question remains whether humans are machines (with or without the adjective “spiritual” tacked in front). To answer this question, we need first to examine how materialism understands human agency and, more generally, intelligent agency. Although the materialist literature that attempts to account for human agency is vast, the materialist’s options are in fact quite limited. The materialist world is not a mind-first world. Intelligent agency is therefore in no sense prior to or independent of the material world. Intelligent agency is a derivative mode of causation that depends on underlying natural––and therefore unintelligent––causes. Human agency in particular supervenes on underlying natural processes, which in turn usually are identified with brain function.


How well have natural processes been able to account for intelligent agency? Cognitive scientists have achieved nothing like a full reduction. The French Enlightenment thinker Pierre Cabanis remarked: “Les nerfs––voilà tout l’homme” (the nerves––that’s all there is to man). A full reduction of intelligent agency to natural causes would give a complete account of human behavior, intention, and emotion in terms of neural processes. Nothing like this has been achieved. No doubt, neural processes are correlated with behavior, intention, and emotion. But correlation is not causation.


Anger presumably is correlated with certain localized brain excitations. But localized brain excitations hardly explain anger any better than overt behaviors associated with anger, like shouting obscenities. Localized brain excitations may be reliably correlated with anger, but what accounts for one person interpreting a comment as an insult and experiencing anger, and another person interpreting that same comment as a joke and experiencing laughter? A full materialist account of mind needs to understand localized brain excitations in terms of other localized brain excitations. Instead we find localized brain excitations (representing, say, anger) having to be explained in terms of semantic contents (representing, say, insults). But this mixture of brain excitations and semantic contents hardly constitutes a materialist account of mind or intelligent agency.


Lacking a full reduction of intelligent agency to natural processes, cognitive scientists speak of intelligent agency as supervening on natural processes. Supervenience here means a hierarchical relationship between higher order processes (in this case intelligent agency) and lower order processes (in this case natural processes). What supervenience implies is that the relationship between the higher and lower order processes is a one-way street, with the lower determining the higher. To say, for instance, that intelligent agency supervenes on neurophysiology is to say that once all the facts about neurophysiology are in place, all the facts about intelligent agency are determined as well. Supervenience makes no pretense at reductive analysis. It simply asserts that the lower level determines the higher level––how it does it, we don’t know.


Certainly, if we knew that materialism were correct, then supervenience would follow. But materialism itself is at issue. Neuroscience, for instance, is nowhere near underwriting materialism, despite its strident rhetoric. Hard-core neuroscientists refer disparagingly to the ordinary psychology of beliefs, desires, and emotions as “folk psychology.” The implication is that just as “folk medicine” had to give way to “real medicine,” so “folk psychology” will have to give way to a revamped psychology grounded in neuroscience. In place of the psychologist’s couch, where we talk out our beliefs, desires, and emotions, tomorrow’s healers of the soul will ignore such outdated categories and manipulate brain states directly.


At least so the story goes. Actual neuroscience research is by contrast a much more modest affair and fails to support such vaulting ambitions. That should hardly surprise us. The neurophysiology of our brains is incredibly plastic and has proven notoriously difficult to correlate with intentional states. Louis Pasteur, for instance, despite suffering a cerebral accident, continued to enjoy a flourishing scientific career. When his brain was examined after he died, it was discovered that half the brain had atrophied. How does one explain a flourishing intellectual life despite a severely damaged brain if mind and brain coincide?


Or consider a more striking example. The December 12, 1980 issue of Science contained an article by Roger Lewin titled “Is Your Brain Really Necessary?” In the article, Lewin reported a case study by John Lorber, a British neurologist and professor at Sheffield University:


“There’s a young student at this university,” says Lorber, “who has an IQ of 126, has gained a first-class honors degree in mathematics, and is socially completely normal. And yet the boy has virtually no brain.” The student’s physician at the university noticed that the youth had a slightly larger than normal head, and so referred him to Lorber, simply out of interest. “When we did a brain scan on him,” Lorber recalls, “we saw that instead of the normal 4.5-centimeter thickness of brain tissue between the ventricles and the cortical surface, there was just a thin layer of mantle measuring a millimeter or so. His cranium is filled mainly with cerebrospinal fluid.”

Against such anomalies, Cabanis’ dictum, “the nerves––that’s all there is to man,” hardly inspires confidence. Yet as Thomas Kuhn has taught us, a science that is progressing fast and furiously is not about to be derailed by a few anomalies. Neuroscience is a case in point. For all the obstacles it faces in trying to reduce intelligent agency to natural causes, neuroscience persists in the Promethean determination to show that mind does ultimately reduce to neurophysiology. Absent a prior commitment to materialism, this determination will seem misguided. On the other hand, given a prior commitment to materialism, this determination becomes readily understandable.


But not obligatory. Most cognitive scientists do not rest their hopes with neuroscience. Yes, if materialism is correct, then a reduction of intelligent agency to neurophysiology is in principle possible. The sheer difficulty, both experimental and theoretical, of even attempting this reduction, however, leaves many cognitive scientists looking for a more manageable field in which to invest their energies. As it turns out, the field of choice is computer science, and especially its subdiscipline of artificial intelligence. Unlike brains, computers are neat and precise. Also unlike brains, computers and their programs can be copied and mass-produced. Inasmuch as science thrives on replicability and control, computer science offers tremendous practical advantages over neurological research.


Whereas the goal of neuroscience is to reduce intelligent agency to neurophysiology, the goal of artificial intelligence is to reduce intelligent agency to computation. The idea is to develop a computational system that equals or, if we are to believe Ray Kurzweil, exceeds human intelligence. Since computers operate deterministically, reducing intelligent agency to computation would indeed constitute a materialistic reduction of intelligent agency. Cognitive scientists would still have the task of showing in what sense brain function is computational (that is, Marvin Minsky’s dictum “the mind is a computer made of meat” would still need to be verified), but they would be much closer than the neuroscientists are now.


So can computation explain intelligent agency? First off, let’s be clear: no actual computer system has come anywhere near to simulating the full range of capacities we associate with human intelligent agency. Yes, computers can do certain narrowly circumscribed tasks exceedingly well (like playing chess). But require a computer to make a decision that rests on incomplete information and calls for common sense, and the computer will be lost. Artificial intelligence researchers call this the frame problem, the problem of getting a computer to find the appropriate frame of reference for solving a problem.


Consider, for instance, the following story: A man enters a bar. The bartender asks, “What can I do for you?” The man responds, “I’d like a glass of water.” The bartender pulls out a gun and shouts, “Get out of here!” The man says “thank you” and leaves. End of story. What is the appropriate frame of reference? No, this isn’t a story by Franz Kafka. The key item of information needed to make sense of this story is this: The man has the hiccups. By going to the bar to get a drink of water, the man hoped to cure his hiccups. The bartender, however, decided on a more radical cure. By terrifying the man with a gun, the bartender cured the man’s hiccups immediately. Cured of his hiccups, the man was grateful and left. Humans are able to understand the appropriate frame of reference for such stories immediately. Computers, on the other hand, haven’t a clue.


Ah, but just wait. Give an army of clever programmers enough time, funding, and computational power, and just see if they don’t solve the frame problem. Materialists are forever issuing such promissory notes, claiming that a conclusive confirmation of materialism is right around the corner––just give our scientists a bit more time and money. John Polkinghorne refers to this practice as “promissory materialism.” But a promissory note need only be taken seriously if there is good reason to think that it can be paid. And the fact is that the artificial intelligence community has offered no compelling reason for thinking that it will ever solve the frame problem.


In sum, the empirical evidence for a materialist reduction of intelligent agency is wholly lacking. Indeed, the only thing materialist reductions of intelligent agency have until recently had in their favor is Occam’s razor, which has been used to argue that materialist accounts of mind are to be preferred because they are simplest. Yet even Occam’s razor, that great materialist mainstay, is proving small comfort these days. Specifically, recent developments in the theory of intelligent design are providing principled grounds against the reduction of intelligent agency to natural causes (cf. my October 1998 article in First Things titled “Science and Design”).


To this point I have argued that attributing spirituality to machines entails an impoverished view of spirituality, and that the empirical evidence doesn’t confirm that machines can bring about minds. But if not machines, what then? What else could the mind be except an effect of matter? Or, in the words of Nancey Murphy, what else could the soul be except “a functional capacity of a complex physical organism”? It’s not that scientists have traced the workings of the brain and discovered how brain states induce mental states. It’s rather that scientists have run out of places to look, and that matter seems the only possible redoubt for mind.


The only alternative to a materialist conception of mind appears to be a Cartesian dualism of spiritual substances that interact preternaturally with material objects. We are left either with a sleek materialism that derives mind from matter or a bloated dualism that makes mind a substance separate from matter. Given this choice, almost no one these days opts for substance dualism. Substance dualism offers two fundamentally different substances, matter and spirit, with no coherent means of interaction. Hence the popularity of reducing mind to matter.


But the choice between materialism and substance dualism is ill-posed. Both are wedded to the same defective view of matter. Both view matter as primary and law-governed. This renders materialism self-consistent since it allows matter to be conceived mechanistically. On the other hand, it renders substance dualism incoherent since undirected natural laws provide no opening for the activity of spiritual substances. But the problem in either case is that matter ends up taking precedence over concrete things. We do not have knowledge of matter but of things. As Bishop Berkeley rightly taught, matter is always an abstraction. Matter is what remains once we remove all the features peculiar to a thing. Consequently, matter becomes stripped not only of all empirical particularity but also of any substantial form that would otherwise order it and render it intelligible.


The way out of the materialism-dualism dilemma is to refuse the artificial world of matter governed by natural laws and return to the real world of things governed by the principles appropriate to them. These principles may include physical laws, but they need hardly be coextensive with them. Within this richer world of both material and nonmaterial things, physical laws lose their status as absolutes and become subject to principles that may be quite meta-physical (principles like intelligent agency and divine providence).


Within this richer world, the obsession to seek mind in matter quickly dissipates. According to materialism (and here I’m thinking specifically of the scientific materialism that currently dominates Western thought), the world is fundamentally an interacting system of mindless entities (be they particles, strings, fields, or whatever). Accordingly, only atomistic, reductionistic, and mechanistic science is available to study the mind. After things are reduced to their mindless parts, equally mindless principles of association known as natural laws allow them to assemble in ever greater orders of complexity (even the widely touted “laws of self-organization” fall in here). But the world is a much richer place than materialism allows, and there is no reason to saddle ourselves with an artificially sparse ontology.


The great mistake in trying to understand the mind-body problem is to suppose that it is a scientific problem. It is not. It is a problem of ontology (i.e., that branch of philosophy concerned with what exists). If all that exists is matter governed by natural laws, then humans are machines. If all that exists is matter governed by natural laws together with spiritual substances that are incapable of coherently interacting with matter, then, once again, humans are machines. But if matter is merely an abstraction gotten by removing all the features peculiar to unified things, then there is no reason to think that combining it with natural laws (or anything else for that matter) will entail the recovery of things. And in that case, there is no reason to think that humans are machines.


Owen Barfield described the material or the physical as a “dashboard” that mediates things to us. But the mediation is fundamentally incomplete, for the dashboard can only mirror certain aspects of reality, and that imperfectly. Materialism deconstructs the things of this world, and then tries to reconstitute them. Yet it can never put things back together again. This is not for want of cleverness on the part of materialists. It is rather that reality is too rich and the mauling it receives from materialism too severe for even the cleverest materialist to put things right. Materialism itself is the problem, not the brand of materialism one happens to endorse.


Over a hundred years ago William James saw clearly that science would never resolve the mind-body problem. In his Principles of Psychology he argued that neither empirical evidence nor scientific reasoning would settle this question. Instead, he foresaw an interminable debate between competing philosophies, with no side gaining a clear advantage. The following passage captures the state of cognitive science today:


We are thrown back therefore upon the crude evidences of introspection on the one hand, with all its liabilities to deception, and, on the other hand, upon a priori postulates and probabilities. He who loves to balance nice doubts need be in no hurry to decide the point. Like Mephistopheles to Faust, he can say to himself, “dazu hast du noch eine lange Frist” [for that you’ve got a long wait], for from generation to generation the reasons adduced on both sides will grow more voluminous, and the discussion more refined.


William A. Dembski is a fellow of the Center for the Renewal of Science and Culture at the Seattle-based Discovery Institute. His new book, Intelligent Design: The Bridge Between Science and Theology, will be published in November by InterVarsity.