You cannot call up any wilder vision than a city in which men ask themselves whether they have any selves.

—G. K. Chesterton, Orthodoxy

Chesterton was wrong, for that other vision stood in the wings. But, writing in 1908, how could he have predicted that parents would one day pay minds so modest as these for the opportunity to teach their children that they might not exist, that the answer to the question “Are we?” is not necessarily in the affirmative? This academic fashion of self-cancellation has been neglected by those print media that decode professional philosophy. The hesitation of their publishers is understandable; tales of the evaporating self can be tediously technical. Still, if it is prudent for the journals to ask “who cares?”, the exclamatory answer must be that “whos care!”

Every idea of a real self entails authentic goods that humans do not invent but might achieve; thus, our capacity for such real goods must first be identified and assessed. There are, however, two distinct forms of good that could be our object. One is a moral or spiritual perfection to be realized, if at all, in the self. I call this the “First Good,” or, ultimately, the goodness of the “person.” The other (the “Second Good”) consists in correct deeds and true beliefs plus their perceptible effects. This distinction between the two goods ignites a critical disagreement among those who believe in the real self. The question disputed is whether humans can achieve their own First Good simply by seeking the Second Good, or whether, to the contrary, one must actually succeed in that effort. Do honest mistakes about truth and correct conduct prevent this goodness that is peculiar to the self?

My own view is that the human self achieves his or her perfection by making the best of whatever intellectual and other resources one commands (and these I label the “persona”). Mistakes (of conduct or belief) by definition injure the Second Good, but, if truly unavoidable, they do not prejudice the First Good. The honest bungler becomes good though he mistakes the truth. Embracing this premise, I define the self as the capacity to achieve that state of goodness (or to reject it by refusing to seek the Second Good).

Conversations on this subject go nowhere without a set of common terms. I see no way to make the going easy. Language itself has contributed to the difficulty. The genius who invented personal pronouns assumed the real, distinct, and continuing identities both of himself and all those around him (or her, obviously). These intellectual commitments are frozen in our vocabularies to the special inconvenience of the modern philosopher who yearns to exterminate the self. If he could, he would avoid such words as who, I, and we—names for those stubbornly enduring agents whose existence he rejects. He would convince us that, when we speak of human identity, this must be understood metaphorically. But who, then, is the “we” that utters this distinctly nonmetaphorical claim? Language here is not up to the task, and in desperation the philosopher ascribes this insight of a self-less race to some “perspectiveless perspective” or “view from nowhere”—a view, we are to understand, of nobody at all.

I attempt no such subtleties; a primary justification for my effort here is the crying need for a simpler statement of the whole problem—echoing John Henry Newman, for an “essay in aid of the grammar of the self.” Any hope of affirming (or even of denying) the reality of the “self” requires some shared conception that bears this name, some labeled idea of what the real thing would be (if, indeed, it did exist). This need for a common language is illustrated in William James, who contributed handsomely to the current babel. A substantial portion of Principles of Psychology was devoted to “The Consciousness of the Self”; but James attached this term “self” to diverse forms of consciousness, each resting upon some discrete source such as experience, sense, emotion, imagination, or belief. He specifically distinguished material, social, and spiritual selves plus the “pure ego” and “the Empirical Self or Me.” Each was governed by distinctive premises and epistemology, rendering the whole a collection of incompatibles. For all the acuity of James’ particular observations, he left “self” a term both parochial and ambiguous, hence almost useless.

This balkanization of meaning continues as a professional habit of philosophers, and not only among those who dismiss the self. Even such elevating—and affirmative—works as Charles Taylor’s Sources of the Self and Paul Ricoeur’s Oneself as Another leave its definition unsettled. Though illuminating, they seem less histories of a stable and intelligible idea than etymologies of an inconstant but imperishable word—a verbal phoenix that re-arises only to migrate, becoming the label for yet another irreconcilable theory of human individuation.

But if it is so fickle a word, why should I continue to employ it? What makes it the ideal agent of clarity? The Christian might (with Maritain) prefer “person” or even “soul.” The old Greeks might lobby for “form”; and at least some positivists will complain if we should de-emphasize “identity” and “individual.” These criticisms are plausible and, in their way, reassuring. All confirm the need for some concept with an accepted common name, some notion whose reality can be debated, then affirmed or denied.

For two reasons, I favor the word “self” for this purpose. One is its prominence in contemporary accounts of the question; as I write, I confront the spines of a dozen recent books bearing “self” in their titles. The other is more nearly substantive, if thoroughly tendentious. In a comfortable way, “self” fits into my own version of human moral architecture without disqualifying all these other favorite terms. Understood specifically as what I shall call “the capacity for responsibility,” the word “self” leaves distinctive and important uses for terms such as person, persona, soul, identity, and individual. My hope is that most of these can be reconciled in a rather surprising economy of language. Though parsimony does not guarantee truth, it serves coherence, providing a sort of Esperanto without resorting to neologisms.

Theories of the real self (or of its absence) can be usefully organized in three categories. Set #1 consists of positivist theories of the sort that I have already caricatured. All share the premise that I will now label “Radical Contingency” (or Rad-con); that is, they reduce the human reality to the ephemera of empirical data and “perceptions.” Thus on occasion I will also refer to these collectively as the “nothing but” theories of the self. Those in Set #2 I will call “Traditional,” dividing them further into two sub-schools that favor, respectively, the “Intellectual” and the “Responsible” self. What the schools of Tradition agree upon is that the self involves both reason and will; what divides them is the emphasis upon the one or the other. Bit by bit I shall try to justify a stress upon will as the better choice. Set #3, my third and final batch of theories, consists of the peculiar blend of reason and will that was confected by Immanuel Kant; I reject it but use it in what follows.

Rad-con philosophers typically claim to be “empirical,” reducing the intelligible world to detectable phenomena and the subjective experience of the individual human specimen. Material reality is all we know, and it is universally unstable and contingent. This collection of minds is as antique as Heraclitus and as trendy as Peter Singer. Rad-con doctrine concedes the reality of a human species but only as it can be located in individuated clusters of genetic material—blobs of vulnerable stuff that is ever changing and passing away. Indeed, why the Rad-con prefers the idiom of “selves” to that of “blobs” is unclear. The arbitrariness of the choice is betrayed in the occasional observation that the self is “nothing but” this mutating mass of cells that sees and feels and is seen and felt. Perhaps “self” bears an historic halo of respectability, hence a certain marketing advantage wanting in the term “blob.”

Rad-con theory does not altogether deny subjective phenomena, but it reduces them to messages from and to our own flesh. To a point, Traditional critics go along with this; they can concede that our sensations and images are no guarantee of a corresponding reality “out there.” Nevertheless, many traditionalists insist that the phenomenon we call memory is a clear exception to this flux, being by its nature more than a neurological event. Memory, they say, is the independent signature of a real self—of “somebody home.” To recall my own past is a reliable act of self-recognition. (Some, with Ricoeur, call it the narrative self.) Crudely put, I remember, therefore I am. Rad-cons typically reject this exception; even a memory in fine working order is to be understood as an event like any other within the phenomenal individual. A blob may inform me that “I’m the guy who did . . . X”; and, like data from a computer, this report may or may not convey accurate historical information. What it can never display is a somebody in charge of the computer—either the blob’s or my own.

Further, continues the Rad-con, even if one accepted memory as a reliable intimation of an enduring something discrete from the neuro-blob, the path to the self would not be easy. Among the “nothing buts” it is a parlor game to pose hypotheticals ranging from minds struck with amnesia to the science-fictional transfer of memory from one blob to another. Susie’s body has been surgically given your brain with all its recollections; now tell me who you are—or whether you are at all. This is good fun, and—if I am right—it comes at no ultimate expense to the self, for, on the premises of the Rad-con, it is strictly overkill; if, by definition, our mental life consists exclusively of neurologically determined events, then those events called memory could not prove the presence of a distinctive self. On these terms, the self could conceivably remain “nothing but” an arbitrary label for those accretions of flesh that bear proper names or social security numbers. There is, in any case, no need for despair. Even were the narrative memory insufficient to establish the self, it could also be unnecessary.

The Traditional versions of the self constitute my second basket of theories. Some are as old as Socrates. All recognize in each human some element that is not subject to the otherwise universal flux; it is stable, enduring, and important, at least during the brief season of our rationality. But, as noted, this band of “realists” is itself divided into two distinctive schools or sub-traditions. Each defines this enduring element as a thing of reason and freedom, but the one stresses reason and the other free will. As we shall see, this difference in emphasis has consequences.

Socrates and Plato taught that in the architecture of mankind there is something that eludes the flux of the empirical. It is, indeed, this uniquely human element that has the most convincing claim to be real, the rest being appearance or “mere perception.” This enduring human core consists essentially in our gift of rationality. Ideas are the only true reality, and reason empowers each of us to appropriate at least some of them. Aristotle dissented in part, restoring the dignity of the empirical; nevertheless, he agreed with Plato that it is ideas—as the form of all things—that address our reason. And the good life is one that emphasizes their contemplation and appropriation; together with a somewhat ambiguous freedom, it is reason that distinguishes the human core, making it real, stable, and potentially sovereign over our empirical aspects.

Seven centuries later such Greek ideas, especially those of Plato, came to inform the outlook of Augustine (though the word “self” was still far off). Invoking a scriptural image of man created as the image and likeness of God, he redeployed reason and will, securing—at least for believers—an enduring human nucleus. As Charles Taylor recounts, in the process Augustine had executed an intellectual U-turn. The Greeks (metaphorically) had directed the eye of reason outward to a universe of ideas. In his Confessions Augustine redirected reason inward in a fateful focus upon the content and structure of his own mind and soul, as he now experienced them—illumined by grace.

At least until his final years Augustine continued to give place to freedom in his understanding of what would one day be called the self. However, haunted not only by his Manichean past but, soon, by Pelagian boasts of human moral competence, Augustine was never able to shake his anxieties about freedom. The human agent became for him a one-sided coin; he could be allowed autonomy only in his capacity to choose evil. Any choice of the good must be credited exclusively to God. Moreover, knowledge of the true and the good was itself the work of grace—of grace eventually interpreted as invincible. God does it all. What is important in all this for our story of the self is Augustine’s emphasis upon knowledge and reason. His own conversion, as with St. Paul’s, had come as illumination. The saving knowledge of God was thrust upon him. It is our nature to be thinkers; and God had graced the fortunate Augustine with the right thoughts.

By the thirteenth century, Thomas Aquinas would give much greater weight to the senses and to the empirical; in effect he played Aristotle to Augustine’s Plato. Nevertheless, his treatment of the self remained Augustinian in its emphasis upon reason. The human person is, first and foremost, rational in nature, and fulfillment lies in fuller knowledge (plus, of course, the love that is merited by every truth one appropriates). His psychology is so centered upon reason that, for many, Aquinas is the quintessential “intellectualist.” In his view the will merely seeks, while the mind actually possesses, thereby achieving a higher state.

This sovereignty of reason resurfaces everywhere in Aquinas. In the realm of morals only the correct grasp of what is right conduct can avoid sin. Except for nonnegligent errors of fact, even invincible ignorance will not serve; we must know the rules, and the best of intentions is not sufficient to goodness if I mistake the correct way. Knowledge is preeminent also in the final bliss of the elect; the essence of salvation is the beatific vision. Of course, there is love, and we may think of love as a voluntary act, leaving the saints theoretically free to defect; but they never do.

The devaluation of volition was to be accentuated by the Reformation and came to typify Western versions of the self until Schopenhauer and Nietzsche. To this day the subordination of will is discernible among many of the traditional philosophers who carry the torch for natural law. Nowhere in John Finnis, for example, can I find even a footnote on the fate of the moral bungler—the actor whose reason is misfiring on specific moral issues; he is doing all he can to get right answers, but his feeble gifts and fallible judgment lead him to mistakes of conduct. Aquinas had this hapless fellow “bound to know” the right answers (at the risk of his soul), and Finnis and company seem not to have rescued him even in the natural order. This abstention underscores the intellectualism of their human self; it is so constituted that its fulfillment must come by correct knowledge and appropriation of the several “basic human goods,” all apprehended by reason. To be sure, one can refuse to maximize these goods; our constitution includes free will, and the proper intention is necessary to the happy ending. But, to the intellectualist, it is reason and knowledge that stand front and center in the active structure of the self. We might add that this same emphasis dominated the twentieth-century Thomism of Jacques Maritain and Étienne Gilson.

I turn now to that other school of Tradition which has stressed freedom as the flagship of the self. It shares crucial ground with the tenant of our third set, Immanuel Kant, and I consider them together.

The various schools of Tradition agree that natural reason relies upon experience for its initial signals of reality; hence, to start philosophy with sense and image is no crime, but it is a crime to end there. By contrast, our third exemplar, Kant, refused to credit such knowledge at all, concluding with Hume that, from the world of sense and experience, reason could never proceed farther than mere “appearance.” Kant then solved the start-up problem his own way; in a procedure that is roasted by diverse modern critics (including Traditional natural lawyers) he ascended to a “pure reason” that is detached from the data of sense. Precisely because it is liberated from the empirical, Kant’s “noumenal” self is able to reason its way to the “categorical imperative,” thence to the specific obligations of conduct that are disclosed by its correct application.

His desertion of experience in favor of “pure reason” thus sets Kant in opposition to both Tradition and Radical Contingency. He is of interest here, nonetheless, for the peculiar place he allows for will in its relation to reason. Though ostensibly reason becomes the supreme property of his intellectualist self, in an interesting way Kant actually undermines reason’s moral sovereignty. For on the ultimate issue of the self’s own potential perfection (what I have called the First Good or goodness of the person), he does not require correct reasoning but only honest effort. “The good will is not good because of its . . . adequacy to achieve some proposed end; it is good only because of its willing, i.e., it is good in itself.”

There is nothing so holy as a good will, and it is available to every self—period. Granted, Kant is sometimes ambiguous about the possibility and consequences of honest error. In the end he plainly allows the interpretation that the free will is preeminent in the moral architecture of the self. This is a proposition that might be domesticated within Traditional theories. Its adoption could give them a stronger purchase on the contemporary mind and a more coherent understanding of the self.

To see this we may be aided by an imaginative reordering of the steps taken by Kant after his demotion of the empirical. Kant’s fateful mistrust of sense and image (hence his unhappy resort to “pure” reason and noumena) had been encouraged by the skepticism of David Hume. Hume famously denied the truth value of “perceptions” even to the perceiving individual—ultimately to the point of denying his own existence. Looking inward “into what I call myself, I always stumble on some particular perception or other . . . . I never catch myself . . . and never observe anything but the perceptions.” For Hume it followed that this self that he cannot “catch” is nonexistent, a heroic inference but one congenial to the “nothing buts” who still gather in his name.

Hume’s insight had roots in the fourteenth-century revolt of nominalism against those Scholastics whose descendants inhabit my “Traditional” category. As revolutions tend to do, this one lost its bearings, becoming modernity’s hyperbolic devaluing of all experience, mental and physical, as “perceptions.” Hume himself might have drawn the line elsewhere; his own account discredited only the sorts of notions that the mind could “stumble on” or “observe” by the inward act that Hume inaccurately called “looking.” But what the mind can imagine in this primitive and picture-like manner does not exhaust our subjective repertoire; a neglected form, prominent within private consciousness, is of primary importance to our question.

Distinct from the vulnerable images of Hume is the experience of responsibility, a thing that is more than perception—and stabler, even, than the province of reason. Its singularity and portent make it a plausible candidate for the ground of the self. And here comes the point about Kant. One wonders whether he might have been more convincing had he reversed the order of his propositions about reason and freedom. He could have begun with the undeniable experience of free moral responsibility, taking reason not as premise but as inference. The logical relation, after all, is reciprocal; responsibility cannot be conceived apart from knowledge.

By the experience of responsibility I mean the awareness (of every rational human) that one is called to seek what, in his or her circumstances, is the correct choice of conduct. This experience includes the conviction that there is such an order of possible right deeds and outcomes (“Second Good”) and that one ought to direct whatever resources are at his or her disposal to their concrete realization. This experience is universally verified. Even Humeans inadvertently confirm it as privately they practice responsibility and complain when others fail to reciprocate.

In at least two basic respects our experience of responsibility differs profoundly from Hume’s “perceptions.” Given the modernists’ assumptions, the latter can be accounted for as products of our senses and brain: the neural resources of the blob generate image, sensation, and emotion, and these are consistent with reason understood as pure mechanism. It “catches” the descriptive content of our consciousness. By contrast, human responsibility is experienced, not because we “catch” it, but—to the contrary—because it catches us. Responsibility is unique as the experience, not of perception, but of being perceived—of standing in relation to an authoritative moral perceptor. Imperative (or invitational) in content, the experience cannot be explained descriptively as the product of mindless matter and neural energy. Hume himself should have been the first to observe that the experience of “ought” cannot emerge from the experience of “is.” If we find it part of us, we should look over our shoulder.

Second, unlike Hume’s “perceptions,” and unlike intellectual insight, responsibility is experienced not as transient and contingent but as a constant of human subjectivity. I do not mean that we cannot evade or suppress, but only that we can never eliminate, the free capacity to accept or reject responsibility. If the definition of self requires a stable element, here is a fresh candidate.

Note that in this view the self (itself) is a moral perceptor, but also that this is more than tautology. For as moral perceptor the self can be understood in either of two distinct ways. The first is Kant’s way. He would have each self standing autonomous as a moral actor. One literally legislates his own responsibility to deploy reason to deal correctly with the worlds presented to each of us in our perceptions and empirical encounters. For Kant the self is (simultaneously) moral perceptor, lawmaker, and executor.

The obvious alternative view open to Kant might have raised the ante precisely in the way that he wanted to avoid, by introducing God as primary perceptor and lawmaker. In truth, however, like Aquinas, Kant could have had it both ways, postulating a divine perceptor who would not displace but rather sustain, enlarge, and explain the liberty of the human self. Granted, this concession would demote the Kantian self from the role of author and legislator to that of a created vessel of free responsibility and (ideally) love; such demotion could not be confused with annihilation. The self remains—and now more firmly grounded.

Kant did not, of course, deny the reality of the Second Good; he merely wanted the self utterly free to seek it. However, by setting man so dramatically to his own devices, he invited his successors, step by historical step, to reinterpret the self as pure will with no proper object but the design of its own egotistic world. The emergent nineteenth-century philosophies of will and their contemporary remnants deserve their rotten reputation. But their worst and original sin, with Kant, was to detach themselves from the primordial experience of responsibility. In doing so they missed the opportunity to understand the self in the only way that could allow it to survive—in the way that Kant himself really hoped. If the self lives, it is not only as distinct from mere “perception” but as the reigning steward of all these perceptual resources of the individual, including reason. It constitutes our very capacity to deploy those resources responsibly in pursuit of the Second Good.

It is necessary to say a bit more about the self as it experiences and then appropriates responsibility. We need not conceptualize it as a ghost who attends the machine. Responsibility is an experience implicating the whole of whatever we are. And it includes the insight that this whole is more than the contingent content of the neuro-blob. The meaning of the self emerges as specific, limited in content, and quite traditional. The self is exactly the potency of the whole individual (hereafter “person”) to make (and to sustain or remake) the fundamental choice either to accept or reject the authority of a natural or supernatural order of correct conduct. The self is that power or capacity of each of us to commit freely to seek and, with luck, discover and attempt whatever would be truly right under the circumstances. It is in the exercise of this option to seek that each of us realizes or forfeits his or her own self-perfection (First Good).

I have now casually introduced the term “person” as the word suited to describe the whole of the human individual. This allows me to claim that the “self” is a potency or capacity of that whole, the ability of every person to achieve or to forfeit a specific and important good. As for the rest—the leftover or net content of that whole person—I will happily allow something like the Rad-con view of it. In addition to the self, each person is, indeed, a great cluster of his or her own contingencies, a package that until this point I have called the “blob.” Associated now with the self in a manner that constitutes the knowable person, this blob deserves the dignity of a Latin name. I prefer “persona,” a portmanteau word intended to capture every vulnerable aspect of “me”—from my material mass to my manners and mind. It is the total of my ephemera consisting of accumulated knowledge, intellectual powers, beliefs, phantasms, pretensions, health, wealth, gifts, looks, tastes, social relations, hopes, memories. These aspects of my persona constitute the whole of whatever of me can be experienced by me or others and is distinct from my self. It is my entire repertoire of potentials other than my capacity to commit them all to the pursuit of the Second Good.

I say “repertoire” because the persona is not some unitary faculty. Indeed, many of its constituent elements could not, by themselves, be described as “capacities” at all—bad habits are an example. But, taken as the whole of its content, the persona constitutes the capacity of the person apart from the peculiar capacity that is the self. The persona is exactly those resources, intellectual and other, that constitute the net inventory of means available to execute whichever basic commitment one has undertaken. The manifold of the persona is the instrument of the self in the execution of its chosen vocation.

The plainer way of saying this may focus on the two different goods; one is realized by the self, the other by the persona acting under the prompting of the self. The self is the potency for the First Good—the moral (or spiritual) perfection of the person. This good stands distinct from every conception of “success” or “happiness.” To be sure, goodness is a reality of the whole person, but it obtains independently of “what happens” to us in life. This is not said to disparage the achievement of happiness (our own or others’), only to distinguish it. When we do gain a measure of happiness—or manage to make the rest of the world a bit better for our presence—we have achieved the Second Good. There are many subcategories of the Second Good; some, like health, are merely natural; others, like justice, are strictly moral. All enrich our persona in diverse ways. As John Finnis would put it, they are “basic human goods,” and each of us is obliged to use his own present resources to seek them for ourselves and others. They are the diverse and contingent fruit of correct ideas and conduct and not the simple perfection that is accomplished by the self when it effects the First Good. The menagerie of possible Second Goods is infinite.

Note that the capacity that is the self does not will the performance of specific behaviors. What it wills is a subordination to (or defiance of) one’s sovereign obligation to seek and attempt the Second Good. By setting the orientation of the whole person the self fixes the array of possible specific purposes toward which the resources of the persona will be deployed. Again, the ephemera of the persona—its net of resources—are the instruments available to identify and undertake the specific deeds that, if accomplished, appear best suited to fulfill the self’s chosen commitment. When the self is committed to the Second Good, its stock of memory, intelligence, relationships, health, and wealth becomes the instrument for the concrete achievement of security, justice, liberty, or fraternity within the domain of personal relations and in the larger world of human institutions.

Elsewhere, Patrick Brennan and I have dubbed the self’s commitment to seek the Second Good the act of “obtension,” a word that is not quite a neologism. To obtend is to give a reason. And the specific act that self-perfects—that secures the First Good—is the choice of the Second Good as the reason for specific behavior. This view is Traditional in its affirmation both of a real self and of an order of correct conduct. But it is not the tradition of Aristotle, Aquinas, or most of the natural lawyers I know. For the obtending self is fulfilled, not by finding and realizing what is a correct outcome, but by using whatever resources it can enlist from within the persona to attempt that very same end. That is what obtension means. It entails the reversal of intellectualism in an affirmation of the saving potential of the will. Obviously the will cannot function apart from knowledge, but the only item of knowledge that is necessary to achieve the perfection called the First Good is the self’s awareness that there is an order of correct conduct that we are both free to seek and responsible for seeking. The concept of the responsible self thus entails a magnification of the potency of will and an abridgment of the role of the intellect. (This version of First and Second Goods and their respective modes of fulfillment is adumbrated in the moral theology of Alphonsus Liguori and the practical philosophy of Bernard Lonergan, each of whom understood himself as part of the natural law tradition.)

Grounded in the very earthy consciousness of responsibility, this elevation of the will as the core of the self avoids the charge of mysticism that has plagued Kant. It would be a corruption of language to label responsibility a mere “perception,” as if it could be the somatic outcome of tonight’s dinner. This is not to deny that responsibility entails both emotional and cognitive (and maybe somatic) effects. But its core is neither specific knowledge nor reason nor sensibility; it is, rather, the recognition of the binary freedom to commit to or to avoid obligation. The self chooses its response—yea or nay. Here is the paradox of a freedom that is also my necessity. To know obligation is to “catch” my own reality.

The positivist is not necessarily convinced. Like the rest of us, he knows this “experience” of responsibility, but he insists that it is merely another neurological event, perhaps of an especially uniform and durable sort, yet exactly what one would expect. For Darwin knows that this talent of our race for responsibility is an adaptive survival mechanism. To a point he might be right; our individual choices to be responsible probably tend overall to preserve the race, even if an indeterminate number of benign decisions can turn out to be hurtful to the Second Good of the choosers and even to their intended beneficiaries. Indeed, this is almost the definition of responsibility; it is exercised at the risk of a cost that may be recouped only in the form of self-perfection. But the First Good is not part of Darwin’s calculus. Indeed, for him there would in any event be no one to achieve it. We are “nothing buts,” and the self evaporates. This is an answer required by Darwinism, and it is every bit as spooky in its own way as Kant’s “noumenal” self. In any case, it is the Rad-con premise and a pure reductionism; it is not a point to be argued but a dogma to be rejected.

The more nearly scientific criticism of the responsible self is that of the experimental behaviorist. He notes that all of us blobs are carefully educated; and society’s incessant litanies of do’s and don’ts have their intended Pavlovian effect. From the moment that an adult first gets his hands on us we are neurologically wired to learn that deeds P, Q, and R are correct and required, while X, Y, and Z are incorrect and to be avoided. It is all a simple case of behavioral modification. He adds that, were it otherwise—were responsibility the experience of anything real—one could, in any case, never know that this is so. For none of us can unpeel the integument of custom and examine the natural state; by the time we can answer the question about the self, it is already too late.

This, I think, is based on a misunderstanding. It is, nonetheless, welcome, because its exposure helps to clarify the idea of the self as the capacity for responsibility. The behaviorist criticism assumes that responsibility comes to us as a litany of rules to be learned. But, to the contrary, the experience that betrays the self is not at all concerned with specific correct behaviors except to affirm their reality and authority. The imperative (or invitation) of responsibility is precisely the call to the self (whether divinely or autonomously generated) to commit the persona to the search for specific rules and judgments; it has no behavioral content of its own. As a child, I eventually become aware that stealing is misconduct. Before I can even grasp such a rule, I need the insight that my choice to seek will issue in moral specifics just like this interesting rule about theft.

The behaviorist would also need to recall that people learn rather different rules of conduct depending upon who is delivering the behavioral modification and when. One has only to live a while to recognize the menagerie of “correct” conducts that are taught simultaneously in different houses (and, over time, in the same houses). What is enduring through all this diversity is the child’s (indeed, everyone’s) recognition of responsibility. Whatever rules of conduct the particular adult may specify, one primary effect for the child is to confirm the capacity that is already at work in him or her. Parents do, indeed, disagree about what is correct conduct, and that disagreement can in the long run be a problem for the ideals of justice and happiness that any society hopes to realize in the realm of the Second Good; but it is a problem in that realm only. What all but a fringe of parents do agree on is that (whatever its content) there is, indeed, an order of right behavior; and in that reassurance the child is provided his practical opportunity to commit to and undertake the search within his own world. The self is invited to obtend and thereby to achieve the specific good that is its proper object.

To see this is virtually to eliminate the relevance of human memory to our question. As earlier observed, many who deny the self—and some who would defend it—take memory as its criterion. To them we are selves only if we know our own narrative. It is this reliance on memory that invites disproof of the self by exotic hypotheticals involving bodies with multiple memories and the like. But now we see that memory is relevant only insofar as the self needs knowledge for its specific perfection. And once human will is accorded primacy in reaching that perfection—once obtension is sufficient to realize the First Good—the responsible self requires no memory at all beyond its recognition of the imperative to seek the content of correct conduct (the Second Good). The immediate responsibility of the self is to move the persona to acquire whatever new information will serve in that search.

Amnesia is a fitting illustration of this point. A man forgets his place in the world; or, put in my terms, through some accident he loses the content of his persona. What he retains is the experience of responsibility; he knows that there is correct conduct to be sought, and that he has the capacity to commit to its search. Here is a self fully capable of perfecting the whole person by doing the best that can be done toward the Second Good with this impoverished persona. It is, in short, not necessary to the perfection of the self that one know anything beyond the experience of responsibility and the reality of an order of the Second Good.

I should think the same to hold for the putative cases of “multiple personality.” If two selves actually were to share the same body, each would be confronted with the experience of responsibility. And each could commit to discharge its obligation to seek the specific terms of correct conduct by deploying whatever intellectual and other resources are at its command—awkwardly shared though these would be in respect of this eccentric body.

The adoption of a convention favoring the terms “person,” “self,” and “persona” would greatly facilitate discourse among and within our three clusters of theorists. The Traditionalist takes the idea of First and Second Goods as truth; he can distinguish them not only in their meanings but in the peculiar modes of their achievement through activity of the self and persona respectively. He may be an intellectualist who believes that incorrect behavior by itself prevents the achievement of the First Good; if so, he can now disagree intelligibly with his dissenting Traditionalist brother who supposes that the self does its perfecting work by obtending—that is, by moving the persona to seek right answers as best it can, given its particular resources and circumstances.

Likewise, the Rad-con who would deny humanity everything but its flesh can rely upon the terms person, self, and persona to say exactly what he thinks without equivocation. For him the self can cease to be “nothing but” and become, just as he really supposes, nothing at all. He may at the same time find “persona” surprisingly useful as a positive catchword for the neuro-blob that he conceives to constitute the whole. As it did for Carl Jung, “persona” may suggest the entirety of what one seems to be—both subjectively and to others. If I were a Rad-con, I might prefer the term “individual” over “persona”; for the latter is historically contaminated with notions of real value and human vocation. But, in any case, persona is available as a clear concept, once its boundary with “self” and its relation to “person” have been agreed upon.

Meanwhile the Christians may find a niche even for that neglected word “soul.” The peculiar perfection that is the First Good can be conceived in natural or theological terms—or both, as Aquinas insisted. And—if both—then a further distinction would be important and would need its own label. The First Good, when it is understood as fulfillment of a natural potential, seems adequately suggested in the word “self”—hence my own preference for “self-perfection” as the nontheological version of this human fulfillment. But, if one also believed that God first empowers then elects the person who self-perfects, we are invited to call this transcendence the eternal perfection of the soul—or, simply, salvation.

This moves me in conclusion to revisit my starting point—namely, the experience of the responsibility to seek. The word responsibility here is exactly correct; what we experience is an obligation to go in quest of the right way. But I wonder whether this moral calling of ours may be as much invitation as it is responsibility. One can read too much Kant, emerging brainwashed and overborne by “duty.” There is a bright side to duty—a very bright side—that Kant chose to avoid. He wanted to manage with reason alone, and this required that the self autonomously legislate its own obligation. One can (barely) conceive of the self doing this; what is quite impossible to imagine is the self inviting itself. If in fact it experiences invitation along with responsibility, we seem to have the implication of another. That implication will not be pursued here. I’m content that this vocation of ours, heard either as responsibility or as invitation, could be a friendly call. But the original question here was, “Are we?” It seems we are; I leave it at that.

John E. Coons is Professor of Law Emeritus at the University of California, Berkeley, and author (with Patrick Brennan) of By Nature Equal (Princeton University Press, 1999).