
There I was, quietly chuckling over Bryan Caplan and Robin Hanson’s back and forth (and forth) on the reasonableness of cryonics, when somebody decided to bring Derek Parfit into things.

Says Julian:

In reality, our ordinary way of talking about this leads to a serious mistake that Robin implicitly points out: We imagine that there’s some deep, independent, and binary natural fact of the matter about whether “personal identity” is preserved—whether Julian(t1) is “the same person” as Julian(t2)—and then a separate normative question of how we feel about that fact.  Moreover, we’re tempted to say that in a sci-fi hypothetical like Bryan’s, we can be sure identity is not preserved, because logical identity (whose constraints we selectively import) is by definition inconsistent with there being two, with different properties, at the same time. And this is just a mistake. The properties in virtue of which we say that I am “the same person” I was yesterday reflect no unitary natural fact; we assert identity as a shorthand that serves a set of pragmatic and moral purposes.

In fact, pace Julian, there exists such a binary property, which I consider to be the only property that matters (indeed, I suspect it is the one that most of our pragmatic and moral determinations end up piggy-backing off of): namely, the property of its being me. Yes, I’m being cute; but I’m also making a serious point.

I’ve found that the favorite rhetorical trick of reductionists, when they are confronted with the brute fact that the only question anybody actually cares about in the philosophy of personal identity is the question of my survival (the question raised when somebody waves a gun in my face: whether I will experience the experiences of some hypothetical future entity), is to declare that question illegitimate in some way. This can take a number of forms. I’ve seen people claim (amazingly, with a straight face) that the only questions that have meaning are those that can be posed in a de-personalized vocabulary. I’ve seen people claim that because the concept of personal identity is poorly defined, there is no binary further fact in question and we can ignore such questions, which leads to exchanges like:

“Why is personal identity poorly defined?”

“Because reductionism is true.”

“Congratulations. Your position is self-consistent. Now were you trying to convince me of something?”

I am forever running across people who tell me that they were “convinced” by Reasons and Persons. Convinced of what? R&P convinced me of exactly one thing: you can’t half-ass reductionism. If you share a subset of Parfit’s premises, Parfit does a very good job of convincing you that you need to subscribe to some extremely counter-intuitive conclusions. I wouldn’t count the result as terribly surprising, but it’s nice to see it laid out in a well-organized fashion. What utterly baffles me, however, is that anyone could even conceive of this as a strategy for convincing somebody who is not a reductionist to become a reductionist. In fact, if you do not have a prior commitment to reductionism, Parfit should drive you away from reductionism. He shows us what the price of reductionism is, and part of that price is having to believe, as Julian appears to, that, contrary to what every fiber of my being tells me, the question of my survival is not binary. Julian’s prior commitment to reductionism must be very deep indeed.

In fairness, Parfit does make a token attempt at winning over people who do not already agree with him, though his heart is clearly not in it. This generally takes the form of thought experiments in which the question of my survival is very difficult, perhaps even physically impossible, to answer with certainty. Parfit then seeks to parlay this epistemological shakiness into ontological shakiness: “if you’re not sure whether you’d be alive or dead in this situation, perhaps this suggests that the question is meaningless,” goes the seductive whisper. In fact, it suggests the exact opposite to me: namely, that while the question is of ultimate importance, the answers are not forthcoming, and therefore, if I lived in the universe of philosophy-of-mind thought experiments, I should act conservatively and avoid transporters and replicators even if by doing so I inconvenience myself.

Getting back to Bryan and Robin, I find Bryan’s position to be the more reasonable. Since we have no way of knowing whether we would survive cryonic freezing and unfreezing, we have no reason to seek out cryonics, and no reason to avoid it either. I do have a question for Robin, however.

Question for Robin (and anybody else who wants to upload): Suppose that I tell you I have perfected a method of uploading your brain onto a computer, destroying your original brain, and then writing the computer brain data into a fresh biological brain in a manner that achieves arbitrarily fine physical accuracy. My method is foolproof in the sense that the newly “printed” copy of you can be guaranteed to respond in an identical manner to the old you when presented with identical stimuli. If you are certain that you would survive uploading, then the compensation I would have to provide you to test my machine should approach the value of your time, plus a fair price for any pain and suffering incurred. If the process is painless, and I conduct it during a time when you are asleep, extremely bored, or otherwise unable to accomplish anything useful, then you should accept any amount of compensation, however small, to try out my machine.

Would you accept any amount? I suspect that you would demand significant compensation, and that this is related to the fact that you are uncertain as to whether you would survive uploading.

