Virtualization. I think if you are virtualized (uploaded to a computer, or copied into a new brain), you still die. I keep running into people on here who seem to think that if you copy someone, this prevents them from dying. It seems that I am in the minority on this one. Am I? Has this been thoroughly debated before? I would like to start a discussion on this. Good idea or bad idea? Any tips on presentation?
I think the LW consensus is that the copy is also you, and personal identity as we think of it today will have to undergo significant change once uploads and copies become a thing.
Contemporary people are more or less completely bamboozled by the whole topic of minds, brains, and computers. It’s like in the early days of language, when some people thought that reality was created by a divine breath speaking the true names of things, or that the alphabet existed before the universe alongside God, and so on. Language was the original information technology that was made into an idol and treated like magic because it seemed like magic. The current attitudes to computers and computation are analogous, except that we really can culture neurons and simulate them, so we are going to be creating hybrid entities even more novel, in evolutionary terms, than a primate with a verbalizing stream of consciousness (which was a hybrid of biology and language).
What is the computational paradigm of mind? Often this paradigm floats free of any material description at all, focusing solely on algorithms and information. But if we ask for a physical description of computation, it is as follows: There is an intricate physical object—a brain, a computer. Mostly it is scaffolding. There are also non-computational processes happening in it—blood circulating, a fan spinning. But among all the physical events which happen inside this object, there are special localized events which are the elementary computations. A wave of depolarization travels along a cell membrane. The electrons in a transistor rearrange themselves in response to small voltages. In the intricate physical object, billions of these special events occur, in intricate trains of cause and effect. The computational paradigm of mind is that thought, self, experience, identity are all, in some sense, nothing but the pattern of these events.
These days it is commonly acknowledged that this supposed identity is somewhat mysterious or unobvious. I would go much further and say that almost everything that is believed and said about this topic is wrong, just like the language mysticism of an earlier age, but it has a hold on people’s minds because the facts seem so obvious and they don’t have any other way of conceiving of their own relationship to those facts. Yes, it’s mysterious that mere ink on a page has such power over our minds and such practical utility, but the reality of that power and that utility are self-evident, therefore, in the beginning was the word, and the word was with God, and the word was God. Yes, it’s mysterious that a billion separate little events of particles in motion could feel like being a person and being alive, but we know that the brain is made of neural circuitry and that we could in principle simulate it on any computing mechanism, therefore you are a program in your brain, and if we ran that program somewhere new, you would live again.
People try with varying degrees of self-awareness and epistemic modesty to be rational about their beliefs here, but mostly it’s the equivalent of different schools of language mysticism, clashing over whether the meaning-essence only inhabits the voice, or whether it can be found in the written word too. In my estimation, what people say about consciousness, uploads, and personal identity, is similarly far from the reality of how anything works and of what we really are.
If we ever extend human understanding far enough to grasp the truth, it’s going to be something bizarre—that you are a perspective vortex in your cortical quantum fields, something like that, something strange and hardly expressible with our current concepts. And meanwhile, we continue to develop our abilities to analyze the brain materially, to shape it and modify it, and to make computer hardware and software. Those abilities are like riding a bicycle: we can pick them up without really knowing what we are doing or why it works, and we’re in a hurry to use them too.
So most likely, that biolinguistic hybrid, the primate who thinks in words, is going to create its evolutionary successor without really understanding what it’s doing, and perhaps even while it is possessed with a false understanding of what it is doing, a fundamentally untrue image of reality. That’s what I see at work in these discussions of mind uploading and artificial intelligence: computational superstition coupled to material power. The power means that something will be done, this isn’t just talk, there will be new beings; but the superstition means that there will be a false image of what is happening as it happens.
If you use the concepts of “dying” or “personal identity” in this context, you risk committing the noncentral fallacy, since uploading is an atypical case of their application, and their standard properties won’t automatically carry over.
For example, concluding that an instance of you “actually dies” when there is also a recent copy doesn’t necessarily imply that something bad took place, since even if you do in some sense decide that this event is an example of the concept of “dying”, this is such an atypical example that its membership in that concept provides only very weak evidence for sharing the property of being bad with the more typical examples. Locating this example in the standard concepts is both difficult and useless, a wrong question.
The only way out seems to be to taboo the ideas of “dying”, “personal identity”, etc., and fall back on the arguments that show in what way typical dying is bad, and non-dying is good, by generalizing these arguments about badness of typical destruction of a person to badness of the less typical destruction of a copy, and goodness of not destroying a person to goodness of having a spare copy when another copy is destroyed.
It seems to me that the valuable things about a living person (we’ve tabooed the “essence of personal identity”, and are only talking about value) are all about their abstract properties, their mind, their algorithm of cognition, and not about the low-level details of how these abstract properties are implemented. Since destruction of a copied person preserves these properties (implemented in the copy), the value implemented by them is retained. Similarly, one of the bad things about typical dying (apart from the loss of a mind discussed above) seems to be the event of terminating a mind. To the extent this event is bad in itself, copying and later destroying the original will be bad. If this is so, destructive uploading will be better than uploading followed by destruction of the conscious original, but possibly worse than pure copying without any destruction.
Almost everybody starts with the intuitive notion that uploading will kill the “real you”. The discussion seems to have been treading the same ground since at least the 1990s, so I don’t really expect anything new to come out of yet another armchair rehash.
Chapters 9 and 10 in David Chalmers’ singularity paper are a reasonably good overview of the discussion. Chalmers ends up finding both stances convincing given different setups for a thought experiment, and remains puzzled about the question.
Almost everybody starts with the intuitive notion that uploading will kill the “real you”.
Really? I started with the assumption that uploading wouldn’t necessarily be destructive, but people chose to discuss destructive uploading because it simplifies some of the philosophical questions. On second thought, there may also be a bias from science fiction, where promising developments are likely to have a horrific downside.
Yeah, assuming some sort of destructive upload in my comment there, naturally. My assumptions for the initial stance most people will have for the various scenarios are basically:
Non-destructive upload, the initial person remains intact, an exact upload copy is made: The “real you” is the original human, all that matters is whether real you lives or dies.
Destructive upload, the initial person gets knocked out and ground to pieces to make the exact upload copy: “Real you” dies from being ground to pieces, end of story.
Moravec transfer, the initial person’s brain gets converted to a machine substrate one neuron at a time: People other than John Searle seem to be OK with personal continuity remaining in this scenario.
Also, embracing the possibility of nondestructive uploads requires us to think about our identities as potentially non-uniquely instantiated, which for a lot of people is emotionally challenging.
There are a lot of elements to dying, and if technology progresses far enough, I think we could have incidents where some but not all of them happen. However, depending on what exactly happens, some of these should still be regarded as being just as bad as death.
Death of experience
Your experience of the world stops permanently.
This is important because you will never experience pleasure again if you stop experiencing permanently.
Death of self
Your personality, memories, etc, your “software pattern” cease to exist.
This is important because other people are attached to them and will be upset if they can no longer interact with you.
Death of genes
Your genetic material, your “hardware pattern”, is lost. Your genetic line may die out.
This is unacceptable if you feel that it’s an important purpose in life to reproduce.
Death of influence
It becomes impossible for you to consciously influence the world.
This is important because of things like the necessity of taking care of children or a goal to make a difference.
Death of body
Your body, or the current copy of your “hardware” becomes unusable.
This is important if your brain isn’t somewhere else when it happens but may not be important otherwise.
I am uploaded. A copy of my “self” is made (I believe this is the definition of “you” people are using when they’re talking about uploading themselves) and the original is disassembled or dies of natural causes. That’s all that was done. I’m assuming no other steps were taken to preserve any other element of me because it was believed that uploading me means I wouldn’t die. I’ll call the original Epiphany and the copy I’ll call Valorie.
Epiphany:
Death of body—Check. Brain was in it? Check.
Death of experience—Check. (See previous note about my brain.)
Death of genes—Check. Pregnancy is impossible while dead. Genes were not copied.
Death of influence—Check. Upload was not incarnated.
Death of self—No. There is a copy.
Valorie:
Death of body—No body. It’s just a copy.
Death of experience—Doesn’t experience, it isn’t being run, it’s just a copy.
Death of genes—Doesn’t have genes, a copy of my “self” is being stored in some type of memory instead of a body.
Death of influence—Cannot influence anything as a copy, especially if it is not being run.
Death of self—No. It’s preserved.
Conclusion:
I am dead.
Of course it’s not hard to imagine other scenarios where everything possible is copied and the copy is incarnated, but Epiphany would still stop experiencing, which is unacceptable, so I would still call this “dead”.
I’m perfectly willing to accept that if you get uploaded and then nobody ever runs the upload then that’s death. But if you’re trying to give the idea a fair chance, I’m not sure why you’re assuming this.
There’s one really important detail here. If you get uploaded, even if the copy is put into a body exactly like yours and your genes are fully preserved and everything goes right, you still stop experiencing as soon as you die.
Okay, I was pretty sure that was your real point, so I just wanted to confirm that and separate away everything else.
But to be honest, I don’t have a real answer. It’s definitely not obvious to me that I will stop experiencing in any real way, but I have a hard time dismissing this as well. One traditional answer is that “you will stop experiencing” is incoherent, and that continuity of experience is an illusion based on being aware of what you were thinking about a split second ago, among other things.
I decided that being transformed would probably maintain continuity of experience, and being re-assembled out of the same particles in the exact same locations would probably result in continuity of experience (because I can’t see that as a second instance), but I am not sure about it (because the same particles in different locations might not qualify as the same instance, which brings into question whether same instance guarantees continuous experience) and I’m having a hard time thinking of a clarifying question or hypothetical scenario to use for working it out. (It’s all in the link right there).
One traditional answer is that “you will stop experiencing” is incoherent, and that continuity of experience is an illusion based on being aware of what you were thinking about a split second ago, among other things.
What’s not incoherent, though, is looking forward to experiencing something in the future, yet knowing you’re going to be disassembled by a transporter and a copy of you will experience it instead. That, in no uncertain terms, is death. We can tell ourselves all day that having a continuous experience relies on you being able to connect your current thought and previous thought, but the real question we need to ask is “Will I have any thoughts at all?” The connected-thoughts question is therefore a red herring (it relates only to your second instance, not your first) and a poor clarifying question for telling whether you (the original) survived.
What’s not incoherent, though, is looking forward to experiencing something in the future, yet knowing you’re going to be disassembled by a transporter and a copy of you will experience it instead. That, in no uncertain terms, is death.
Either way, only a copy of you will experience it, because the non-copy of you is trapped in the present and has no way to experience the future. The copy can be made artificially, using a transporter, or naturally as time passes. Why is there a difference?
Why is the time-copy even a copy though? If we call some A a copy of some original B, then we have to have reason to associate A with B (if A and B are paintings, the one is a copy of the other if it closely resembles it, say). What association does EpiphanyA at t0 have with EpiphanyB at t1?
Well, I think I persist through time. But you’re saying that time makes copies of me, and I’m curious to know why you think those things are copies and not just new (very short lived) people.
Wait, wait, wait. I’m still confused as to why you think that time is copying me. By what mechanism does time create new instances of me and destroy the old ones? At what interval does this happen? Has anyone actually observed this phenomenon or is it just a theory?
I could reverse the question. Why do you think you’re the same person at different times, as opposed to being a copy? By what mechanism is a single person carried forward through time? Has anyone actually observed this phenomenon, or is it just a theory?
It’s not clear to me that those are fair questions, but then it’s not clear to me that their reversals are fair, either.
Occam’s razor. The theory that I’m being copied and destroyed over and over again doesn’t explain anything additional that I can think of, so it’s more likely that the simpler idea (that I am not being copied and destroyed over and over) is true.
Also, not believing that I am being copied does not qualify as a belief. That’s just lack of belief in a theory.
If you guys believe I’m being copied over and over again, that IS a belief though, and if you want me to agree with it, the burden of proof lies on you.
The theory that I’m being copied and destroyed over and over again doesn’t explain anything additional that I can think of,
I think both of you are sorta failing to address (or not addressing clearly enough) the point that objects being “copied”, “destroyed”, or “persisted” is not really meaningful at the level of physics at all—like envisioning electrons as billiard balls, it’s mapping a concept that’s intuitive in one’s mind onto the physical world where it does not apply.
At the bottommost level of quantum physics that we know of, electrons have no identity. From what I gather, to “destroy” an electron here and “copy” it there is physically indistinguishable (even in principle) from “moving” it from here to there. Those are concepts which are distinguishable in our adapted-via-evolution minds, not in reality.
That having been said I don’t dismiss your concerns about uploading altogether because we still aren’t unconfused enough about consciousness to be able to clarify to ourselves what the fuck it’s supposed to do… I would really like to be unconfused about qualia and the nature of existence before I do any uploading of myself.
Yup. Which is why I say it’s not clear to me those are fair questions.
That said… if in the future two entities exist that are physically and behaviorally indistinguishable from one another, and one of them is me, it follows that either both of them are me, or one and only one of them is me. In the latter case, it seems “me-ness” depends on some physically and behaviorally undetectable attribute which only one of them has.
Occam’s razor also seems to suggest that both of them are me, since the alternative posits an additional unnecessary entity in the system.
Yup. Which is why I say it’s not clear to me those are fair questions.
I’m interpreting this as difficulty figuring out who the burden of proof belongs to. I think it helps to realize that with each theory there are at least three options:
Believe it’s true.
Believe it’s false.
Not believe anything.
If you say “There’s a dragon in my garage.” and I say “I don’t believe this.” I am not saying “I believe there is no dragon in your garage.” I’m saying “I don’t have a belief about this.”
Now, I could go in there and inspect everything and conclude that there’s no dragon, at which point I’d have a belief that there isn’t a dragon. But why should I do this? You might claim next that there’s a God in your garage. Then I’d have to go to all sorts of work trying to prove there is no God in your garage. Then you could claim that there’s a pink elephant, and on and on.
This is why, if you want people to believe something, the burden of proof lies on you—you can’t just turn it around and say “Well, prove that it’s NOT this way!”—if that were the rule, people would troll the crap out of us with dragons and Gods and pink elephants and such.
Does that give you any clarity about whose burden it is to offer evidence regarding time copying people?
Occam’s razor also seems to suggest that both of them are me, since the alternative posits an additional unnecessary entity in the system.
No. The additional entity is not unnecessary. The second instance is absolutely required to explain the way you reacted to my teleporter with technical failure argument.
I am surprised you didn’t update after that by recognizing that there were two separate instances, and I don’t know what to do about it. I’m stumped as to why you aren’t seeing it this way.
If you say “There’s a dragon in my garage.” and I say “I don’t believe this.” I am not saying “I believe there is no dragon in your garage.” I’m saying “I don’t have a belief about this.”
Perhaps you are. That’s certainly not what I would be saying if someone said that to me and I gave that reply.
This is why, if you want people to believe something, the burden of proof lies on you—you can’t just turn it around and say “Well, prove that it’s NOT this way!”
Proof in the sense you are discussing here is mostly useful when trying to win debates. I have no particular desire for you to believe anything in particular.
The second instance is absolutely required to explain the way you reacted to my teleporter with technical failure argument.
The unnecessary entity in the second case is the physically and behaviorally undetectable attribute which only the “real me” has. I don’t see any need for it, and I have no idea why you think it’s necessary to explain any part of my reaction to any of your hypotheticals.
I’ll call the original Epiphany and the copy I’ll call Valorie.
So your definition of self stops at the physical body? Presumably mostly your brain? Would a partial brain prosthesis (say, to save someone’s life after a head trauma) mimicking the function of the removed part make the recipient less of herself? Does it apply to the spinal cord? How about some of the limbic system? Maybe everything but the neocortex can be replaced without affecting “self”? Where do you put the boundary and why?
So your definition of self stops at the physical body?
No. As I mentioned, “This (referring to Death of Body) is important if your brain isn’t somewhere else when it happens but may not be important otherwise.”
If you get into a good replacement body before the one you’re in dies, you’re fine.
Presumably mostly your brain?
If you want to live, a continuation of your experience is required: not the creation of a new instance of the experience, but the continuation of my (this copy’s) experience. That experience is happening in this brain, and if this brain goes away, this instance of the experience goes away, too. If there is a way to transfer this experience into something else (like by transforming it slowly, as Saturn and I got into) then Epiphany1’s experience would be continued.
Would a partial brain prosthesis (say, to save someone’s life after a head trauma) mimicking the function of the removed part make the recipient less of herself?
If Epiphany1’s experience continues and my “self” is not significantly changed, no. That is not really a new instance. That’s more like Epiphany1.2.
Does it apply to the spinal cord? How about some of the limbic system?
Not sure why these are relevant. OK, the limbic system is sort of relevant. I’d still be me with a new spinal cord or limbic system, at least according to my understanding of them. Why do you ask? Maybe there’s some complexity here I missed?
Maybe everything but the neocortex can be replaced without affecting “self”?
Hmmm. If my whole brain were replaced all at once, I’d definitely stop experiencing. If it were replaced one thing at a time, I may have a continuation of experience on Epiphany1, and my pattern may be preserved (there would be a transformation of the hardware that the pattern is in, but I expect my “self” to transform anyway, that pattern is not static).
I am not my hardware, but I am not my software either. I think we are both.
If my hardware were transformed over time such that my continuation of experience was not interrupted, then even if I were completely replaced with a different set of particles (or enhanced neurons or something), as long as my “self pattern” wasn’t damaged, I would not die.
I can’t think of a way in which I could qualify that as “death”. Losing my brain might be a cause of death, but just because something can cause something else doesn’t mean it does in every instance. Heat applied to glass causes it to become brittle or melt and change form, destroying it. But we also apply heat to iron to get steel.
I’m trying to think of a metaphor that works for similar transformations… a larva turns into a butterfly. A zygote turns into a baby, and a baby, into an adult. No physical parts are lost in those processes that I am aware of. I do vaguely remember something about a lot of neural connections being lost in early childhood… but I don’t remember enough about that to go anywhere with it. The chemicals in my brain are probably replaced quite frequently, if the requirements for ingesting things like tryptophan are any indicator. Things like sugar, water and nutrients are being taken in, and byproducts are being removed. But I don’t know what amount of the stuff in my skull is temporary. Hmm…
I want to challenge my theory in some way, but this is turning out to be difficult.
Maybe I will find something that invalidates this line of reasoning later.
Hmmm. If my whole brain were replaced all at once, I’d definitely stop experiencing. If it were replaced one thing at a time, I may have a continuation of experience on Epiphany1, and my pattern may be preserved
If my hardware were transformed over time such that my continuation of experience was not interrupted, then even if I were completely replaced with a different set of particles (or enhanced neurons or something), as long as my “self pattern” wasn’t damaged, I would not die.
So the “continuity of experience” is what you find essential for not-death? Presumably you would make exceptions for loss of consciousness and coma? Dreamless sleep? Anesthesia? Is it the loss of conscious experience that matters or what? Would a surgery (which requires putting you under) replacing some amount of your brain with prosthetics qualify as life-preserving? How much at once? Would “all of it” be too much?
Does the prosthetic part have to reside inside your brain, or can it be a machine (say, like a dialysis machine) that is wirelessly and seamlessly connected to the rest of your brain?
If it helps, Epiphany has implied elsewhere, I think, that when they talk about continuity of experience they don’t mean to exclude experience interrupted by sleep, coma, and other periods of unconsciousness, as long as there’s experience on the other end (and as long as the person doing that experiencing is the same person, rather than merely an identical person).
Yeah, that has gotten tricky. I’ve worded the question as “Same instance or different instance?”. I’ve also discovered a stickier problem—just because a re-assembled me might qualify, in all ways, as “the same instance”, I am not sure that guarantees the continuation of my experience. I explore that here, in two examples: being re-assembled from the same particles both in the same arrangement and in a different arrangement. (Scroll to “Scenarios meant to explore instance differentiation and the relation to continuous experience”—I labeled it to make it easy to find.)
As TheOtherDave pointed out, the question is what is, in your opinion, the essence of “self”. Clearly it cannot just be all the same “particles” (molecules?), since particles in our bodies change all the time. You seem to be relating self with consciousness, but not identifying the two. That’s why I’m asking questions aimed to nail the difference. That’s why I asked these questions earlier:
So the “continuity of experience” is what you find essential for not-death? Presumably you would make exceptions for loss of consciousness and coma? Dreamless sleep? Anesthesia? Is it the loss of conscious experience that matters or what? Would a surgery (which requires putting you under) replacing some amount of your brain with prosthetics qualify as life-preserving? How much at once? Would “all of it” be too much?
“The essence of self” seems like the wrong question to me. That sounds too much like “What is the essence of your personality?” and that’s irrelevant here.
What I’m talking about is my ability to experience. We all have an ability to experience (I assume) that, although it may be shaped by our personalities, is not our personalities. Example:
A Christian sees a Satanic ritual.
A Satanist sees the same ritual.
The Christian is horrified. The Satanist thinks it’s great.
The reason one was horrified and the other thought it was great is because they have different beliefs, possibly different personality types, different life experiences and possibly even different neurological wiring.
What did they have in common?
They both saw a Satanic ritual.
THAT is the part I am trying to point out here. The part that experiences. It’s not one’s personality, or beliefs, or experiences or neurological traits.
I am saying essentially “Even if personality, beliefs, experiences and neurological differences are copied, this does nothing to guarantee that the part of you that experiences is going to survive.” Asking to define the essence of self is not relevant since I’m saying to you “Even if self is copied, this thing that I am talking about may not survive”.
How would you convince someone who thinks instants of experience are real and memories that give instants of experience historical context are real, but doesn’t believe in any meaningful process of forward continuity from one instant of experience to another beyond the similarity of memories, to believe otherwise? There’s no difference between blinking, taking a nap and being destructively teleported in this stance. It’s all just someone experiencing something now, and someone else with very similar memories that include the present experience moment experiencing something else in the future.
I’ve noted to self that this seems like a pattern with us, as you have complained about a question being ignored a few times now. Not sure what I should be doing about it when I don’t see a question as relevant but maybe I should just be like “I don’t see how this is relevant.”
Don’t know how I got the habit of ignoring things that seem irrelevant and moving on to whatever seems relevant but I can see why it would be annoying so I will be thinking about that. Thanks for getting me to see the pattern.
Questions to consider: Would you feel the same way about using a Star Trek transporter? What if you replaced neurons with computer chips one at a time over a long period instead of the entire brain at once? Is everyone in a constant state of “death” as the proteins that make up their brain degrade and get replaced?
The million-dollar question: Do I stop experiencing?
If I were to be disassembled by a Star Trek transporter, I’d stop experiencing. That’s death. If some other particles elsewhere are reassembled in my pattern, that’s not me. That’s a copy of me. Yes, I think a Star Trek transporter would kill me. Consider this: If it can assemble a new copy of me, it is essentially a copier. Why is it deleting the original version? That’s a murderous copier.
I remember researching whether the brain is replaced with new cells over the course of one’s life and I believe the answer to that is no. I forgot where I read that, so I can’t cite it, but due to that, I’m not going to operate from the assumption that all of the cells in my brain are replaced over time.
However, if one brain cell were replaced in such a way that the new cell became part of me, and I did not notice the switch, my experiencing would continue, so that wouldn’t be death. Even if that happened 100,000,000,000 times (or however many times would equate to a complete replacement of my brain cells) that wouldn’t stop me from experiencing. Therefore, it’s not a death—it’s a transformation.
If my brain cells were transformed over time into upgraded versions, so long as my experience did not end, it would not be death. Though it could be said to be a transformation—the old me no longer exists. Epiphany 2012 is not the same as Epiphany 1985 (I was a child then, and my neural connections are completely different now), but I didn’t experience that as death. Epiphany 2040 will be completely different from Epiphany 2012 in any case, just because I aged. If I decide to become a transhuman, and the reason I am different at that time is that I’ve had my brain cells replaced one at a time in order to experience the transformation and the result of it, then I have merely changed, not died.
It could be argued that if the previous you no longer exists, you’re dead, but the me that I was when I was two years old or ten years old or the me I was when I was a zygote no longer exists—yet I am not dead. So the arguer would have to distinguish an intentional transformation from a natural one in a way that sets it apart as having some important element in common with death. All of my brain cells would be gone, in that scenario, but I’d say that’s not a property of death, just a cause of death, and that not everything that could cause death always will cause death. Also, it is possible to replace brain cells as they die, in which case, the more appropriate perspective is that I was being continued, not replaced. Doing it that way would be a prevention of death, not a cause of death. I would not technically be human afterward, but my experience would continue, and the pattern known as me would continue (it is assumed that this pattern will transform in any case, so I don’t see the transformation of the pattern as a definite loss—I’d only see it that way if I were damaged) so I would not consider it a death.
The litmus test question is not “Would the copy of me continue experiencing as if nothing had happened?” The litmus test question is “Will I, the original, continue experiencing?”
Here are two more clarifying questions:
Imagine there’s a copy of you. You are not experiencing what the copy is experiencing. Its consciousness is inaccessible to you the same way that a twin’s consciousness would be. Now they want to disassemble you because there is a copy. Is that murder?
Imagine there’s a copy of you. You’ve been connected to it via a wireless implant in your head. You experience everything it experiences. Now they want to disassemble you and let the copy take over. If all the particles in your head are disassembled except for the wireless implant, will you continue experiencing what it experiences, or quit experiencing altogether?
I used to think this way. I stopped thinking this way when I realized that there are discontinuities in consciousness even in bog-standard meat bodies—about one a day at minimum, and possibly more since no one I’m aware of has conclusively established that subjective conscious experience is continuous. (It feels continuous, but your Star Trek transporter-clone would feel continuity as well—and I certainly don’t have a subjective record of every distinct microinstant.)
These are accompanied by changes in physical and neurological state as well (not as dramatic as complete disassembly or mind uploading, but nonzero), and I can’t point to a threshold where a change in physical state necessitates subjective death. I can’t even demonstrate that subjective death is a coherent concept. Since all the ways I can think of of getting around this require ascribing some pretty sketchy nonphysical properties to the organization of matter that makes up your body, I’m forced to assume in the absence of further evidence that there’s nothing in particular that privileges one discontinuity in consciousness over another. Which is an existentially frightening idea, but what can one do about it?
Sleep, total anesthesia, getting knocked on the head in the right way, possibly things like zoning out. Any time your subjective experience stops for a while.
Actually, I expect that our normal waking experience is also discontinuous, in much the same sense that our perception of our visual field is massively discontinuous. Human consciousness is not a plenum.
Temporarily going unconscious is not the same as permanently going unconscious. Whether or not we temporarily go unconscious tells us nothing about whether permanent unconsciousness is death.
Now, some questions of mine: you said “If I were to be disassembled by a Star Trek transporter, I’d stop experiencing. That’s death.”
When you fall asleep, do you stop experiencing? If so, is that death? If it isn’t death, is it possible that other things that involve stopping experiencing, like the transporter, are also not death?
We need to focus on the word “I” to see my point. I’m going to switch that out with something else to highlight this difference. For the original, I will use the word “Dave”. As tempting as it is to use “TheOtherDave” for the copy, I am going to use something completely different. I’ll use “Bob”. And for our control, I will use myself, Epiphany.
Epiphany takes a nap. Her brain is still active but it’s not conscious.
Dave decides to use a teleporter. He stands inside and presses the button.
The teleporter scans him and constructs a copy of him on a space ship a mile away.
The copy of Dave is called Bob.
The teleporter checks the copy, Bob, before deleting Dave, to make sure Dave was copied successfully.
Dave still exists, for a fraction of a second, just after Bob is created.
Both of them COULD go on existing, if the teleporter does not delete Dave. However, Dave is under the impression that he will become Bob once Bob exists. This isn’t true—Bob is having a separate set of experiences. Dave doesn’t get a chance to notice this because in only fractions of a second, the teleporter deletes Dave by disassembling his particles.
Dave’s experience goes black. That’s it. Dave doesn’t even know he’s dead because he has stopped experiencing. Dave will never experience again. Bob will experience, but he is not Dave.
Epiphany wakes up from her nap. She is still Epiphany. Her consciousness did not stop permanently like Dave’s. She was not erased like Dave.
Epiphany still exists. Bob still exists. Dave does not.
The problem here is that Dave stopped experiencing permanently. Unlike Epiphany who can pick up where Epiphany left off after her nap because she is still Epiphany and was never disassembled, Bob cannot pick up where Dave left off because Bob never was Dave. Bob is a copy of Dave. Now that Dave is gone, Dave is gone. Dave stopped experiencing. He is dead.
Ah! So when you say “If I were to be disassembled by a Star Trek transporter, I’d stop experiencing” you mean “I’d [permanently] stop experiencing.” I understand you now, thanks.
So, OK. Suppose Dave decides to go to sleep.
He gets into bed, closes his eyes, etc. The next morning, someone opens their eyes. How would I go about figuring out whether the person who opens their eyes is Dave or Bob?
This is exactly backwards. I recognize a copier because it makes copies. That’s how I know something is a copier. If I need to know whether something is a copier before I can decide whether what it creates is a copy or not, there’s something wrong with my thinking.
If you had stepped into a teleporter and pressed the button, how would you know that it killed you?
I wouldn’t, naturally.
Of course, if Dave steps into an incinerator and presses the button, Dave also doesn’t know that the incinerator killed Dave. Dave is just dead, and knows nothing.
OTOH, if Dave steps into a non-incinerator and presses the button, Dave knows it didn’t kill Dave.
And the way that Dave knows this is that something is standing there, not-dead, after pressing the button, and that something identifies as Dave, and resembles Dave closely enough.
This happens all the time… I have pressed many buttons in my life, and I know they haven’t killed me, because here I am, still alive.
And I expect this is exactly what happens with a properly functioning teleporter. I press the button, and in the next moment something is aware of being Dave, and therefore not dead. It just happens to be in a different location.
If I need to know whether something is a copier before I can decide whether what it creates is a copy or not, there’s something wrong with my thinking.
Okay, so would you recommend I check under my bed tonight for anything that might make a copy of me and disassemble the original? I need something more to go on. I’m having a hard time not equating this with worrying about boogeymen.
if Dave steps into an incinerator and presses the button, Dave also doesn’t know that the incinerator killed Dave.
Actually, for at least a few seconds, possibly a few minutes, Dave would be screaming in agony and he would most certainly notice that he is experiencing death by incineration.
OTOH, if Dave steps into a non-incinerator and presses the button, Dave knows it didn’t kill Dave.
Unless the non-incinerator happens to be a human copier, and Dave did not recognize it at first.
something is aware of being Dave, and...
Yes, exactly. The original Dave has died in such a way that he didn’t even notice. Dave2 definitely doesn’t want to think that an exact copy of himself died just a moment ago, and really definitely doesn’t want to have to worry that he will need to cease experiencing in order to “go back” to where he came from, so due to normalcy bias, Dave2 declares that the fact that Dave2 exists means that Dave1 never died, and enjoys the confirmation bias that this non-sequitur gives him until he ceases to experience when “loaded” back onto his space ship.
Okay, so would you recommend I check under my bed tonight for anything that might make a copy of me and disassemble the original? I need something more to go on. I’m having a hard time not equating this with worrying about boogeymen.
Indeed! And you should equate it with worrying about boogeymen. It’s a silly thing to worry about.
The question is why it’s silly.
I would say it’s silly, not because I haven’t noticed any boxes marked “human copier” under my bed, but because every time in the past that I’ve woken up, I’ve resembled the person who went to bed so closely that it’s been ridiculous to worry that I might not be the same person.
Dave would be screaming in agony and he would most certainly notice that he is experiencing death by incineration.
Nope.
Dave would notice that he’s experiencing being incinerated, certainly, if the incinerator were as slow as you describe. But he would not experience death by incineration. He wouldn’t experience death at all. Here’s how I know: as long as Dave is experiencing anything, Dave isn’t yet dead. And if he’s not dead, he certainly can’t be experiencing death.
The original Dave has died … due to normalcy bias, Dave2 declares that the fact that Dave2 exists means that Dave1 never died … enjoys the confirmation bias that this non-sequitur gives him
(nods) Just like his predecessor did the night before when he went to bed, and Dave woke up in his place.
But of course, as above, that was too silly to worry about, just like boogiemen.
Indeed! And you should equate it with worrying about boogeymen. It’s a silly thing to worry about.
Okay, I guess you were trying to say that my concern about being disassembled after being copied as a method of “transportation” is the equivalent of worrying about boogeymen?
But he would not experience death by incineration.
“OH GOD I’M DYING AHHH!” < I call this experiencing death. Different definitions, I guess. If you want to get technical about it, and talk about death in a solely tangible way, sure, Dave isn’t dead when he’s thinking about that. But Dave is experiencing death emotionally and intellectually. He knows he’s in the process of dying, that death is inevitable. He also feels emotional (and, well, physical) pain that amounts to an experience worthy of symbolizing death. Maybe it would be more grammatically correct, though, if I said he is experiencing dying. In any case, I meant to differentiate this from transporter death because with transporter death, Dave believes that he is going to survive the “transportation” and doesn’t feel any emotional or physical pain, so there’s no knowledge of or suffering about his death.
But of course, as above, that was too silly to worry about, just like boogeymen. So is this.
If I offered you the free use of a device that could make a copy of you and put it anywhere you want, and cause the current you to be disassembled and dispersed in the surrounding environment (2-way trip), would you use it?
I call this experiencing death. Different definitions, I guess.
(shrug) OK, sure. Incidentally, by your definition, many many people walking around today have experienced death. Hell, I’ve experienced death myself.
Anyway, using your definition, if I stepped into what I thought was a molecular disassembler that would kill me, and it disassembled me slowly enough that I experienced the process of being disassembled, I would “experience death” by your definition, and I would know I’d experienced it the same way I know I experience the taste of cheese when I experience the taste of cheese. Later, I would look around the teleport receiver booth and say “Huh. I’m not dead? Cool” and go on with my life.
That is, I would have “experienced death” but not actually died, just as many many people do in real life when they wake up after heart attacks, accidents, etc.
If I offered you the free use of a device that could make a copy of you and put it anywhere you want, and cause the current you to be disassembled and dispersed in the surrounding environment (2-way trip), would you use it?
Assuming that it reliably creates that copy? Absolutely. Far more convenient than airplanes.
(By “reliably” here I just mean that I trust it to actually create a close-enough copy, and not to instead create some imperfect copy that does not resemble me closely enough to satisfy my preferences regarding consistency over time.)
If I offered you the free use of a device that could make a copy of you and put it anywhere you want, and cause the current you to be disassembled and dispersed in the surrounding environment (2-way trip), would you use it?
Assuming that it reliably creates that copy? Absolutely. Far more convenient than airplanes.
Yes.
I already know what your bumper sticker in the future is going to say:
I break (down) for transporters!
Now, say the transporter has a malfunction in the fraction of a second between the time when Dave2 has been verified as a complete copy and the time when Dave1 is going to be disassembled.
The technician says it’s going to take three hours to fix. You go out and catch a movie. After the movie, you go outside and stretch, and you see that it’s a beautiful day. You have two options:
More than that… if I arrive at the transporter complex and am told that this is an option, that I can duplicate myself and send one copy to my destination while the other one stays here, I absolutely prefer to be duplicated… no reason for a conveniently timed technical failure.
Indeed, I might postpone the trip altogether and spend the next week right here hanging out with myself and having threesomes with our husband and meeting with lawyers to figure out what we do with our funds and material goods.
Relatedly, given a button that I know creates two perfect copies and then picks one of the resulting three Daves at random to destroy an hour later, I press it. At the time of pressing the button, I’m indifferent as to which of the three copies gets selected for destruction… they are all me. After pressing the button, one of me goes “Crap! I’m going to die in an hour!” and is unhappy about it, and the other two of me go “Whew! Dodged that bullet!” but feel bad for the third of me. On my account it does not matter in the least which one of the three “was the original me,” assuming there’s even any way to tell, which there may not be.
Now, a question for you.
I enter a spaceship traveling to Alpha Centauri in suspended animation, along with all my friends and loved ones. We could have teleported instead, but we’ve been convinced by your account that this would be suicidal, so we opted for the slower but safer route. While we lie in frozen sleep, the spaceship has a technical failure in mid-flight which reduces the ship and everything in it to constituent atoms. The ship’s captain has the option of using the ship’s transporter to beam us from the doomed ship to the surface of Alpha Centauri.
As far as I can tell, on your account, there’s no particular reason why she should do so… either way, we’re all going to die. Sure, if she does so some complete strangers will pop into existence on Alpha Centauri, but what has that got to do with her? The birthrate on Alpha Centauri is more than high enough already, creating more new people isn’t particularly valuable. Is that right?
Suppose she does so, though, for whatever reason. So someone identical to me (but who on your account is not me, since I died on the ship) wakes up in a thawing chamber on Alpha Centauri, alongside a bunch of thawed people who are identical to my friends and loved ones, and all of us are under the (on your account deluded) belief that we are the same people who entered coldsleep. We throw a big party to celebrate our safe arrival on a new world.
During that party, we turn on the news and learn for the first time about the ship’s actual fate. We are presumably horrified at the sudden discovery that we’re not who we thought we were. The person with my memories looks at the man whom, a moment earlier, he’d thought was his husband, and becomes convinced it’s actually a complete stranger… that they never actually got married. Indeed, they just met a few minutes ago, at the beginning of this party. He’s been making out for the last five minutes with a complete stranger! All around the room, similar realizations are being made, as what had previously been a celebration of safe arrival becomes a wake for me and my friends, who are on your account irretrievably and tragically dead.
Scenario meant to discover whether the experience of life is valued
Relatedly, given a button that I know creates two perfect copies and then picks one of the resulting three Daves at random to destroy an hour later, I press it. At the time of pressing the button, I’m indifferent as to which of the three copies gets selected for destruction… they are all me.
Okay, so I guess what you’re saying here is that what you value about being alive is NOT the experience of life.
How do you feel about this scenario:
You and your husband are planning to go to a really awesome event soon. Maybe it’s the Singularity summit, maybe your favorite rock star is having a concert, maybe it’s the birth of a new baby you guys have been wanting for a long time. Imagine whatever sort of event you’d enjoy most.
You’re really looking forward to it!
Then work calls and says “Dave, two days from now, we need you to do this really important job 3,000 miles away from your ordinary work site. We couldn’t get you a plane ticket on such short notice, but fortunately we have a transporter.”
You agree, as it is your job.
Now you hang up the phone and your husband comes over, saying “I can’t believe we’re actually going to have this event soon! Isn’t it exciting!”
“Yeah, of course!” You say. But something feels wrong.
You realize that you are going to be disassembled by the transporter BEFORE the event happens.
YOU won’t experience the event whatsoever. A copy of you will be there instead.
Is this acceptable?
I certainly don’t want to live a lifestyle where we use transporters to go everywhere and each instance of me only experiences until the next transport. My life would never be long enough to experience any satisfaction. That’s reminiscent of the White Queen’s absurd rule in Through the Looking-Glass: “Jam tomorrow, jam yesterday, but never ever jam today.”
A new instance of me can experience a future event I’ve been planning for tomorrow, and a past me may have experienced a continuous life before transporters, but most instances of me would just be slaving away during the few hours or days in which they experience, doing things like working or buying groceries, so that other temporary instances of myself can reap the rewards. The instances that do get a reward still wouldn’t get to experience the fulfillment of planning out a goal and following through—this is really important to me for satisfaction.
Scenarios meant to explore instance differentiation and the relation to continuous experience
While we lie in frozen sleep, the spaceship has a technical failure in mid-flight which reduces the ship and everything in it to constituent atoms. The ship’s captain has the option of using the ship’s transporter to beam us from the doomed ship to the surface of Alpha Centauri.
Okay, so (just ignoring for a moment the fact that the transporter itself has just been vaporized, I guess I’ll assume it’s intact) I assume you’re saying the option is to reassemble those people out of their original particles. (Because if not, it isn’t any different from the transporter with technical failure argument, and I’d say that their experiencing ceased when they were disassembled, which is unacceptable, so they’re dead.)
First, I’d like to say that re-assembling the people, no matter what with, may be better than letting them die because that still saves them from four out of the five elements of death above.
So what we’re arguing about is not whether this rescues their genes, their influence in the world, their selves, or their bodies (that’s inconsequential in this case), but whether it saved their ability to experience.
I’m seeing several ways for this to go. The transporter could re-assemble them by putting the exact same particles into the exact same relative locations, or by putting the mass of particles from the accident into whatever locations (mostly not the same locations).
Putting the same particles into the same relative locations:
This, I think, would be the same as turning a computer on and off. I don’t have any reason to think I have a “soul” that would “escape” in this case, and I see no reason to differentiate a me re-assembled from the exact same particles from the original me. In other words, a copy was never made. The re-assembled me is not a new instance—it is the original. I theorize that me1’s experience would continue.
Putting the mass of particles into different locations:
This is sticky. If I have some of the same particles, but not all of them, is it me1? What if I have all of the same particles but they’re in different locations? That’s really, really sticky. This calls into question: What is experience? To answer this question, I have to ask “What is consciousness?”
I have an idea. If we had enough technology to send a person’s entire pattern to a new location, surely it would require less bandwidth to send only their thoughts or commands to the remote location. Also there would be no risk of being damaged due to copying errors. A brainless body could be constructed there (either in the exact likeness of the person, or in a form designed to make optimal use of resources), and the original person could control it using a mind reading interface such that they experience what the remote avatar is experiencing.
This would be more efficient and less risky, don’t you think?
It still doesn’t answer the sticky question of “Would my experience be continuous if my particles were disassembled and re-arranged?”, but I think it addresses the practical transportation problem behind it (also, you’d likely get to inhabit a variety of avatars, which would be cool). But back to the original question:
If all of my particles were disassembled and re-arranged, would I have a continuous experience or not? I had been basing this on whether there would be a new instance or not. But this confuses me as to whether there’s a new instance, and makes me ask whether being disassembled and re-assembled exactly the same way might mean I lose continuous experience even if I am the same instance.
So I have to answer the question of “What is continuous experience?” and “How does it work?”
Unfortunately, I see no way of testing for whether a consciousness is having a continuous experience, since it follows that new instances will pick up where previous instances left off, causing them to have the illusion of continuous experience, and disassembled instances will be dead and therefore incapable of responding about whether they’re having an experience. Not that I could test it anyway without a transporter, but this means I can’t imagine a scenario and reason out whether a disassembled instance of me would experience or not after being put back together exactly the same way.
Do you see a way to reason that out, or do you have a clarifying question we could ask?
Okay, so I guess what you’re saying here is that what you value about being alive is NOT the experience of life.
Nope, that’s not what I’m saying at all. All of the Daves have the experience of life, and I absolutely do value it, which is why I press the button that I expect to create more of it.
YOU won’t experience the event whatsoever. A copy of you will be there instead. Is this acceptable?
No, that simply isn’t true. I will in fact experience the event (assuming I can get back from my work assignment in time, or assuming that my employer uses a nondestructive teleporter such that I can both experience the event and do my job).
Okay, so (just ignoring for a moment the fact that the transporter itself has just been vaporized, I guess I’ll assume it’s intact)
No, sorry, I was unclear. The engine is going to overload in ten minutes, say, and the captain has the choice of transporting us off the ship before it explodes. Which, on your account, is not worth bothering with, since we’re going to be just as dead whether she does or not.
This [teleoperating remote bodies] would be more efficient and less risky, don’t you think?
Sure. Given the choice of telecommuting this way, rather than teleporting my body back and forth, I would probably choose tele-operating a remote body, assuming the experience was comparable.
Do you see a way to reason that out, or do you have a clarifying question we could ask?
No, not really, especially since you’re in the habit of not answering the questions I do ask. Either way, though, no: I think you’ve created a confusion here that is unresolvable as long as you hold on to your belief that there is some essence of selfness (continuous experience, identity, real-me-ness, whatever) that is undetectable and unduplicatable but somehow still important.
Your model creates the possibility that I am not the person I was a moment ago and there’s simply no fact about the world that would resolve the question of whether that possibility is actual or not. This seems absurd to me: if nothing depends on it, I simply don’t care whether it’s true or not; if we insist that that is what it means to be “really me”, then I must accept that maybe I’m not “really me” and I’m OK with that.
I haven’t touched on personal identity—for clarity I’m not equating that with continuous experience nor am I even equating continuous instance distinctions with continuous experience at this point. (I guess I’m interpreting personal identity either like “self” or identity the way it’s used in “identity theft”—like a group of accounts and things like SSNs that places use to distinguish one person from another. I’m not using that term here and I’m not sure what you mean by it.).
I’m not trying to figure out whether my “self” maps to certain particles. I feel sure that “self” is copy-able (though I haven’t formally defined self yet). However, I am separating self from continuous experience (like you can see in my Elements of Death comment).
What I am trying to do is to figure out whether the continuous experience of my current instance is linked to specific particles. The reason I am asking that question is made apparent in my transporter failure scenario.
No, temporary unconsciousness is not the same thing as permanent unconsciousness; you perceive yourself to return to consciousness. The tricky part is unpacking the “you” in that sentence. Conventionally it unpacks to a conscious entity, but that clearly isn’t useful here because you (by any definition) aren’t continuously conscious for the duration. It could also unpack to about fifty to a hundred kilos of meat, but whether we’re talking about a transporter-clone or an ordinary eight hours of sleep, the meat that wakes up is not exactly the meat that goes unconscious. In any case, I’m having a hard time thinking of ways of binding a particular chunk of meat to a particular consciousness that end up being ontologically privileged without invoking something like a soul, which would strike me as wild speculation at best. So what does it unpack to?
It’s actually very tricky to pin down the circumstances which constitute death, i.e. permanent cessation of a conscious process, once you start thinking about things like Star Trek transporters and mind uploading. I don’t claim to have a perfect answer, but I strongly suspect that the question needs dissolving rather than answering as such.
I think there is no such mystery about pattern continuation. People just get confused when the word “identity” comes up. If you really worry about these things, think about normal cases, like you now and you tomorrow, and try to find a flaw in the argument.
If we ever extend human understanding far enough to grasp the truth, it’s going to be something bizarre—that you are a perspective vortex in your cortical quantum fields, something like that, something strange and hardly expressible with our current concepts. And meanwhile, we continue to develop our abilities to analyze the brain materially, to shape it and modify it, and to make computer hardware and software. Those abilities are like riding a bicycle, we can pick them up without really knowing what we are doing or why it works, and we’re in a hurry to use those abilities too.
So most likely, that biolinguistic hybrid, the primate who thinks in words, is going to create its evolutionary successor without really understanding what it’s doing, and perhaps even while it is possessed with a false understanding of what it is doing, a fundamentally untrue image of reality. That’s what I see at work in these discussions of mind uploading and artificial intelligence: computational superstition coupled to material power. The power means that something will be done, this isn’t just talk, there will be new beings; but the superstition means that there will be a false image of what is happening as it happens.
If you use the concepts of “dying” or “personal identity” in this context, you risk committing the noncentral fallacy, since uploading is an atypical case of their application, and their standard properties won’t automatically carry over.
For example, concluding that an instance of you “actually dies” when there is also a recent copy doesn’t necessarily imply that something bad took place, since even if you do in some sense decide that this event is an example of the concept of “dying”, this is such an atypical example that its membership in that concept provides only very weak evidence for sharing the property of being bad with the more typical examples. Locating this example in the standard concepts is both difficult and useless, a wrong question.
The only way out seems to be to taboo the ideas of “dying”, “personal identity”, etc., and fall back on the arguments that show in what way typical dying is bad, and non-dying is good, by generalizing these arguments about badness of typical destruction of a person to badness of the less typical destruction of a copy, and goodness of not destroying a person to goodness of having a spare copy when another copy is destroyed.
It seems to me that the valuable things about a living person (we’ve tabooed the “essence of personal identity”, and are only talking about value) are all about their abstract properties, their mind, their algorithm of cognition, and not about the low-level details of how these abstract properties are implemented. Since destruction of a copied person preserves these properties (implemented in the copy), the value implemented by them is retained. Similarly, one of the bad things about typical dying (apart from the loss of a mind discussed above) seems to be the event of terminating a mind. To the extent this event is bad in itself, copying and later destroying the original will be bad. If this is so, destructive uploading will be better than uploading followed by destruction of the conscious original, but possibly worse than pure copying without any destruction.
Almost everybody starts with the intuitive notion that uploading will kill the “real you”. The discussion seems to have been treading the same ground since at least the 1990s, so I don’t really expect anything new to come out of yet another armchair rehash.
Chapters 9 and 10 in David Chalmers’ singularity paper are a reasonably good overview of the discussion. Chalmers ends up finding both stances convincing given different setups for a thought experiment, and remains puzzled about the question.
Really? I started with the assumption that uploading wouldn’t necessarily be destructive, but people chose to discuss destructive uploading because it simplifies some of the philosophical questions. On second thought, there may also be a bias from science fiction, where promising developments are likely to have a horrific downside.
Yeah, assuming some sort of destructive upload in my comment there, naturally. My assumptions for the initial stance most people will have for the various scenarios are basically:
Non-destructive upload, the initial person remains intact, an exact upload copy is made: The “real you” is the original human, all that matters is whether real you lives or dies.
Destructive upload, the initial person gets knocked out and ground to pieces to make the exact upload copy: “Real you” dies from being ground to pieces, end of story.
Moravec transfer, the initial person’s brain gets converted to a machine substrate one neuron at a time: People other than John Searle seem to be OK with personal continuity remaining in this scenario.
Also, embracing the possibility of nondestructive uploads requires us to think about our identities as potentially non-uniquely instantiated, which for a lot of people is emotionally challenging.
Define “dying”.
Elements of death:
There are a lot of elements to dying, and if technology progresses far enough, I think we could have incidents where some, but not all, of them happen. However, depending on what exactly happens, some of these should still be regarded as being just as bad as death.
Death of experience
Your experience of the world stops permanently.
This is important because you will never experience pleasure again if you stop experiencing permanently.
Death of self
Your personality, memories, and so on (your “software pattern”) cease to exist.
This is important because other people are attached to them and will be upset if they can’t interact.
Death of genes
Your genetic material, your “hardware pattern”, is lost. Your genetic line may die out.
This is unacceptable if you feel that it’s an important purpose in life to reproduce.
Death of influence
It becomes impossible for you to consciously influence the world.
This is important because of things like the necessity of taking care of children or a goal to make a difference.
Death of body
Your body, or the current copy of your “hardware” becomes unusable.
This is important if your brain isn’t somewhere else when it happens but may not be important otherwise.
There may be others. Can you think of more?
It’s a good list. Now to define “you” and see if an upload fits into the definition and if so, how much of your list applies.
I am uploaded. A copy of my “self” is made (I believe this is the definition of “you” people are using when they’re talking about uploading themselves) and the original is disassembled or dies of natural causes. That’s all that was done. I’m assuming no other steps were taken to preserve any other element of me, because it was believed that uploading me meant I wouldn’t die. I’ll call the original Epiphany and the copy Valorie.
Epiphany:
Death of body—Check. Brain was in it? Check.
Death of experience—Check. (See previous note about my brain.)
Death of genes—Check. Pregnancy is impossible while dead. Genes were not copied.
Death of influence—Check. Upload was not incarnated.
Death of self—No. There is a copy.
Valorie:
Death of body—No body. It’s just a copy.
Death of experience—Doesn’t experience, it isn’t being run, it’s just a copy.
Death of genes—Doesn’t have genes, a copy of my “self” is being stored in some type of memory instead of a body.
Death of influence—Cannot influence anything as a copy, especially if it is not being run.
Death of self—No. It’s preserved.
Conclusion:
I am dead.
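To keep the bookkeeping explicit, here is a minimal sketch of that checklist in Python. Everything in it is an assumption of mine: the entity names, the five booleans, and my readings of the scenario above. It is just the tally made executable, not a standard model of anything.

```python
# Minimal sketch of the "elements of death" checklist above.
# The entity names and boolean readings are assumptions taken
# from this one scenario, not a standard model.

ELEMENTS = ["experience", "self", "genes", "influence", "body"]

def dead_elements(entity):
    """Return which elements of death apply to this entity."""
    return sorted(e for e in ELEMENTS if not entity[e])

# Epiphany: the original, disassembled after a bare-bones upload.
epiphany = {"experience": False, "self": True,  # self survives, as the copy
            "genes": False, "influence": False, "body": False}

# Valorie: the upload copy, stored but never run and never incarnated.
valorie = {"experience": False, "self": True,
           "genes": False, "influence": False, "body": False}

print(dead_elements(epiphany))  # ['body', 'experience', 'genes', 'influence']
print(dead_elements(valorie))   # the same four; only "self" is preserved
```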
Of course it’s not hard to imagine other scenarios where everything possible is copied and the copy is incarnated, but Epiphany would still stop experiencing, which is unacceptable, so I would still call this “dead”.
I’m perfectly willing to accept that if you get uploaded and then nobody ever runs the upload then that’s death. But if you’re trying to give the idea a fair chance, I’m not sure why you’re assuming this.
There’s one really important detail here. If you get uploaded, even if the copy is put into a body exactly like yours and your genes are fully preserved and everything goes right, you still stop experiencing as soon as you die.
Is that acceptable to you?
Okay, I was pretty sure that was your real point, so I just wanted to confirm that and separate away everything else.
But to be honest, I don’t have a real answer. It’s definitely not obvious to me that I will stop experiencing in any real way, but I have a hard time dismissing this as well. One traditional answer is that “you will stop experiencing” is incoherent, and that continuity of experience is an illusion based on being aware of what you were thinking about a split second ago, among other things.
The continuation of experience argument is compelling if you consider my transporter malfunction scenario.
That is one situation that would definitely result in a discontinuation of experience.
Other scenarios, which I have discussed with Saturn and TheOtherDave (a wonderfully ironic handle for this discussion), have led me to consider possibilities like being re-assembled from the exact same particles in the same or different locations, and being transformed over time via neuron replacement or something similar.
I decided that being transformed would probably maintain continuity of experience, and that being re-assembled out of the same particles in the exact same locations would probably also result in continuity of experience (because I can’t see that as a second instance). But I am not sure about it, because the same particles in different locations might not qualify as the same instance, which calls into question whether being the same instance even guarantees continuous experience. And I’m having a hard time thinking of a clarifying question or hypothetical scenario to use for working it out. (It’s all in the link right there.)
What’s not incoherent, though, is looking forward to experiencing something in the future, yet knowing you’re going to be disassembled by a transporter and a copy of you will experience it instead. That, in no uncertain terms, is death. We can tell ourselves all day that having a continuous experience relies on being able to connect your current thought to your previous thought, but the real question we need to ask is “Will I have any thoughts at all?” The connected-thoughts question is a red herring (it relates only to your second instance, not your first one) and a poor clarifying question for telling whether you, the original, survived.
In coherent terms, what we should avoid is this:
Either way, only a copy of you will experience it, because the non-copy of you is trapped in the present and has no way to experience the future. The copy can be made artificially, using a transporter, or naturally as time passes. Why is there a difference?
Why do you think that time copies you?
Well, it doesn’t even perfectly preserve the original, so I fail to see what else it could be but a copy.
You might argue that for some reason the time-derived copy is more important than an artificial copy, of course, but why?
Why is the time-copy even a copy though? If we call some A a copy of some original B, then we have to have reason to associate A with B (if A and B are paintings, the one is a copy of the other if it closely resembles it, say). What association does EpiphanyA at t0 have with EpiphanyB at t1?
You… don’t see a reason to associate future-you with present-you?
Well, I think I persist through time. But you’re saying that time makes copies of me, and I’m curious to know why you think those things are copies and not just new (very short lived) people.
I don’t think the distinction is meaningful. Possibly we just mean different things by the word “copy”?
I think I should, at this point, just ask for some elaboration on the theory.
Wait, wait, wait. I’m still confused as to why you think that time is copying me. By what mechanism does time create new instances of me and destroy the old ones? At what interval does this happen? Has anyone actually observed this phenomenon or is it just a theory?
I could reverse the question. Why do you think you’re the same person at different times, as opposed to being a copy? By what mechanism is a single person carried forward through time? Has anyone actually observed this phenomenon, or is it just a theory?
It’s not clear to me that those are fair questions, but then it’s not clear to me that their reversals are fair, either.
Occam’s razor. The theory that I’m being copied and destroyed over and over again doesn’t explain anything additional that I can think of, so it’s more likely that the simpler idea (that I am not being copied and destroyed over and over) is true.
Also, not believing that I am being copied does not qualify as a belief. That’s just lack of belief in a theory.
If you guys believe I’m being copied over and over again, that IS a belief though, and if you want me to agree with it, the burden of proof lies on you.
I think both of you are sorta failing to address (or not addressing clearly enough) the point that objects being “copied”, “destroyed”, or “persisted” is not really meaningful at the level of physics at all—like envisioning electrons as billiard balls, it’s mapping a concept that’s intuitive in one’s mind onto a physical world where it does not apply.
At the bottommost level of quantum physics that we know of, electrons have no individual identity. From what I gather, “destroying” an electron here and “copying” it there is physically indistinguishable (even in principle) from “moving” it from here to there. Those concepts are distinguishable in our evolution-adapted minds, not in reality.
That having been said, I don’t dismiss your concerns about uploading altogether, because we still aren’t unconfused enough about consciousness to be able to clarify to ourselves what the fuck it’s supposed to do… I would really like to be unconfused about qualia and the nature of existence before I do any uploading of myself.
Yup. Which is why I say it’s not clear to me those are fair questions.
That said… if in the future two entities exist that are physically and behaviorally indistinguishable from one another, and one of them is me, it follows that either both of them are me, or one and only one of them is me. In the latter case, it seems “me-ness” depends on some physically and behaviorally undetectable attribute which only one of them has.
Occam’s razor also seems to suggest that both of them are me, since the alternative posits an additional unnecessary entity in the system.
I’m interpreting this as difficulty figuring out who the burden of proof belongs to. I think it helps to realize that with each theory there are at least three options:
Believe it’s true. Believe it’s false. Not believe anything.
If you say “There’s a dragon in my garage.” and I say “I don’t believe this.” I am not saying “I believe there is no dragon in your garage.” I’m saying “I don’t have a belief about this.”
Now, I could go in there and inspect everything and conclude that there’s no dragon, at which point I’d have a belief that there isn’t a dragon. But why should I do this? You might claim next that there’s a God in your garage. Then I’d have to go to all sorts of work trying to prove there is no God in your garage. Then you could claim that there’s a pink elephant, and on and on.
This is why, if you want people to believe something, the burden of proof lies on you—you can’t just turn it around and say “Well, prove that it’s NOT this way!”—if that were the rule, people would troll the crap out of us with dragons and Gods and pink elephants and such.
Does that give you any clarity in whose burden it is to offer evidence regarding time copying people?
No. The additional entity is not unnecessary. The second instance is absolutely required to explain the way you reacted to my teleporter with technical failure argument.
I am surprised you didn’t update after that by recognizing that there were two separate instances, and I don’t know what to do about it. I’m stumped as to why you aren’t seeing it this way.
Perhaps you are. That’s certainly not what I would be saying if someone said that to me and I gave that reply.
Proof in the sense you are discussing here is mostly useful when trying to win debates. I have no particular desire for you to believe anything in particular.
The unnecessary entity in the second case is the physically and behaviorally undetectable attribute which only the “real me” has. I don’t see any need for it, and I have no idea why you think it’s necessary to explain any part of my reaction to any of your hypotheticals.
So your definition of self stops at the physical body? Presumably mostly your brain? Would a partial brain prosthesis (say, to save someone’s life after a head trauma) mimicking the function of the removed part make the recipient less of herself? Does it apply to the spinal cord? How about some of the limbic system? Maybe everything but the neocortex can be replaced without affecting “self”? Where do you put the boundary and why?
No. As I mentioned, “This (referring to Death of Body) is important if your brain isn’t somewhere else when it happens but may not be important otherwise.”
If you get into a good replacement body before the one you’re in dies, you’re fine.
If you want to live, a continuation of your experience is required. Not the creation of a new instance of the experience. But the continuation of my (this copy’s) experience. That experience is happening in this brain, and if this brain goes away, this instance of the experience goes away, too. If there is a way to transfer this experience into something else (like by transforming it slowly, as Saturn and I got into) then Epiphany1's experience would be continued.
If Epiphany1's experience continues and my “self” is not significantly changed, no. That is not really a new instance. That’s more like Epiphany1.2.
Not sure why these are relevant. OK, the limbic system is sort of relevant. I’d still be me with a new spinal cord or limbic system, at least according to my understanding of them. Why do you ask? Maybe there’s some complexity here I missed?
Hmmm. If my whole brain were replaced all at once, I’d definitely stop experiencing. If it were replaced one thing at a time, I may have a continuation of experience on Epiphany1, and my pattern may be preserved (there would be a transformation of the hardware that the pattern is in, but I expect my “self” to transform anyway, that pattern is not static).
I am not my hardware, but I am not my software either. I think we are both.
If my hardware were transformed over time such that my continuity of experience was not interrupted, then even if I were completely replaced with a different set of particles (or enhanced neurons or something), as long as my “self pattern” wasn’t damaged, I would not die.
I can’t think of a way in which I could qualify that as “death”. Losing my brain might be a cause of death, but just because something can cause something else doesn’t mean it does in every instance. Heat applied to glass can melt it and destroy its form, but we also apply heat to iron to make steel.
I’m trying to think of a metaphor that works for similar transformations… a larva turns into a butterfly. A zygote turns into a baby, and a baby into an adult. No physical parts are lost in those processes that I am aware of. I do vaguely remember something about a lot of neural connections being lost in early childhood… but I don’t remember enough about that to go anywhere with it. The chemicals in my brain are probably replaced quite frequently, if the requirements for ingesting things like tryptophan are any indicator. Things like sugar, water, and nutrients are being taken in, and byproducts are being removed. But I don’t know how much of the stuff in my skull is temporary. Hmm…
I want to challenge my theory in some way, but this is turning out to be difficult.
Maybe I will find something that invalidates this line of reasoning later.
You got anything?
So the “continuity of experience” is what you find essential for not-death? Presumably you would make exceptions for loss of consciousness and coma? Dreamless sleep? Anesthesia? Is it the loss of conscious experience that matters or what? Would a surgery (which requires putting you under) replacing some amount of your brain with prosthetics qualify as life-preserving? How much at once? Would “all of it” be too much?
Does the prosthetic part have to reside inside your brain, or can it be a machine (say, like a dialysis machine) that is wirelessly and seamlessly connected to the rest of your brain?
If it helps, Epiphany has implied elsewhere, I think, that when they talk about continuity of experience they don’t mean to exclude experience interrupted by sleep, coma, and other periods of unconsciousness, as long as there’s experience on the other end (and as long as the person doing that experiencing is the same person, rather than merely an identical person).
Right, it’s her definition of “same” vs “identical” that I am trying to tease out. Well, the boundary between the two.
Yeah, that has gotten tricky. I’ve worded the question as “Same instance or different instance?” I’ve also discovered a stickier problem: just because a re-assembled me might qualify, in all ways, as “the same instance”, I am not sure that guarantees the continuation of my experience. I explore that here, in two examples: being re-assembled from the same particles in the same arrangement, and in a different arrangement. (Scroll to “Scenarios meant to explore instance differentiation and the relation to continuous experience”—I labeled it to make it easy to find.)
As TheOtherDave pointed out, the question is what is, in your opinion, the essence of “self”. Clearly it cannot just be all the same “particles” (molecules?), since particles in our bodies change all the time. You seem to be relating self with consciousness, but not identifying the two. That’s why I’m asking questions aimed to nail the difference. That’s why I asked these questions earlier:
“The essence of self” seems like the wrong question to me. That sounds too much like “What is the essence of your personality?” and that’s irrelevant here.
What I’m talking about is my ability to experience. We all have an ability to experience (I assume) that, although it may be shaped by our personalities, it is not our personalities. Example:
A Christian sees a Satanic ritual. A Satanist sees the same ritual.
The Christian is horrified. The Satanist thinks it’s great.
The reason one was horrified and the other thought it was great is because they have different beliefs, possibly different personality types, different life experiences and possibly even different neurological wiring.
What did they have in common?
They both saw a Satanic ritual.
THAT is the part I am trying to point out here. The part that experiences. It’s not one’s personality, or beliefs, or experiences or neurological traits.
I am saying essentially “Even if personality, beliefs, experiences and neurological differences are copied, this does nothing to guarantee that the part of you that experiences is going to survive.” Asking to define the essence of self is not relevant since I’m saying to you “Even if self is copied, this thing that I am talking about may not survive”.
Here is a clarifying example:
Transporter Malfunction Scenario
Note to self: Thinking about motion might be the key to this.
How would you convince someone who thinks instants of experience are real and memories that give instants of experience historical context are real, but doesn’t believe in any meaningful process of forward continuity from one instant of experience to another beyond the similarity of memories, to believe otherwise? There’s no difference between blinking, taking a nap and being destructively teleported in this stance. It’s all just someone experiencing something now, and someone else with very similar memories that include the present experience moment experiencing something else in the future.
Well, that makes the second time you ignored my questions, so I will tap out.
I’ve made a note to self that this seems like a pattern with us, as you have complained about a question being ignored a few times now. Not sure what I should be doing about it when I don’t see a question as relevant, but maybe I should just say “I don’t see how this is relevant.”
I don’t know how I got the habit of ignoring things that seem irrelevant and moving on to whatever seems relevant, but I can see why it would be annoying, so I will be thinking about that. Thanks for getting me to see the pattern.
Questions to consider: Would you feel the same way about using a Star Trek transporter? What if you replaced neurons with computer chips one at a time over a long period instead of the entire brain at once? Is everyone in a constant state of “death” as the proteins that make up their brain degrade and get replaced?
The million dollar question: Do I stop experiencing?
If I were to be disassembled by a Star Trek transporter, I’d stop experiencing. That’s death. If some other particles elsewhere are reassembled in my pattern, that’s not me. That’s a copy of me. Yes, I think a Star Trek transporter would kill me. Consider this: If it can assemble a new copy of me, it is essentially a copier. Why is it deleting the original version? That’s a murderous copier.
I remember researching whether the brain is replaced with new cells over the course of one’s life and I believe the answer to that is no. I forgot where I read that, so I can’t cite it, but due to that, I’m not going to operate from the assumption that all of the cells in my brain are replaced over time.
However, if one brain cell were replaced in such a way that the new cell became part of me, and I did not notice the switch, my experiencing would continue, so that wouldn’t be death. Even if that happened 100,000,000,000 times (or however many times would equate to a complete replacement of my brain cells) that wouldn’t stop me from experiencing. Therefore, it’s not a death—it’s a transformation.
If my brain cells were transformed over time into upgraded versions, so long as my experience did not end, it would not be death. Though, it could be said to be a transformation—the old me no longer exists. Epiphany 2012 is not the same as Epiphany 1985 because I was a child then, but my neural connections are completely different now and I didn’t experience that as death. Epiphany 2040 will be completely different from Epiphany 2012 in any case, just because I aged. If I decide to become a transhuman and the reason I am different at that time is because I’ve had my brain cells replaced one at a time in order to experience the transformation and result of it, then I have merely changed, not died.
It could be argued that if the previous you no longer exists, you’re dead, but the me that I was when I was two years old or ten years old or the me I was when I was a zygote no longer exists—yet I am not dead. So the arguer would have to distinguish an intentional transformation from a natural one in a way that sets it apart as having some important element in common with death. All of my brain cells would be gone, in that scenario, but I’d say that’s not a property of death, just a cause of death, and that not everything that could cause death always will cause death. Also, it is possible to replace brain cells as they die, in which case, the more appropriate perspective is that I was being continued, not replaced. Doing it that way would be a prevention of death, not a cause of death. I would not technically be human afterward, but my experience would continue, and the pattern known as me would continue (it is assumed that this pattern will transform in any case, so I don’t see the transformation of the pattern as a definite loss—I’d only see it that way if I were damaged) so I would not consider it a death.
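As a loose analogy (and only an analogy: nothing about Python object identity settles anything about consciousness), here is a sketch of the “same instance, new parts” versus “new instance, same pattern” distinction I’m leaning on. All the names and values are made up:

```python
# Loose analogy only: Python object identity is not consciousness.
# It just makes "same instance, new parts" vs. "new instance" concrete.

brain = ["neuron_%d" % i for i in range(5)]
original_id = id(brain)

# Gradual replacement: swap one component at a time, in place.
# The running container is never destroyed at any point.
for i in range(len(brain)):
    brain[i] = "chip_%d" % i

assert id(brain) == original_id      # same instance, all-new parts

# All-at-once replacement: build a perfect duplicate instead.
duplicate = list(brain)
assert duplicate == brain            # identical pattern...
assert id(duplicate) != original_id  # ...but a distinct instance
```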
The litmus test question is not “Would the copy of me continue experiencing as if nothing had happened?” The litmus test question is “Will I, the original, continue experiencing?”
Here are two more clarifying questions:
Imagine there’s a copy of you. You are not experiencing what the copy is experiencing. Its consciousness is inaccessible to you, the same way that a twin’s consciousness would be. Now they want to disassemble you because there is a copy. Is that murder?
Imagine there’s a copy of you. You’ve been connected to it via a wireless implant in your head. You experience everything it experiences. Now they want to disassemble you and let the copy take over. If all the particles in your head are disassembled except for the wireless implant, will you continue experiencing what it experiences, or quit experiencing altogether?
I used to think this way. I stopped thinking this way when I realized that there are discontinuities in consciousness even in bog-standard meat bodies—about one a day at minimum, and possibly more since no one I’m aware of has conclusively established that subjective conscious experience is continuous. (It feels continuous, but your Star Trek transporter-clone would feel continuity as well—and I certainly don’t have a subjective record of every distinct microinstant.)
These are accompanied by changes in physical and neurological state as well (not as dramatic as complete disassembly or mind uploading, but nonzero), and I can’t point to a threshold where a change in physical state necessitates subjective death. I can’t even demonstrate that subjective death is a coherent concept. Since all the ways I can think of to get around this require ascribing some pretty sketchy nonphysical properties to the organization of matter that makes up your body, I’m forced to assume in the absence of further evidence that there’s nothing in particular that privileges one discontinuity in consciousness over another. Which is an existentially frightening idea, but what can one do about it?
(SMBC touched on this once, too.)
What do you mean by discontinuities? I have not heard about this.
Sleep, total anesthesia, getting knocked on the head in the right way, possibly things like zoning out. Any time your subjective experience stops for a while.
Actually, I expect that our normal waking experience is also discontinuous, in much the same sense that our perception of our visual field is massively discontinuous. Human consciousness is not a plenum.
Yeah, I was trying to get at that with the parenthetical bit in my first paragraph. Could probably have been a bit more explicit.
OK, are you saying that temporarily going unconscious is the same as permanently going unconscious?
Would you assert that, because we temporarily go unconscious, permanent unconsciousness is not death?
Temporarily going unconscious is not the same as permanently going unconscious.
Whether or not we temporarily go unconscious does not determine whether permanent unconsciousness is death.
Now, some questions of mine: you said “If I were to be disassembled by a Star Trek transporter, I’d stop experiencing. That’s death.”
When you fall asleep, do you stop experiencing?
If so, is that death?
If it isn’t death, is it possible that other things that involve stopping experiencing, like the transporter, are also not death?
We need to focus on the word “I” to see my point. I’m going to switch that out with something else to highlight this difference. For the original, I will use the word “Dave”. As tempting as it is to use “TheOtherDave” for the copy, I am going to use something completely different. I’ll use “Bob”. And for our control, I will use myself, Epiphany.
Epiphany takes a nap. Her brain is still active but it’s not conscious.
Dave decides to use a teleporter. He stands inside and presses the button.
The teleporter scans him and constructs a copy of him on a space ship a mile away.
The copy of Dave is called Bob.
The teleporter checks Bob before deleting Dave, to make sure the copy was made successfully.
Dave still exists, for a fraction of a second, just after Bob is created.
Both of them COULD go on existing, if the teleporter does not delete Dave. However, Dave is under the impression that he will become Bob once Bob exists. This isn’t true—Bob is having a separate set of experiences. Dave doesn’t get a chance to notice this because in only fractions of a second, the teleporter deletes Dave by disassembling his particles.
Dave’s experience goes black. That’s it. Dave doesn’t even know he’s dead because he has stopped experiencing. Dave will never experience again. Bob will experience, but he is not Dave.
Epiphany wakes up from her nap. She is still Epiphany. Her consciousness did not stop permanently like Dave’s. She was not erased like Dave.
Epiphany still exists. Bob still exists. Dave does not.
The problem here is that Dave stopped experiencing permanently. Unlike Epiphany who can pick up where Epiphany left off after her nap because she is still Epiphany and was never disassembled, Bob cannot pick up where Dave left off because Bob never was Dave. Bob is a copy of Dave. Now that Dave is gone, Dave is gone. Dave stopped experiencing. He is dead.
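For what it’s worth, here’s the same story as a Python sketch. This is an analogy only: a dict stands in for a person and id() for “which instance”; the names are just the ones from the story above.

```python
import copy

# Analogy only: a dict stands in for a person, id() for "which instance".
dave = {"memories": ["wedding", "first day at work"], "location": "Earth"}

bob = copy.deepcopy(dave)   # the teleporter's scan-and-reconstruct step
bob["location"] = "spaceship"

assert bob["memories"] == dave["memories"]   # identical pattern...
assert id(bob) != id(dave)                   # ...but two separate instances

del dave  # the disassembly step: one instance is gone, and nothing
          # about deleting dave transfers anything to bob
```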
Ah! So when you say “If I were to be disassembled by a Star Trek transporter, I’d stop experiencing” you mean “I’d [permanently] stop experiencing.” I understand you now, thanks.
So, OK.
Suppose Dave decides to go to sleep. He gets into bed, closes his eyes, etc.
The next morning, someone opens their eyes.
How would I go about figuring out whether the person who opens their eyes is Dave or Bob?
Well, first, is there a human copier nearby? If not, you’re probably Dave.
How about this: If you had stepped into a teleporter and pressed the button, how would you know that it killed you?
This is exactly backwards.
I recognize a copier because it makes copies. That’s how I know something is a copier.
If I need to know whether something is a copier before I can decide whether what it creates is a copy or not, there’s something wrong with my thinking.
I wouldn’t, naturally.
Of course, if Dave steps into an incinerator and presses the button, Dave also doesn’t know that the incinerator killed Dave.
Dave is just dead, and knows nothing.
OTOH, if Dave steps into a non-incinerator and presses the button, Dave knows it didn’t kill Dave.
And the way that Dave knows this is that something is standing there, not-dead, after pressing the button, and that something identifies as Dave, and resembles Dave closely enough.
This happens all the time… I have pressed many buttons in my life, and I know they haven’t killed me, because here I am, still alive.
And I expect this is exactly what happens with a properly functioning teleporter. I press the button, and in the next moment something is aware of being Dave, and therefore not dead. It just happens to be in a different location.
Okay, so would you recommend I check under my bed tonight for anything that might make a copy of me and disassemble the original? I need something more to go on. I’m having a hard time not equating this with worrying about boogeymen.
Actually, for at least a few seconds, possibly a few minutes, Dave would be screaming in agony and he would most certainly notice that he is experiencing death by incineration.
Unless the non-incinerator happens to be a human copier, and Dave did not recognize it at first.
Yes, exactly. The original Dave has died in such a way that he didn’t even notice. Dave2 definitely doesn’t want to think that an exact copy of himself died just a moment ago, and really definitely doesn’t want to have to worry that he will need to cease experiencing in order to “go back” to where he came from. So, due to normalcy bias, Dave2 declares that the fact that Dave2 exists means that Dave1 never died, and enjoys the confirmation bias that this non sequitur gives him until he ceases to experience when “loaded” back onto his space ship.
That’s one insidious death.
Two, actually. :p
Indeed! And you should equate it with worrying about boogeymen. It’s a silly thing to worry about.
The question is why it’s silly.
I would say it’s silly not because I haven’t noticed any boxes marked “human copier” under my bed, but because every time in the past that I’ve woken up, I’ve resembled the person who went to bed so closely that it’s been ridiculous to worry that I might not be the same person.
Nope.
Dave would notice that he’s experiencing being incinerated, certainly, if the incinerator were as slow as you describe. But he would not experience death by incineration. He wouldn’t experience death at all. Here’s how I know: as long as Dave is experiencing anything, Dave isn’t yet dead. And if he’s not dead, he certainly can’t be experiencing death.
(nods) Just like his predecessor did the night before when he went to bed, and Dave woke up in his place.
But of course, as above, that was too silly to worry about, just like boogeymen.
So is this.
Okay, I guess you were trying to say that my concern about being disassembled after being copied as a method of “transportation” is the equivalent of worrying about boogeymen?
“OH GOD I’M DYING AHHH!” < I call this experiencing death. Different definitions, I guess. If you want to get technical about it, and talk about death in a solely tangible way, sure, Dave isn’t dead when he’s thinking that. But Dave is experiencing death emotionally and intellectually. He knows he’s in the process of dying, and that death is inevitable. He also feels emotional (and, well, physical) pain that amounts to an experience worthy of symbolizing death. Maybe it would be more grammatically correct, though, if I said he is experiencing dying. In any case, I meant to differentiate this from transporter death, because with transporter death, Dave believes that he is going to survive the “transportation” and doesn’t feel any emotional or physical pain, so there’s no knowledge of or suffering about his death.
If I offered you the free use of a device that could make a copy of you, put it anywhere you want, and cause the current you to be disassembled and dispersed into the surrounding environment (2-way trip), would you use it?
(shrug) OK, sure. Incidentally, by your definition, many many people walking around today have experienced death. Hell, I’ve experienced death myself.
Anyway, using your definition, if I stepped into what I thought was a molecular disassembler that would kill me, and it disassembled me slowly enough that I experienced the process of being disassembled, I would “experience death” by your definition, and I would know I’d experienced it the same way I know I experience the taste of cheese when I experience the taste of cheese. Later, I would look around the teleport receiver booth and say “Huh. I’m not dead? Cool” and go on with my life.
That is, I would have “experienced death” but not actually died, just as many many people do in real life when they wake up after heart attacks, accidents, etc.
Assuming that it reliably creates that copy? Absolutely. Far more convenient than airplanes.
(By “reliably” here I just mean that I trust it to actually create a close-enough copy, and not to instead create some imperfect copy that does not resemble me closely enough to satisfy my preferences regarding consistency over time.)
Yes.
I already know what your bumper sticker in the future is going to say:
I break (down) for transporters!
Now, say the transporter has a malfunction at the exact fraction of a second between the time when Dave2 has been verified as a complete copy and the time when Dave1 is going to be disassembled.
The technician says it’s going to take three hours to fix. You go out and catch a movie. After the movie, you go outside and stretch, and you see that it’s a beautiful day. You have two options:
Go to the transporter and get disassembled.
Avoid getting disassembled by the transporter.
What do you choose?
I choose #2, of course.
More than that… if I arrive at the transporter complex and am told that this is an option, that I can duplicate myself and send one copy to my destination while the other one stays here, I absolutely prefer to be duplicated… no reason for a conveniently timed technical failure.
Indeed, I might postpone the trip altogether and spend the next week right here hanging out with myself and having threesomes with our husband and meeting with lawyers to figure out what we do with our funds and material goods.
Relatedly, given a button that I know creates two perfect copies and then picks one of the resulting three Daves at random to destroy an hour later, I press it.
At the time of pressing the button, I’m indifferent as to which of the three copies gets selected for destruction… they are all me.
After pressing the button, one of me goes “Crap! I’m going to die in an hour!” and is unhappy about it, and the other two of me go “Whew! Dodged that bullet!” but feel bad for the third of me.
On my account it does not matter in the least which one of the three “was the original me,” assuming there’s even any way to tell, which there may not be.
Now, a question for you.
I enter a spaceship traveling to Alpha Centauri in suspended animation, along with all my friends and loved ones. We could have teleported instead, but we’ve been convinced by your account that this would be suicidal, so we opted for the slower but safer route.
While we lie in frozen sleep, the spaceship suffers a technical failure in mid-flight that will shortly reduce the ship and everything in it to its constituent atoms. The ship’s captain has the option of using the ship’s transporter to beam us from the doomed ship to the surface of Alpha Centauri.
As far as I can tell, on your account, there’s no particular reason why she should do so… either way, we’re all going to die. Sure, if she does so some complete strangers will pop into existence on Alpha Centauri, but what has that got to do with her? The birthrate on Alpha Centauri is more than high enough already; creating more new people isn’t particularly valuable.
Is that right?
Suppose she does so, though, for whatever reason.
So someone identical to me (but who on your account is not me, since I died on the ship) wakes up in a thawing chamber on Alpha Centauri, alongside a bunch of thawed people who are identical to my friends and loved ones, and all of us are under the (on your account deluded) belief that we are the same people who entered coldsleep. We throw a big party to celebrate our safe arrival on a new world.
During that party, we turn on the news and learn for the first time about the ship’s actual fate.
We are presumably horrified at the sudden discovery that we’re not who we thought we were.
The person with my memories looks at the man whom, a moment earlier, he’d thought was his husband, and becomes convinced it’s actually a complete stranger… that they never actually got married. Indeed, they just met a few minutes ago, at the beginning of this party. He’s been making out for the last five minutes with a complete stranger!
All around the room, similar realizations are being made, as what had previously been a celebration of safe arrival becomes a wake for me and my friends, who are on your account irretrievably and tragically dead.
Yes? Is this how you envision the situation?
Scenario meant to discover whether the experience of life is valued
Okay, so I guess what you’re saying here is that what you value about being alive is NOT the experience of life.
How do you feel about this scenario:
You and your husband are planning to go to a really awesome event soon. Maybe it’s the Singularity summit, maybe your favorite rock star is having a concert, maybe it’s the birth of a new baby you guys have been wanting for a long time. Imagine whatever sort of event you’d enjoy most.
You’re really looking forward to it!
Then work calls and says “Dave, two days from now, we need you to do this really important job 3,000 miles away from your ordinary work site. We couldn’t get you a plane ticket on such short notice, but fortunately we have a transporter.”
You agree, as it is your job.
Now you hang up the phone and your husband comes over, saying “I can’t believe we’re actually going to have this event soon! Isn’t it exciting!”
“Yeah, of course!” You say. But something feels wrong.
You realize that you are going to be disassembled by the transporter BEFORE the event happens.
YOU won’t experience the event whatsoever. A copy of you will be there instead.
Is this acceptable?
I certainly don’t want to live a lifestyle where we use transporters to go everywhere and each instance of me only experiences until the next transport. My life would never be long enough to experience any satisfaction. That’s reminiscent of the White Queen’s absurd rule in Through the Looking-Glass: “jam tomorrow and jam yesterday, but never jam today.”
A new instance of me can experience a future event I’ve been planning for tomorrow, and a past me may have experienced a continuous life before transporters, but most instances of me would just be slaving away during the few hours or days in which they experience, doing things like working or buying groceries, so that other temporary instances of myself can reap the rewards. The instances that do get a reward still wouldn’t get to experience the fulfillment of planning out a goal and following through—this is really important to me for satisfaction.
Scenarios meant to explore instance differentiation and the relation to continuous experience
Okay, so (just ignoring for a moment the fact that the transporter itself has just been vaporized, I guess I’ll assume it’s intact) I assume you’re saying the option is to reassemble those people out of their original particles. (Because if not, it isn’t any different from the transporter with technical failure argument, and I’d say that their experiencing ceased when they were disassembled, which is unacceptable, so they’re dead.)
First, I’d like to say that re-assembling the people, no matter what with, may be better than letting them die because that still saves them from four out of the five elements of death above.
So what we’re arguing about is not whether this rescues their genes, their influence in the world, their selves, or their bodies (that’s inconsequential in this case), but whether it saved their ability to experience.
I’m seeing several ways for this to go. The transporter could re-assemble them by putting the exact same particles into the exact same relative locations, or by putting the mass of particles from the accident into whatever locations (mostly not the same locations).
Putting the same particles into the same relative locations:
This, I think, would be the same as turning a computer on and off. I don’t have any reason to think I have a “soul” that would “escape” in this case, and I see no reason to differentiate a me made of the exact same particles as me from a me made from the exact same particles as me. In other words, a copy was never made. The re-assembled me is not a new instance—it is the original. I theorize that me1's experience would continue.
Putting the mass of particles into different locations:
This is sticky. If I have some of the same particles, but not all of them, is it me1? What if I have all of the same particles but they’re in different locations? That’s really, really sticky. This calls into question: What is experience? To answer this question, I have to ask “What is consciousness?”
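One way to make the instance question concrete is to take the “particles have identities” framing at face value (which the quantum-indistinguishability point raised earlier in this thread disputes) and represent an instantiation as a set of (particle, position) pairs. A minimal sketch, with made-up values:

```python
# Illustrative sketch only: this takes "particles have identities" at
# face value, which the physics point above disputes. Values are made up.

before = {("p1", (0, 0)), ("p2", (0, 1)), ("p3", (1, 0))}

# Re-assembled: same particles, same relative locations.
same_locations = {("p1", (0, 0)), ("p2", (0, 1)), ("p3", (1, 0))}

# Re-assembled: same particles, different locations.
rearranged = {("p1", (0, 1)), ("p2", (1, 0)), ("p3", (0, 0))}

print(before == same_locations)  # True: hard to call this a new instance
print(before == rearranged)      # False: is this still "me1"? Sticky.
```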
I have an idea. If we had enough technology to send a person’s entire pattern to a new location, surely it would require less bandwidth to send only their thoughts or commands to the remote location. Also there would be no risk of being damaged due to copying errors. A brainless body could be constructed there (either in the exact likeness of the person, or in a form designed to make optimal use of resources), and the original person could control it using a mind reading interface such that they experience what the remote avatar is experiencing.
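Some back-of-envelope arithmetic for the bandwidth claim, with every number a loud assumption: a synapse count on the order of 10^14 is commonly cited, while the bytes-per-synapse figure and the command-stream rate are pure guesses.

```python
# Back-of-envelope only: every number below is a rough assumption.

synapses = 1e14           # human brain, commonly cited order of magnitude
bytes_per_synapse = 10    # assume ~10 bytes to encode strength and state

full_pattern_bytes = synapses * bytes_per_synapse
print(full_pattern_bytes / 1e15, "PB to send the whole pattern once")  # 1.0

stream_bits_per_sec = 5e6  # assume the command stream is like a video call
seconds_per_day = 86400
daily_gigabytes = stream_bits_per_sec * seconds_per_day / 8 / 1e9
print(daily_gigabytes, "GB per day of remote control")  # 54.0
```

Under these assumptions, one full pattern transfer costs roughly as much as decades of the command stream.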
This would be more efficient and less risky, don’t you think?
It still doesn’t answer the sticky question of “Would my experience be continuous if my particles were disassembled and re-arranged?”, but I think it addresses the practical transportation problem behind all this (also, you’d likely get to inhabit a variety of avatars, which would be cool). Now, back to the original question:
If all of my particles were disassembled and re-arranged, would I have a continuous experience or not? I had been basing this on whether there would be a new instance or not. But this confuses me as to whether there’s a new instance, and makes me ask whether being disassembled and re-assembled exactly the same way might mean I lose continuous experience even if I am the same instance.
Maybe continuous instance != continuous experience.
So I have to answer the question of “What is continuous experience?” and “How does it work?”
Unfortunately, I see no way of testing for whether a consciousness is having a continuous experience, since it follows that new instances will pick up where previous instances left off, causing them to have the illusion of continuous experience, and disassembled instances will be dead and therefore incapable of responding about whether they’re having an experience. Not that I could test it anyway without a transporter, but this means I can’t imagine a scenario and reason out whether a disassembled instance of me would experience or not after being put back together exactly the same way.
Do you see a way to reason that out, or do you have a clarifying question we could ask?
Nope, that’s not what I’m saying at all. All of the Daves have the experience of life, and I absolutely do value it, which is why I press the button that I expect to create more of it.
No, that simply isn’t true. I will in fact experience the event (assuming I can get back from my work assignment in time, or assuming that my employer uses a nondestructive teleporter such that I can both experience the event and do my job).
No, sorry, I was unclear. The engine is going to overload in ten minutes, say, and the captain has the choice of transporting us off the ship before it explodes. Which, on your account, is not worth bothering with, since we’re going to be just as dead whether she does or not.
Sure. Given the choice of telecommuting this way, rather than teleporting my body back and forth, I would probably choose tele-operating a remote body, assuming the experience was comparable.
No, not really, especially since you’re in the habit of not answering the questions I do ask. Either way, though, no: I think you’ve created a confusion here that is unresolvable as long as you hold on to your belief that there is some essence of selfness (continuous experience, identity, real-me-ness, whatever) that is undetectable and unduplicatable but somehow still important.
Your model creates the possibility that I am not the person I was a moment ago and there’s simply no fact about the world that would resolve the question of whether that possibility is actual or not. This seems absurd to me: if nothing depends on it, I simply don’t care whether it’s true or not; if we insist that that is what it means to be “really me”, then I must accept that maybe I’m not “really me” and I’m OK with that.
What motivates you to link personal identity to your specific particles? Any two atoms of the same type are perfectly indistinguishable.
I haven’t touched on personal identity; for clarity, I’m not equating it with continuous experience, nor am I even equating continuous-instance distinctions with continuous experience at this point. (I guess I interpret “personal identity” either as “self” or as identity in the “identity theft” sense: the group of accounts and things like SSNs that institutions use to distinguish one person from another. I’m not using the term here, and I’m not sure what you mean by it.)
I’m not trying to figure out whether my “self” maps to certain particles. I feel sure that “self” is copy-able (though I haven’t formally defined self yet). However, I am separating self from continuous experience (as you can see in my Elements of Death comment).
What I am trying to do is to figure out whether the continuous experience of my current instance is linked to specific particles. The reason I am asking that question is made apparent in my transporter failure scenario.
Note to self: “I break (down) / break down / breakdown / brake down / brakedown for transporters!” all get zero Google results. Yay.
Now they don’t.
No, temporary unconsciousness is not the same thing as permanent unconsciousness; you perceive yourself to return to consciousness. The tricky part is unpacking the “you” in that sentence. Conventionally it unpacks to a conscious entity, but that clearly isn’t useful here because you (by any definition) aren’t continuously conscious for the duration. It could also unpack to about fifty to a hundred kilos of meat, but whether we’re talking about a transporter-clone or an ordinary eight hours of sleep, the meat that wakes up is not exactly the meat that goes unconscious. In any case, I’m having a hard time thinking of ways of binding a particular chunk of meat to a particular consciousness that end up being ontologically privileged without invoking something like a soul, which would strike me as wild speculation at best. So what does it unpack to?
It’s actually very tricky to pin down the circumstances which constitute death, i.e. permanent cessation of a conscious process, once you start thinking about things like Star Trek transporters and mind uploading. I don’t claim to have a perfect answer, but I strongly suspect that the question needs dissolving rather than answering as such.
It seems like you both die and live. It also seems like there end up being two different versions of you.
If the original is deleted immediately, I don’t think you die.
I think there’s no such mystery about pattern continuation. People just get confused when the word “identity” comes up. If these things really bother you, think about normal cases, like you now versus you tomorrow, and try to find a flaw in the argument.