Curious—would you retain this belief if uploading actually happened, the uploaded consciousnesses felt continuity, and external observers could tell no difference between the uploaded consciousnesses and the original consciousnesses?
(Because if so, you can just have an “only if it works for others may you upload me” clause)
To whom are you asking the question? I’d be dead. That computer program running a simulation of me would be a real person, yes, with all associated moral implications. It’d even think and behave like me. But it wouldn’t be me—a direct continuation of my personal identity—any more than my twin brother or any of the multiverse copies of “me” are actually me. If my brain were still functioning at all, I’d be cursing the technicians as they ferry my useless body from the uploader to the crematorium. Then I’d be dead while some digital doppelgänger takes over my life.
Do you see? This isn’t about whether uploading works or not. Uploading when it works creates a copy of me. It will not continue my personal existence. We can be sure of this, right now.
On what grounds do you believe that the person who wrote that comment is the same person who is reading this response?
I mean, I assume that the person reading this response thinks and behaves like the same person (more or less), and that it remembers having been the person who wrote the comment, but that’s just thought and behavior and memory, and on your account those things don’t determine identity.
So, on your account, what does determine identity? What observations actually constitute evidence that you’re the same person who wrote that comment? How confident are you that those things are more reliable indicators of shared identity than thought and behavior and memory?
On what grounds do you believe that the person who wrote that comment is the same person who is reading this response?
By examining the history of interactions which occurred between the two states.
How confident are you that those things are more reliable indicators of shared identity than thought and behavior and memory?
Because it is very easy to construct thought experiments which show that thought, behavior, and memory are not sufficient for making a determination. For example, imagine a non-destructive sci-fi teleporter. The version of you I’m talking to right now walks into the machine, sees some flashing lights, and then walks out. Some time later another Dave walks out of a similar machine on Mars. Now step back a moment in time. Before walking into the machine, what experience do you expect to have after: (1) walking back out or (2) waking up on Mars?
By examining the history of interactions which occurred between the two states.
Well, yes, but what are you looking for when you do the examination?
That is, OK, you examine the history, and you think “Well, I observe X, and I don’t observe Y, and therefore I conclude identity was preserved.” What I’m trying to figure out is what X and Y are.
Before walking into the machine, what experience do you expect to have after: (1) walking back out or (2) waking up on Mars?
With 50% probability, I expect to walk back out, and with 50% probability I expect to wake up on Mars. Both copies will feel like, and believe that, they are the ‘original’.
But you expect one or the other, right? In other words, you don’t expect to experience both futures, correct?
Now what if the replicator on Mars gets stuck and starts continuously outputting Dentins? What is your probability of staying on Earth now?
Further, doesn’t it seem odd that you are assigning any probability that after a non-invasive scan, and while your brain and body continue to operate just fine on Earth, you suddenly find yourself on Mars, and someone else takes over your life on Earth?
What is the mechanism by which you expect your subjective experience to be transferred from Earth to Mars?
Not Dentin, but since I gave the same answer above I figured I’d weigh in here.
you expect one or the other, right? In other words, you don’t expect to experience both futures, correct?
I expect to experience both futures, but not simultaneously.
Somewhat similarly, if you show me a Necker cube, do I expect to see a cube whose front face points down and to the left? Or a cube whose front face points up and to the right? Well, I expect to see both. But I don’t expect to see both at once… I’m not capable of that.
(Of course, the two situations are not the same. I can switch between views of a Necker cube, whereas after the duplication there are two mes each tied to their own body.)
what if the replicator on Mars gets stuck [..] What is your probability of staying on Earth now?
I will stay on Earth, with a probability that doesn’t change. I will also appear repeatedly on Mars.
doesn’t it seem odd that you are assigning any probability that after a non-invasive scan, and while your brain and body continue to operate just fine on Earth, you suddenly find yourself on Mars,
Well, sure, in the real world it seems very odd to take this possibility seriously. And, indeed, it never seems to happen, so I don’t take it seriously… I don’t in fact expect to wake up on Mars.
But in the hypothetical you’ve constructed, it doesn’t seem odd at all… that’s what a nondestructive teleporter does.
and someone else takes over your life on Earth?
(shrug) In ten minutes, someone will take over my life on Earth. They will resemble me extremely closely, though there will be some small differences. I, as I am now, will no longer exist. This is the normal, ordinary course of events; it has always been like this.
I’m comfortable describing that person as me, and I’m comfortable describing the person I was ten minutes ago as me, so I’m comfortable saying that I continue to exist throughout that 20-minute period. I expect me in 10 minutes to be comfortable describing me as him.
If in the course of those ten minutes, I am nondestructively teleported to Mars, someone will still take over my life on Earth. Someone else, also very similar but not identical, will take over my life on Mars. I’m comfortable describing all of us as me. I expect both of me in 10 minutes to be comfortable describing me as them.
That certainly seems odd, but again, what’s odd about it is the nondestructively teleported to Mars part, which the thought experiment presupposes.
What is the mechanism by which you expect your subjective experience to be transferred from Earth to Mars?
It will travel along with my body, via whatever mechanism allows that to be transferred. (Much as my subjective experience travels along with my body when I drive a car or fly cross-country.) It would be odd if it did anything else.
No, I would never expect to simultaneously experience being on both Mars and Earth. If you find anyone who believes that, they are severely confused, or are trolling you.
If I know the replicator will get stuck and output 99 Dentins on Mars, I would only expect a 1% chance of waking up on Earth. If I’m told that it will only output one copy, I would expect a 50% chance of waking up on Earth, only to find out later that the actual probability was 1%. The map is not the territory.
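A minimal sketch of the copy-counting arithmetic behind those numbers, assuming every post-scan copy gets equal weight (the helper name and the equal-weighting rule are my own illustration, not anything established in this thread):

    def subjective_odds(earth_copies, mars_copies):
        # Hypothetical helper: weight every post-scan copy equally (a contested assumption).
        total = earth_copies + mars_copies
        return {"Earth": earth_copies / total, "Mars": mars_copies / total}

    print(subjective_odds(1, 1))   # ordinary non-destructive teleport: {'Earth': 0.5, 'Mars': 0.5}
    print(subjective_odds(1, 99))  # stuck replicator, 99 Dentins: {'Earth': 0.01, 'Mars': 0.99}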
Further, doesn’t it seem odd that you are assigning any probability that after a non-invasive scan, and while your brain and body continue to operate just fine on Earth, you suddenly find yourself on Mars, and someone else takes over your life on Earth?
Not at all. In fact, it seems odd to me that anyone would be surprised to end up on Mars.
What is the mechanism by which you expect your subjective experience to be transferred from Earth to Mars?
Because consciousness is how information processing feels from the inside, and ‘information processing’ has no intrinsic requirement that the substrate or cycle times be continuous.
If I pause a playing wave file, copy the remainder to another machine, and start playing it out, it still plays music. It doesn’t matter that the machine is different, that the decoder software is different, that the audio transducers are different—the music is still there.
Another, closer analogy is that of the common VM: it is possible to stop a VPS (virtual private server), including operating system, virtual disk, and all running programs, take a snapshot, copy it entirely to another machine halfway around the planet, and restart it on that other machine as though there were no interruption in processing. The VPS may not even know that anything has happened, other than suddenly its clock is wrong compared to external sources. The fact that it spent half an hour ‘suspended’ doesn’t affect its ability to process information one whit.
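A toy sketch of that suspend/copy/resume idea, assuming a running process whose entire state can be serialized (the Process class here is made up purely for illustration; a real VM migration obviously involves far more machinery):

    import pickle, time

    class Process:
        """Toy stand-in for a running VM: all relevant state lives in its fields."""
        def __init__(self):
            self.count = 0
            self.internal_clock = 0.0
        def step(self):
            self.count += 1
            self.internal_clock += 0.001  # the process's own notion of elapsed time

    proc = Process()
    for _ in range(1000):
        proc.step()

    snapshot = pickle.dumps(proc)     # take a snapshot of the full state
    time.sleep(0.5)                   # the time spent 'suspended'
    resumed = pickle.loads(snapshot)  # restore it, as if on another machine

    for _ in range(1000):
        resumed.step()

    print(resumed.count)              # 2000: processing picks up exactly where it left off
    print(resumed.internal_clock)     # ~2.0: the gap is invisible except to an external clock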
There were two ways to interpret your statement—that uploads won’t be identical human beings (an empirical statement) vs. that uploads will disrupt your continuity (a philosophical statement).
I was just wondering which one it was. I’m interested in hearing arguments against uploading.
-How do you know right now that you are a continuity of the being that existed one-hour-in-the-past, and that the being that exists one-hour-in-the-future will be in continuity with you?
-Would you ever step into a sci-fi style teleporter?
-Cryonics constitutes “pausing” and “resuming” yourself. How is this sort of temporal discontinuity different from the spatial discontinuity involved in teleporting?
There were two ways to interpret your statement—that uploads won’t be identical human beings (an empirical statement) vs. that uploads will disrupt your continuity (a philosophical statement).
The latter, but they are both empirical questions. The former deals with comparing informational configurations at two points in time, whereas the latter is concerned with the history of how we went from state A to state B (both having real-world implications).
How do you know right now that you are a continuity of the being that existed one-hour-in-the-past, and that the being that exists one-hour-in-the-future will be in continuity with you?
We need more research on the physical basis for consciousness to understand this better, such that we can properly answer the question. Right now all we have is the fleeting experience of continued identity moment to moment, and the induction principle, which is invalid to apply to singular events like destructive uploading.
My guess as to the underlying nature of the problem is that consciousness exists in any complex interaction of particles—not the pattern itself, but the instantiation of the computation. And so long as this interaction is continuous and ongoing we have a physical basis for the continuation of subjective experience.
Would you ever step into a sci-fi style teleporter?
Never, for the same reasons.
Cryonics constitutes “pausing” and “resuming” yourself. How is this sort of temporal discontinuity different from the spatial discontinuity involved in teleporting?
Pausing is a metaphor. You can’t freeze time, and chemistry never stops entirely. The particles in a cryonic patient’s brain keep interacting in complex, albeit much slowed-down, ways. Recall that the point of pumping the brain full of anti-freeze is that it remains intact and structurally unmolested even at liquid nitrogen temperatures. It is likely that some portion of biological activity is ongoing in cryostasis, albeit at a glacial pace. This may or may not be sufficient for continuity of experience, but unlike with uploading, the probability is at least not zero.
BTW the problem with teleporting is not spatial or temporal. The problem is that the computational process which is the subjective experience of the person being teleported is interrupted. The machine violently disassembles them and they die; then somewhere else a clone/copy is created. If you have trouble seeing that, imagine that the process is not destructive. You step into the teleporter, it scans you, and then you step out. I then shoot you in the head with a gun. The teleporter then reconstructs a copy of you. Do you really think that you, the person I just shot in the head and who is now splattered all over the floor, get to experience walking out of the teleporter as a copy? If you’re still having trouble, imagine that the teleporter got stuck in a loop and kept outputting copies. Which one is you? Which one do you expect to “wake up” as at the other end of the process?
You step into the teleporter, it scans you, and then you step out. I then shoot you in the head with a gun. The teleporter then reconstructs a copy of you. Do you really think that you, the person I just shot in the head and who is now splattered all over the floor, get to experience walking out of the teleporter as a copy? If you’re still having trouble, imagine that the teleporter got stuck in a loop and kept outputting copies. Which one is you? Which one do you expect to “wake up” as at the other end of the process?
My current thought on the matter is that Ishaan0 stepped into the teleporter, Ishaan1a stepped out of the teleporter, and Ishaan1b was replicated by the teleporter.
At time 2, Ishaan2a was shot, and Ishaan2b survived.
Ishaan0 → Ishaan1a → Ishaan2a just died.
Ishaan0 → Ishaan1b → Ishaan2b → Ishaan3b → … gets to live on.
So Ishaan0 can be said to have survived, whereas Ishaan1a has died.
Right now all we have is the fleeting experience of continued identity moment to moment
The way I see it, my past self is “dead” in every respect other than that my current self exists and contains memories of that past self.
I don’t think there is anything fundamental saying we ought to be able to have “expectations” about our future subjective experiences, only “predictions” about the future.
Meaning, if Ishaan0 had a blindfold on, then at time 1, when I step out of the teleporter, I would have memories which indicate that my current qualia qualify me to be in the position of either Ishaan1a or Ishaan1b. When I take my blindfold off, I find out which one I am.
The problem is that the computational process which is the subjective experience of the person being teleported is interrupted.
It sounds to me like you’re ascribing some critical, necessary aspect of consciousness to the ‘computation’ that occurs between states, as opposed to the presence of the states themselves.
It strikes me as similar to the ‘sampling fallacy’ of analog audio enthusiasts, who constantly claim that digitization of a recording is by definition lossy because a discrete stream cannot contain all the data needed to reconstruct a continuous waveform.
It sounds to me like you’re ascribing some critical, necessary aspect of consciousness to the ‘computation’ that occurs between states, as opposed to the presence of the states themselves.
Absolutely (although I don’t see the connection to analog audio). Is a frozen brain conscious? No. It is the dynamic response of the brain from which the subjective experience of consciousness arises.
The connection to analog audio seems obvious to me: a digitized audio file contains no music; it contains only discrete samples taken at various times, samples which when played out properly generate music. An upload file containing the recording of a digital brain contains no consciousness, but is conscious when run, one cycle at a time.
A sample is a snapshot of an instant of music; an upload is a snapshot of consciousness. Playing out a large number of samples creates music; running an upload forward in time creates consciousness. In the same way that a frozen brain isn’t conscious but an unfrozen, running brain is—an uploaded copy isn’t conscious, but a running, uploaded copy is.
That’s the point I was trying to get across. The discussion of samples and states is important because you seem to have this need for transitions to be ‘continuous’ for consciousness to be preserved—but the sampling theorem explicitly says that’s not necessary. There’s no ‘continuous’ transition between two samples in a wave file, yet the original can still be reconstructed perfectly. There may not be a continuous transition between a brain and its destructively uploaded copy—but the original and ‘continuous transition’ can still be reconstructed perfectly. It’s simple math.
As a direct result of this, it seems pretty obvious to me that consciousness doesn’t go away because there’s a time gap between states or because the states happen to be recorded on different media, any more than breaking a wave file into five thousand non-contiguous sectors on a hard disk platter destroys the music in the recording. Pretty much the only escape from this is to use a mangled definition of consciousness which requires ‘continuous transition’ for no obvious good reason.
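The signal-reconstruction claim itself is easy to check numerically. A small sketch, assuming an arbitrary 3 Hz test tone sampled at 100 Hz (this only illustrates the sampling theorem; it settles nothing about consciousness):

    import numpy as np

    fs = 100.0                                  # sample rate, well above Nyquist for a 3 Hz tone
    n = int(fs * 4.0)                           # 4 seconds of samples
    t_samples = np.arange(n) / fs
    x_samples = np.sin(2 * np.pi * 3.0 * t_samples)   # band-limited "recording"

    # Whittaker-Shannon (sinc) reconstruction on a dense grid, away from the window edges.
    t_dense = np.linspace(1.5, 2.5, 400)
    recon = np.array([np.sum(x_samples * np.sinc(fs * t - np.arange(n))) for t in t_dense])
    original = np.sin(2 * np.pi * 3.0 * t_dense)

    print(np.max(np.abs(recon - original)))     # small (limited only by the finite window)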
I’m not saying it goes away, I’m saying the uploaded brain is a different person, a different being, a separate identity from the one that was scanned. It is conscious, yes, but it is not me in the sense that if I walk into an uploader I expect to walk out again in my fleshy body. Maybe that scan is then used to start a simulation from which arises a fully conscious copy of me, but I don’t expect to directly experience what that copy experiences.
The uploaded brain is a different person, a different being, a separate identity from the one that was scanned. It is conscious, yes, and it is me in the sense that I expect with high probability to wake up as an upload and watch my fleshy body walk out of the scanner under its own power.
Of course I wouldn’t expect the simulation to experience the exact same things as the meat version, or expect to experience both copies at the same time. Frankly, that’s an idiotic belief; I would prefer you not bring it into the conversation in the future, as it makes me feel like you’re intentionally trolling me. I may not believe what you believe, but even I’m not that stupid.
Uploading when it works creates a copy of me. It will not continue my personal existence.
I honestly don’t know how “copy” is distinct from “continuation” on a physical level and/or in regards to ‘consciousness’/‘personal existence’.
If the MWI is correct, every moment I am copied into a billion versions of myself. Even if it’s wrong, every moment I can be said to be copied to a single future version of myself. Both of these can be seen as ‘continuations’ rather than ‘copies’. Why would uploading be different?
Mind you, I’m not saying it necessarily isn’t—but I understand too little about consciousness to argue about it definitively and with the certainty you claim, one way or another.
If the MWI is correct, every moment I am copied into a billion versions of myself. Even if it’s wrong, every moment I can be said to be copied to a single future version of myself. Both of these can be seen as ‘continuations’ rather than ‘copies’. Why would uploading be different?
It’s not any different, and that’s precisely the point. Do you get to experience what your MWI copies are doing? Does their existence in any way benefit you, the copy which is reading this sentence? No? Why should you care if they even exist at all? So it goes with uploading. That person created by uploading will not be you any more than some alternate dimension copy is you. From the outside I wouldn’t be able to tell the difference, but for you it would be very real: you, the person I am talking to right now, will die, and some other sentient being with your implanted memories will take over your life. Personally I don’t see the benefit of that, especially when it is plausible that other choices (e.g. revival) might lead to continuation of my existence in the way that uploading does not.
Do you get to experience what your MWI copies are doing?
Uh, the present me is experiencing none of the future. I will “get to experience” the future only via all the future copies of me that have a remembered history that leads back to the present me.
Does their existence in any way benefit you, the copy which is reading this sentence? No? Why should you care if they even exist at all?
If none of the future mes exist, then that means I’m dead. So of course I care because I don’t want to die?
I think we’re suffering from a misunderstanding here. The MWI future copy versions of me are not something that exist in addition to the ordinary future me, they are the ordinary future me. All of them are, though each of them has only one remembered timeline.
That person created by uploading will not be you any more than some alternate dimension copy is you.
Or “that person created by uploading will be as much me as any future version of me is me”.
I’m a physicist; I understand MWI perfectly well. Each time we decohere we end up on one branch and not the others. Do you care at all what happens on the others? If you do, fine, that’s very altruistic of you.
Let me try again.
First example: Let’s say that tomorrow I’ll decohere into 2 versions of me, version A and version B, with equal measure. Can you tell me now whether I should care only about what happens to version A, or only about what happens to version B?
No, you can’t. Because you don’t know which branch I’ll “end up on” (in fact I don’t consider that statement meaningful, but even if it were meaningful, we wouldn’t know which branch I’d end up on). So now I have to care about those two future branches equally. Until I know which one of these I’ll “end up on”, I have no way to judge between them.
Second example: Let’s say that tomorrow, instead of decohering via MWI physics, I’ll split into 2 versions of me, version U via uploading and version P via ordinary physics. Can you tell me in advance why I should now care only about version P and not about version U?
Seems to me that, as in the first example, I can’t know which of the two branches I’ll “end up on”. So now I must care about the two future versions equally.
Let’s say that tomorrow, instead of decohering via MWI physics, I’ll split into 2 versions of me, version U via uploading and version P via ordinary physics. Can you tell me in advance why I should now care only about version P and not about version U?
Yes, you’d care about P and not U, because there’s a chance you’d end up on P. There’s zero chance you’d end up as U.
Seems to me that, as in the first example, I can’t know which of the two branches I’ll “end up on”. So now I must care about the two future versions equally.
Now tomorrow has come, and you ended up as one of the branches. How much do you care about the others you did not end up on?
Now tomorrow has come, and you ended up as one of the branches. How much do you care about the others you did not end up on?
In the case of MWI physics, I don’t care about the other copies at all, because they cannot interact with me or my universe in any way whatsoever. That is not true for other copies of myself I may make by uploading or other mechanisms. An upload will do the same things that I would do, will have the same goals I have, and will in all probability do things that I would approve of, things which affect the universe in a way that I would probably approve of. None of that is true for an MWI copy.
This statement requires evidence or at least a coherent argument.
Actually, I think the burden of proof lies in the other direction. By what mechanism might you think that your subjective experience would carry over into the upload, rather than stay with your biological brain while the upload diverges as a separate individual? That’s the more extraordinary belief.
I think this is at least partially a bogus question/description. Let me break it up into pieces:
By what mechanism might you think that your subjective experience would carry over into the upload, rather than stay with your biological brain …
This postulates an ‘either/or’ scenario, which in my mind isn’t valid. A subjective experience carries over into the upload, and a subjective experience also stays in the biological brain. There isn’t a need for the subjective experience to have a ‘home’. It’s OK for there to be two subjective experiences, one in each location.
… rather than stay with your biological brain while the upload diverges as a separate individual?
Of course the upload diverges from the biological. Or rather, the biological diverges from the upload. This was never a question. Of course the two subjective experiences diverge over time.
And lastly:
By what mechanism might you think that your subjective experience would carry over into the upload …
By the sampling theorem, which separates the content from the substrate.
You are talking about something completely different. Can you describe to me what it feels like for someone to be nondestructively scanned for upload? What should someone walking into the clinic expect?
Sample scenario 1: I go to an upload clinic. They give me a coma-inducing drug and tell me that it will wear off in approximately 8 hours, after the scan is complete. As I drift off, I expect a 50% chance that I will awake to find myself an upload, and a 50% chance that I will awake to find myself still stuck in a meat body.
Sample scenario 2: I go to an upload clinic. They tell me the machine is instantaneous and I will be conscious for the scan, and that the uploaded copy will be fully tested and operational in virtual form in about an hour. I step into the machine. I expect with 50% probability that I will step out of the machine after the scan, not feeling particularly different, and that an hour later I’ll be able to talk to my virtual upload in the machine. I also expect with 50% probability that I will find myself spontaneously in virtual form the instant after the scan completes, and that when I check the clock, an hour or more of real time will have passed even though it felt instantaneous to me.
(Waking up as an upload in scenario 2 doesn’t seem much different from being put under for surgery to me, at least based on my experiences. You’re talking, then suddenly everything is in a different place, the anesthesiologist is asking ‘can you tell me your name’, interrupting your train of thought, half an hour has passed, and the doctor has totally lost track of the conversation right when it was getting interesting.)
OK, I understand your position. It is not impossible that what you describe is reality. However, I believe that it depends on a model of consciousness / subjective experience / personal identity (as I have been using those terms) which has not definitively been shown to be true. There are other plausible models which would predict with certainty that you would walk out of the machine and not wake up in the simulator. Since (I believe) we do not yet know enough to say with certainty which theory is correct, the conservative, dare I say rational, way to proceed is to make choices which come out favorably under both models.
However, in the case of destructive uploading vs. revival in cryonics we can go further. Under no model is it better to upload than to revive. This is analogous to scenario #2, where the patient has (in your model) only a 50% chance of ending up in the simulation vs. the morgue. If I’m right, he or she has a 0% chance of success. If you are right, then that same person has a 50% chance of success. Personally I’d take revival, with a 100% chance of success in both models (modulo the chance of losing identity during the vitrification process).
Nothing I said implied a ’50% chance of ending up in the simulation vs. the morgue’. In the scenario where destructive uploading is used, I would expect to walk into the uploading booth, and wake up as an upload with ~100% probability, not 50%. Are you sure you understand my position? Signs point to no.
Yes, you’d care about P and not U, because there’s a chance you’d end up on P. There’s zero chance you’d end up as U.
Why are you saying that? If you don’t answer this question (why you believe there’s no chance of ending up as the upload), what’s the point of writing a single other word in response?
I see no meaningful difference between first and second example. Tell me what the difference is that makes you believe that there’s no chance I’ll end up as version U.
The copy will remember writing this, and will feel pretty strongly that it’s a continuation of you.
So? So will all the other Everett branches distinct from me. So would some random person implanted with my memories. I don’t care what it thinks or feels; what I care about is whether it actually is a direct continuation of me.
I’m sorry to hear that. It’s unfortunate for you, and really limits your options.
In my case, uploading does continue my personal existence, and uploading in my case is a critical aspect of getting enough redundancy in my self to survive black swan random events.
Regarding your last sentence, “We can be sure of this, right now”, what are you talking about exactly?
I mean we can do thought experiments which show pretty convincingly that I should not expect to experience the other end of uploading.
What might those thought experiments be? I have yet to hear any convincing ones.
The teleporter arguments we’ve already been discussing, and variants.