First example: Let’s say that tomorrow I’ll decohere into two versions of me, version A and version B, with equal measure. Can you tell me whether I should now care only about what happens to version A, or only about version B?
No, you can’t, because you don’t know which branch I’ll “end up on” (in fact I don’t consider that statement meaningful, but even if it were, we still wouldn’t know which branch I’d end up on). So for now I have to care about those two future branches equally. Until I know which one I’ll “end up on”, I have no way to judge between them.
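In expected-value terms (reading “measure” as branch weight, which is a gloss added here, not something the argument above depends on), caring about the branches equally just means weighting them by their equal measures:

$$\mathbb{E}[U] \;=\; w_A\,U(A) + w_B\,U(B) \;=\; \tfrac{1}{2}\,U(A) + \tfrac{1}{2}\,U(B), \qquad w_A = w_B = \tfrac{1}{2}.$$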
Second example: Let’s say that tomorrow, instead of decohering via MWI physics, I’ll split into two versions of me, version U via uploading and version P via ordinary physics. Can you tell me in advance why I should now care only about version P and not about version U?
It seems to me that, as in the first example, I can’t know which of the two branches I’ll “end up on”. So for now I must care about the two future versions equally.
Yes, you’d care about P and not U, because there’s a chance you’d end up as P. There’s zero chance you’d end up as U.
Now tomorrow has come, and you ended up as one of the branches. How much do you care about the others you did not end up on?
In the case of MWI physics, I don’t care about the other copies at all, because they cannot interact with me or my universe in any way whatsoever. That is not true for other copies of myself I may make by uploading or other mechanisms. An upload will do the same things that I would do, will have the same goals I have, and will in all probability affect the universe in ways that I would approve of. None of that is true for an MWI copy.
This statement requires evidence or at least a coherent argument.

Actually, I think the burden of proof lies in the other direction. By what mechanism might you think that your subjective experience would carry over into the upload, rather than stay with your biological brain while the upload diverges as a separate individual? That’s the more extraordinary belief.
I think this is at least partially a bogus question/description. Let me break it up into pieces:
By what mechanism might you think that your subjective experience would carry over into the upload, rather than stay with your biological brain …
This postulates an ‘either/or’ scenario, which in my mind isn’t valid. A subjective experience carries over into the upload, and a subjective experience also stays in the biological brain. There isn’t a need for the subjective experience to have a ‘home’. It’s ok for there to be two subjective experiences, one in each location.
… rather than stay with your biological brain while the upload diverges as a separate individual?
Of course the upload diverges from the biological. Or rather, the biological diverges from the upload. This was never a question. Of course the two subjective experiences diverge over time.
And lastly:
By what mechanism might you think that your subjective experience would carry over into the upload …
By the sampling theorem, which separates the content from the substrate.
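To unpack that: the Nyquist–Shannon sampling theorem says a band-limited signal is completely determined by a discrete set of samples, so the same content can be carried, and perfectly rebuilt, on an entirely different substrate. A minimal Python sketch of the theorem itself (an illustration only, with made-up signal and rates; not a claim about brains):

```python
import numpy as np

# "Content": a band-limited signal (highest frequency 7 Hz).
def signal(t):
    return np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.cos(2 * np.pi * 7.0 * t)

fs = 20.0                 # sample rate, above the Nyquist rate of 2 * 7 Hz
n = np.arange(-400, 401)  # finite sample window (approximates the ideal case)
samples = signal(n / fs)  # the substrate-free description: just a list of numbers

# Whittaker-Shannon interpolation: rebuild the signal from the samples alone.
def reconstruct(t):
    return np.sum(samples * np.sinc(fs * t - n))

# The rebuilt copy matches the original to within truncation error.
for t in (0.0, 0.123, 0.5):
    print(f"t={t}: original={signal(t):+.6f}  rebuilt={reconstruct(t):+.6f}")
```

The point being gestured at: the samples, not the recording medium, carry the information.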
You are talking about something completely different. Can you describe to me what it feels like for someone to be nondestructively scanned for upload? What should someone walking into the clinic expect?
Sample scenario 1: I go to an upload clinic. They give me a coma-inducing drug and tell me that it will wear off in approximately 8 hours, after the scan is complete. As I drift off, I expect a 50% chance that I will awake to find myself an upload, and a 50% chance that I will awake to find myself still stuck in a meat body.
Sample scenario 2: I go to an upload clinic. They tell me the machine is instantaneous, that I will be conscious for the scan, and that the uploaded copy will be fully tested and operational in virtual form in about an hour. I step into the machine. I expect with 50% probability that I will step out of the machine after the scan, not feeling particularly different, and that an hour later I’ll be able to talk to my virtual upload in the machine. I also expect with 50% probability that I will find myself spontaneously in virtual form the instant after the scan completes, and that when I check the clock, an hour or more of real time will have passed even though it felt instantaneous to me.
(Waking up as an upload in scenario 2 doesn’t seem much different from being put under for surgery, at least based on my experiences. You’re talking, then suddenly everything is in a different place, the anaesthesiologist is asking “can you tell me your name” and interrupting your train of thought, half an hour has passed, and the doctor has totally lost track of the conversation right when it was getting interesting.)
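The 50% figures above follow from one particular assumption: that pre-scan anticipation should be spread evenly over all successors who wake up remembering the scan. A toy simulation of scenario 2 under that assumption (which is exactly the premise the next reply disputes):

```python
import random

# Scenario 2 under a successor-counting model: each scan yields two
# successors of the person who stepped in, one biological, one virtual.
# Spreading anticipation uniformly over successors gives the 50/50 split.
N = 100_000
woke_as_upload = sum(
    random.choice(("biological", "upload")) == "upload" for _ in range(N)
)
print(f"fraction of runs waking as the upload: {woke_as_upload / N:.3f}")  # ~0.500
```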
OK, I understand your position. It is not impossible that what you describe is reality. However, I believe it depends on a model of consciousness / subjective experience / personal identity (as I have been using those terms) which has not been definitively shown to be true. There are other plausible models which would predict with certainty that you would walk out of the machine and not wake up in the simulator. Since (I believe) we do not yet know enough to say with certainty which theory is correct, the conservative, dare I say rational, way to proceed is to make choices which come out favorably under both models.
However, in the case of destructive uploading vs. revival in cryonics we can go further: under no model is it better to upload than to revive. This is analogous to scenario 2, where the patient has (on your model) only a 50% chance of ending up in the simulation rather than the morgue. If I’m right, he or she has a 0% chance of success; if you are right, that same person has a 50% chance of success. Personally, I’d take revival, with a 100% chance of success under both models (modulo the chance of losing identity during the vitrification process anyway).
Nothing I said implied a “50% chance of ending up in the simulation vs. the morgue”. In the scenario where destructive uploading is used, I would expect to walk into the uploading booth and wake up as an upload with ~100% probability, not 50%. Are you sure you understand my position? Signs point to no.
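For what it’s worth, the dominance argument two comments up can be made explicit without settling this disagreement about the probability. A toy sketch (the credence values are placeholders, not anyone’s actual estimates):

```python
# Toy dominance check for "choices which come out favorably under both models".
# p = credence that the pattern/computational model of identity is correct
# (under which a destructive upload is a ~100% continuation, per the reply above);
# under the rival biological-continuity model it is a 0% continuation.
def survival_destructive_upload(p):
    return p * 1.0 + (1 - p) * 0.0

# Biological revival counts as survival under both models
# (setting aside the technical risks of vitrification itself).
def survival_revival(p):
    return 1.0

for p in (0.1, 0.5, 0.9):
    print(f"p={p}: upload={survival_destructive_upload(p):.2f}, "
          f"revival={survival_revival(p):.2f}")
```

On these assumptions revival weakly dominates destructive uploading whatever credence one assigns to the pattern model, which is the “comes out favorably under both models” point.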
Why are you saying that there’s zero chance I’d end up as U? If you don’t answer this question, of why you believe there’s no chance of ending up as the upload, what’s the point of writing a single other word in response?

I see no meaningful difference between the first and second examples. Tell me what the difference is that makes you believe there’s no chance I’ll end up as version U.