Short version: I adjusted my sense of “self” until it included all my potential future selves. At that point, it becomes literally a matter of saving my life, rather than of being re-awakened one day.
It didn’t actually take much for me to take that leap when it came to cryonics. The trigger for me was “you don’t die and then get cryopreserved, you get cryopreserved as the last-ditch effort before you die”. I’m not suicidal; if you ask any hypothetical instance of me if they want to live, the answer is yes. By extending my sense of continuity into the not-quite-really-dead-yet instance of me, I can answer questions for that cryopreserved self: “Yes, of course I want you to perform the last-ditch operation to save my life!”
If you’re curious: My default self-view for a long time was basically “the continuity that led to me is me, and any forks or future copies/simulations aren’t me”, which tended toward a somewhat selfish view where I always viewed the hypothetical most in-control version (call it “CBH Alpha”) as myself. If a copy of me was created, “I” was simply whichever one I wanted to be (generally, the one responsible for choosing to create the new instance, or the one doing the thing that the pre-fork copy wanted to be doing). It took me a while to realize how little sense that made; I am always the continuity that led to me, and am therefore whatever instance of CBH you can hypothesize, and therefore I can’t pick and choose for myself. If anything that identifies itself as CBH can exist after any discontinuity from CBH Alpha, I am (and need to optimize for) all those selves.
This doesn’t mean I’m not OK with the idea of something like a transporter that causes me to cease to exist at one point and begin again at another point; the new instance still identifies as me, and therefore is me, and I need to optimize for him. The old instance no longer exists and doesn’t need to be optimized for. On the other hand, this does mean I’m not OK with the idea of a machine that duplicates me for the purpose of the duplicate dying, unless it’s literally a matter of saving an instance of myself; I would optimize for the benefit of all of me, not just for the one who pushed the button.
I’m not yet sure how I’d feel about a “transporter” which offered the option of destroying the original, but didn’t have to. The utility of such a thing is obviously so high that I would use it, and I’d probably default to destroying the original just because I don’t feel like I’m such a wonderful benefit to the world that there needs to be more of me (so long as there’s at least one). But when I reframe the question from “why would I want to not be transported (i.e. to go on experiencing life here instead of wherever I was being sent)?” to “why would I want to have fewer experiences than I could (i.e. only experience the destination of the transporter, instead of simultaneously experiencing both)?”, I feel like I’d want to keep the original. If we alter the scenario just slightly, such that the duplicate is created as a fork and the fork is then optionally destroyed, I don’t think I would ever choose destruction, except in a scenario along the lines of “painless disintegration or death by torture” where the torture wasn’t going to last long (no rescue opportunity) but would still involve a lot of pain.
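To make that reframing concrete, here’s a toy sketch (entirely my own framing, with made-up utility numbers): if “I” am the set of every instance that identifies as me, then the thing to maximize is total utility summed across all of them, not the utility of whichever instance pushes the button. On that accounting, keeping the original wins whenever both streams of experience are worth living.

```python
# Toy model, hypothetical numbers throughout: "self" is the set of all
# instances that identify as me, and the right action is the one that
# maximizes total utility across those instances.

def total_utility(instance_utilities):
    """Sum utility over every instance that identifies as 'me'."""
    return sum(instance_utilities)

# Hypothetical outcomes of the optional-destruction transporter:
outcomes = {
    "keep original":    [10, 10],  # original + destination: two experience streams
    "destroy original": [10],      # destination only
}

choice = max(outcomes, key=lambda name: total_utility(outcomes[name]))
print(choice)  # -> "keep original": strictly more total experience
```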
These ideas largely came about from various fiction I’ve read in the last few years. Some examples that come to mind:
“Explorers” by Alicorn (http://alicorn.elcenia.com/stories/explorers.shtml; her fiction first led me to discover LW, though this story is more recent than that)
Cory Doctorow’s short story To Go Boldly (http://www.amazon.com/exec/obidos/ASIN/0061562351/downandoutint-20)
The “Gavs” of Schlock Mercenary (big spoilers for a major plot arc if I talk about it in a lot of detail; just go read http://www.schlockmercenary.com/)
I remember going through a similar change in my sense of self after reading through particular sections of the sequences—specifically thinking that logically, I have to identify with spatially (or temporally) separated ‘copies’ of me. Unfortunately it doesn’t seem to help me in quite the same way it helps you deal with this dilemma. To me, it seems that if I am willing to press a button that will destroy me here and recreate me at my desired destination (which I believe I would be willing to do), the question of ‘what if the teleporter malfunctions and you don’t get recreated at your destination? Is that a bad thing?’ is almost without meaning, as there would no longer be a ‘me’ to evaluate the utility of such an event. I guess the core confusion is that I find it hard to evaluate states of the universe where I am not conscious.
As pointed out by Richard, this is probably even more absurd than I realise, as I am not ‘conscious’ of all my desires at all times, and thus I cannot go down this road of ‘if I do not currently care about something, does it matter?’. I have to reflect on this some more and see if I can internalise a more useful sense of what matters and when.
Thanks a lot for the fiction examples, I hope to read them and see if the ideas therein cause me to have one of those ‘click’ moments...
The first is a short story that is basically a “garden path” toward this whole idea, and it was a real jolt for me; you wonder why the narrator would be worried about the experiment going wrong, since she won’t be harmed regardless. That world-view gets turned on its ear at the end of the story.
The second is longer, but still a pretty short story; I didn’t see a version of it online independent of the novel-length collection it’s published in. It explores the Star Trek transporter idea, in greater detail and more rationally than Star Trek ever dared to do.
The third is a huuuuuuge comic archive (totally worth reading anyhow, but it’s been updating every single day for almost 15 years); the story arc in question is The Teraport Wars (http://www.schlockmercenary.com/2002-04-15), and the specific part starts about here: http://www.schlockmercenary.com/2002-06-20. Less “thinky” but funnier / more approachable than the others.
Although in your case it’s probably justified (you started off with very confused beliefs on the subject and noticed the mess they were in), as far as suggesting it to other people goes, I don’t understand how or why you’d want to go change a sense of self like that. If identity is even a meaningful thing to talk about, then there’s a true answer to the question “which beings can accurately be labelled ‘me’?”, and having the wrong belief about the answer to that question can mean you step on a transporter pad and are obliterated. If I believe that transporters are murder-and-clone machines, then I also believe that self-modifying to believe otherwise is suicidal.