How do people who sign up to cryonics, or want to sign up to cryonics, get over the fact that if they died, there would no longer be a mind there to care about being revived at a later date? I don’t know how much of this is morbid rationalisation on my part, just because signing up to cryonics in the UK seems not quite as reliable or easy as in the US, but it still seems like a real issue to me.
Obviously, when I’m awake, I enjoy life, and want to keep enjoying life. I make plans for tomorrow, and want to be alive tomorrow, despite the fact that in between there will be a time (during sleep) when I will no longer care about being alive tomorrow. But if I were killed in my sleep, at no point would I be upset—I would be unaware of it beforehand, and my mind would no longer be active to care about anything afterwards.
I’m definitely confused about this. I think the central confusion is something like: why should I be willing to spend effort and money at time A to ensure I am alive at time C, when I know that I will not care at all about this at an intermediate time B?
I’m pretty sure I’d be willing to pay a certain amount of money every evening to lower some artificial probability of being killed while I slept. So why am I not similarly willing to pay a certain amount to increase the chance I will awaken from the Dreamless Sleep? Does anyone else think about this before signing up for cryonics?
Say you’re undergoing surgery, and as part of it they use a kind of sedation where your mind completely stops: not just cut off from input from the outside world, but no brain activity whatsoever. Once you’re sedated, is there any moral reason to finish the surgery?
Say we can run people on computers and can start and stop them at any moment, but available power fluctuates. So we come up with a system where, when power drops, we pause some of the people and restore them once there’s power again. Once we’ve stopped someone, is there a moral reason to start them again?
My resolution to both of these cases is that I apparently care about people getting the experience of living. People dying matters in that they lose the potential for future enjoyment of living, their friends lose the enjoyment of their company, and expectation of death makes people enjoy life less. This makes death different from brain-stopping surgery, emulation pausing, and also cryonics.
(But I’m not signed up for cryonics because I don’t think the information would be preserved.)
Thinking about it this way also makes me realise how weird it feels to have different preferences for myself as opposed to other people. It feels obvious to me that I would prefer to have other humans not cease to exist in the ways you described. And yet for myself, because of the lack of a personal utility function when I’m unconscious, it seems like the answer could be different—if I cease to exist, others might care, but I won’t (at the time!).
Maybe one way to think about it more realistically is not to focus on what my preferences will be then (since I won’t exist), but on what my preferences are now, and somehow extend that into the future regardless of the existence of a personal utility function at that future time...
Thanks for the help!
Short version: I adjusted my sense of “self” until it included all my potential future selves. At that point, it becomes literally a matter of saving my life, rather than of being re-awakened one day.
It didn’t actually take much for me to take that leap when it came to cryonics. The trigger for me was “you don’t die and then get cryopreserved, you get cryopreserved as the last-ditch effort before you die”. I’m not suicidal; if you ask any hypothetical instance of me if they want to live, the answer is yes. By extending my sense of continuity into the not-quite-really-dead-yet instance of me, I can answer questions for that cryopreserved self: “Yes, of course I want you to perform the last-ditch operation to save my life!”
If you’re curious: My default self-view for a long time was basically “the continuity that led to me is me, and any forks or future copies/simulations aren’t me”, which tended toward a somewhat selfish view where I always viewed the hypothetical most in-control version (call it “CBH Alpha”) as myself. If a copy of me was created, “I” was simply whichever one I wanted to be (generally, the one responsible for choosing to create the new instance, or the one doing the thing that the pre-fork copy wanted to be doing). It took me a while to realize how little sense that made; I am always the continuity that led to me, and am therefore whatever instance of CBH you can hypothesize, and therefore I can’t pick and choose for myself. If anything that identifies itself as CBH can exist after any discontinuity from CBH Alpha, I am (and need to optimize for) all of those selves.
This doesn’t mean I’m not OK with the idea of something like a transporter that causes me to cease to exist at one point and begin again at another point; the new instance still identifies as me, and therefore is me and I need to optimize for him. The old instance no longer exists and doesn’t need to be optimized for. On the other hand, this does mean I’m not OK with the idea of a machine that duplicates myself for the purpose of the duplicate dying, unless it’s literally a matter of saving any instance of myself; I would optimize for the benefit of all of me, not just for the one who pushed the button.
I’m not yet sure how I’d feel about a “transporter” which offered the option of destroying the original, but didn’t have to. The utility of such a thing is obviously so high that I would use it, and I’d probably default to destroying the original just because I don’t feel I’m such a wonderful benefit to the world that there needs to be more of me (so long as there’s at least one). But when I reframe the question from “why would I want to not be transported (i.e. to go on experiencing life here instead of wherever I was being sent)?” to “why would I want to have fewer experiences than I could (i.e. only experience the destination of the transporter, instead of simultaneously experiencing both)?”, I feel like I’d want to keep the original. If we alter the scenario just slightly, such that the duplicate is created as a fork and the fork is then optionally destroyed, I don’t think I would ever choose destruction, except in a scenario along the lines of “painless disintegration or death by torture” where the torture wasn’t going to last long (so no rescue opportunity) but I’d still experience a lot of pain.
These ideas largely came about from various fiction I’ve read in the last few years. Some examples that come to mind:
“Explorers” by Alicorn (http://alicorn.elcenia.com/stories/explorers.shtml ; her fiction first led me to discover LW, though this story is more recent than that)
Cory Doctorow’s short story To Go Boldly (http://www.amazon.com/exec/obidos/ASIN/0061562351/downandoutint-20)
The “Gavs” of Schlock Mercenary (big spoilers for a major plot arc if I talk about it in a lot of detail; just go read http://www.schlockmercenary.com/)
I remember going through a similar change in my sense of self after reading through particular sections of the sequences—specifically, thinking that logically I have to identify with spatially (or temporally) separated ‘copies’ of me. Unfortunately it doesn’t seem to help me in quite the same way it helps you deal with this dilemma. To me, it seems that if I am willing to press a button that will destroy me here and recreate me at my desired destination (which I believe I would be willing to do), the question ‘what if the teleporter malfunctions and you don’t get recreated at your destination? Is that a bad thing?’ is almost without meaning, as there would no longer be a ‘me’ to evaluate the utility of such an event. I guess the core confusion is that I find it hard to evaluate states of the universe where I am not conscious.
As pointed out by Richard, this is probably even more absurd than I realise, as I am not ‘conscious’ of all my desires at all times, and thus I cannot go on this road of ‘if I do not currently care about something, does it matter?’. I have to reflect on this some more and see if I can internalise a more useful sense of what matters and when.
Thanks a lot for the fiction examples, I hope to read them and see if the ideas therein cause me to have one of those ‘click’ moments...
The first is a short story that is basically a “garden path” toward this whole idea, and was a real jolt for me; you wonder why the narrator would be worried about this experiment going wrong, because she won’t be harmed regardless. That world-view gets turned on its ear at the end of the story.
The second is longer, but still a pretty short story; I didn’t see a version of it online independent of the novel-length collection it’s published in. It explores the Star Trek transporter idea, in greater detail and more rationally than Star Trek ever dared to do.
The third is a huuuuuuge comic archive (totally worth reading anyhow, but it’s been updating every single day for almost 15 years); the story arc in question is The Teraport Wars ( http://www.schlockmercenary.com/2002-04-15 ), and the specific part starts about here: http://www.schlockmercenary.com/2002-06-20 . Less “thinky” but funnier / more approachable than the others.
Although in your case in particular it’s probably justified, since you started off with very confused beliefs on the subject and noticed the mess they were in, as far as suggesting it to other people goes, I don’t understand how or why you’d want to go and change a sense of self like that. If identity is even a meaningful thing to talk about, then there’s a true answer to the question “which beings can accurately be labelled ‘me’?”, and having the wrong belief about the answer to that question can mean you step onto a transporter pad and are obliterated. If I believe that transporters are murder-and-clone machines, then I also believe that self-modifying to believe otherwise is suicidal.
Obviously, when I’m awake, I enjoy life, and want to keep enjoying life.
Perhaps that is not so obvious. While you are awake, do you actually have that want while it is not in your attention? Which is surely most of the time.
If you are puzzled about where the want goes while you are asleep, should you also be puzzled about where it is while you are awake and oblivious to it? Or looking at it the other way, if the latter does not puzzle you, should the former? And if the former does not, should the Long Sleep of cryonics?
Perhaps this is a tree-falls-in-forest-does-it-make-a-sound question. There is (1) your experience of a want while you are contemplating it, and (2) the thing that you are contemplating at such moments. Both are blurred together by the word “want”. (1) is something that comes and goes even during wakefulness; (2) would seem to be a more enduring sort of thing that still exists while your attention is not on it, including during sleep, temporarily “dying” on an operating table, or, if cryonics works, being frozen.
I think you’ve helped me see that I’m even more confused than I realised! It’s true that I can’t go down the road of ‘if I do not currently care about something, does it matter?’ since this applies when I am awake as well. I’m still not sure how to resolve this, though. Do I say to myself ‘the thing I care about continues to exist (or potentially exist) even when I do not actively care about it, and I should therefore act right now as if I will still care about it even when I stop caring due to inattention or unconsciousness’?
I think that seems like a pretty solid thing to think, and is useful, but when I say it to myself right now, it doesn’t feel quite right. For now I’ll meditate on it and see if I can internalise that message. Thanks for the help!
I have trouble understanding why people think something spooky would happen if you could cryopreserve and revive a human brain in good shape, something that apparently doesn’t happen when you do the same to other human organs.
He’s not talking about death spookily happening to him during cryopreservation; he explicitly compares it to sleep. His question is closer to “does being killed in your sleep matter? Is it actually worth taking extra measures to make sure it doesn’t happen, considering that it can’t have negative effects on you and only affects your potential future happiness?”, with a side of feeling (on an emotional level that makes him notice he’s confused) that the differences between cryopreservation and sleep affect the answer to the question.
I think I responded to what you actually said there, but I’m not entirely sure what you’re saying. What’s the “something spooky”? What is it that happens “when you do that to other human organs” that even meaningfully applies to brains and spleens alike?