Why am I not signed up for cryonics?
Here’s my model.
In most futures, everyone is simply dead.
There’s a tiny sliver of futures that are better than that, and a tiny sliver of futures that are worse than that.
What are the relative sizes of those slivers, and how much more likely am I to be revived in the “better” futures than in the “worse” futures? I really can’t tell.
I don’t seem to be as terrified of death as many people are. A while back I read the Stoics to reduce my fear of death, and it worked. I am, however, very averse to being revived into a worse-than-death future and not being able to escape.
I bet the hassle and cost of cryonics disincentivize me, too, but when I boot up my internal simulator and simulate a world where cryonics is free and obtained via a 10-question Google form, I still don’t sign up. I ask to be cremated instead.
Cryonics may be reasonable for someone who is more averse to death and less averse to worse-than-death outcomes than I am. Cryonics may also be reasonable for someone who has strong reasons to believe they are more likely to be revived in better-than-death futures than in worse-than-death futures. Finally, there may be a fundamental error in my model.
This does, however, put me into disagreement with both Robin Hanson (“More likely than not, most folks who die today didn’t have to die!”) and Eliezer Yudkowsky (“Not signing up for cryonics [says that] you’ve stopped believing that human life, and your own life, is something of value”).
So are you saying the P(worse-than-death|revived) and the P(better-than-death|revived) probabilities are of similar magnitude? I’m having trouble imagining that. In my mind, you are most likely to be revived because the reviver feels some sort of moral obligation towards you, so the future in which this happens should, on the whole, be pretty decent. If it’s a future of eternal torture, it seems much less likely that something in it will care enough to revive some cryonics patients when it could, for example, design and make a person optimised for experiencing the maximal possible amount of misery. Or, to put it differently, the very fact that something wants to revive you suggests that it cares about a very narrow set of objectives, and if it cares about that set of objectives, it’s likely because they were put there with the aim of achieving a “good” outcome.
(As an aside, I’m not very averse to “worse-than-death” outcomes, so my doubts definitely do arise partially from that, but at the same time I think they are reasonable in their own right.)
Yes. Like, maybe the latter probability is only 10 or 100 times greater than the former probability.
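To make the disagreement concrete, here is a minimal back-of-the-envelope sketch of that conditional expected value. Only the 10x and 100x probability ratios come from the comment above; the utility weights and the helper name ev_of_revival are made-up assumptions for illustration, not anything stated in the thread.

```python
# Rough expected utility of revival, conditional on being revived at all.
# Only the 10x / 100x probability ratios come from the thread; the utility
# weights are illustrative assumptions in arbitrary units.

def ev_of_revival(p_ratio, u_good=1.0, u_bad=-1000.0):
    """Expected utility per revival.

    p_ratio -- P(better|revived) / P(worse|revived)
    u_good  -- utility of a better-than-death future (positive)
    u_bad   -- utility of a worse-than-death future (negative)
    """
    p_worse = 1.0 / (1.0 + p_ratio)       # the two conditional outcomes sum to 1
    p_better = p_ratio / (1.0 + p_ratio)
    return p_better * u_good + p_worse * u_bad

for ratio in (10, 100):
    print(f"P(better)/P(worse) = {ratio:>3}: EV per revival = {ev_of_revival(ratio):+.1f}")
```

With these made-up weights, even a 100:1 probability ratio in favour of good futures leaves the expected value negative, which is why the rest of the thread turns on how heavily the worse-than-death tail should actually be weighted.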
This seems strangely averse to bad outcomes to me. Are you taking into account that the ratio between the goodness of the best possible experiences and the badness of the worst possible experiences (per second, and per year) should be much closer to 1:1 than the ratio of the most intense per second experiences we observe today, for reasons discussed in this post?
Why should we consider possible rather than actual experiences in this context? It seems that cryonics patients who are successfully revived will retain their original reward circuitry, so I don’t see why we should expect their best possible experiences to be as good as their worst possible experiences are bad, given that this is not the case for current humans.
For some of the same reasons depressed people take drugs to elevate their mood.
I like that post very much. I’m trying to make such an update, but it’s hard to tell how much I should adjust from my intuitive impressions.
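Purely as an illustration of how much such an update could matter, here is a small sketch of the same conditional expected value as the assumed badness-to-goodness intensity ratio moves toward 1:1. Everything except the 10x/100x probability ratios is an assumption invented for this sketch.

```python
# Sensitivity of the conditional expected value to the assumed ratio between
# how bad the worst revival outcomes are and how good the best ones are.
# All numbers are illustrative; only the 10x/100x probability ratios come
# from the comments above.

def ev_of_revival(p_ratio, bad_to_good):
    """EV per revival, with u_good = 1 and u_bad = -bad_to_good."""
    p_worse = 1.0 / (1.0 + p_ratio)
    p_better = p_ratio / (1.0 + p_ratio)
    return p_better * 1.0 - p_worse * bad_to_good

for p_ratio in (10, 100):
    for bad_to_good in (1, 5, 50, 500):
        ev = ev_of_revival(p_ratio, bad_to_good)
        print(f"P ratio {p_ratio:>3}:1, bad:good {bad_to_good:>3}:1 -> EV {ev:+.2f}")
```

The closer the intensity ratio sits to 1:1, the smaller the probability ratio needs to be before revival looks positive in expectation; the further it sits from 1:1, the more the worse-than-death tail dominates.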
OK, what? When you say “worse-than-death”, are you including Friendship is Optimal?
What about a variant of Hanson’s future where:
- versions of you repeatedly come into existence, do unfulfilling work for a while, and cease to exist
- no version of you contacts any of the others
- none of these future-selves directly contribute to changing this situation, but
- your memories do make it into a mind that can act more freely than most or all of us today, and
- the experiences of people like your other selves influence the values of this mind, and
- the world stops using unhappy versions of you.
(Edited for fatigue.)
I haven’t read Friendship is Optimal, because I find it difficult to enjoy reading fiction in general.
Not sure how I feel about the described Hansonian future, actually.
I responded to this as a post here: http://lesswrong.com/r/discussion/lw/lrf/can_we_decrease_the_risk_of_worsethandeath/
I … don’t think it does, actually. Well, the bit about “most possible futures are empty” does put you in conflict with Robin Hanson (“More likely than not, most folks who die today didn’t have to die!”), I guess, but the actual thesis seems to fall into the category of Eliezer Yudkowsky’s “you’ve stopped believing that human life, and your own life, is something of value” (after a certain point in history).
If you assign a high probability to these bad futures arriving before you retire, that belief lowers the effective cost of cryonics to you: the opportunity cost is money that would otherwise have gone into retirement accounts you may never get to use.
In the really bad futures you probably don’t experience extra suffering if you sign up for cryonics because all possible types of human minds get simulated.
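As a toy illustration of the retirement-account point above (every number here is a hypothetical assumption, not anything stated in the thread):

```python
# Toy version of the retirement-account opportunity-cost argument above.
# Every number is a hypothetical assumption for illustration only.

retirement_value = 50_000              # value of investing the cryonics money for retirement instead
p_bad_future_before_retirement = 0.5   # assumed chance a bad future preempts retirement

# If a bad future arrives before retirement, those savings never get used,
# so their expected value (the true opportunity cost of cryonics) shrinks.
expected_opportunity_cost = (1 - p_bad_future_before_retirement) * retirement_value
print(f"Effective opportunity cost of cryonics: ${expected_opportunity_cost:,.0f} "
      f"(nominal: ${retirement_value:,.0f})")
```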
Whoa. What? I notice that I am confused. Requesting additional information.
Most of the time, if I read something like that, I’d assume it was merely false—empty posturing from someone who didn’t understand the implications of what they were writing. In this case, though… everything else I’ve seen you write is coherent and precise. I’m inclined to believe your words literally, in which case either A) I’m missing some sort of context or qualifiers or B) you really ought to see a therapist or something.
Do you mean you’re not averse to death decades from now? Does that feel different from the possibility of getting hit by a bus next week?
(Only tangentially related, but I’m curious: what’s your order of magnitude probability estimate that cryonics would actually work?)
No, I’m sorry, but there are simply many atheists who really aren’t that scared of non-existence. We don’t seek it out, we do prefer continuation of our lives and its many joys, but dying doesn’t scare the hell out of us either.
This, in me at least, has nothing to do with depression or anything that requires therapy. I’m not suicidal in the least, even though I’d be scared of being trapped in an SF-style dystopia that didn’t allow me to commit suicide.
What’s that quote that says something to the effect of “I didn’t exist for billions of years before I was born, and it didn’t bother me one bit”?
“I do not fear death. I had been dead for billions and billions of years before I was born, and had not suffered the slightest inconvenience from it.” ― Mark Twain
The difference being that those are biased, whereas lukeprog would be expected to see through them once the true rejection was addressed, which it has been.
I assume. I am not any of the participants in this conversation.
Sorry, I just meant that I seem to be less averse to death than other people. I’d be very sad to die and not have the chance to achieve my goals, but I’m not as terrified of death as many people seem to be. I’ve clarified the original comment.