Describing this as being averse to risks doesn’t make much sense to me. Couldn’t a pro-cryonics person equally well justify her decision as being motivated by risk aversion? By choosing not to be preserved in the event of death, you risk missing out on futures that are worth living in. If you want to venture into bizarre and unlikely science-fiction territory, as with your dystopian cannon-fodder speculation, you could just as easily construct nightmare scenarios where cryonics is the better choice. Simply declaring yourself to have “high risk aversion” doesn’t support one side over the other here.
This reminds me of a similar trope concerning wills: someone could avoid even thinking about setting up a will because that would be “tempting fate,” or take the opposite position, that not having a will is tempting fate and makes it dramatically more likely you’ll get hit by a bus the next day. Neither side there is very reasonable, of course.
I call it risk aversion because if cryonics works at all, it raises the stakes. The money spent on signing up is a sure loss, so it doesn’t factor into the risk, and if I get frozen and just stay dead indefinitely (for whatever reason), then all I’ve lost compared to not signing up is that money and possibly some psychological closure for my loved ones. But the scenarios in which cryonics results in me being around for longer—possibly indefinitely—are ones which could be very extreme, in either direction. I’m not comfortable with such extreme stakes: I prefer everything I have to deal with to fit within a finite lifespan, absent near-certainty that a longer lifespan would be awesome.
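To make the shape of that reasoning concrete, here’s a toy expected-utility calculation in Python. Every probability and payoff in it is a made-up placeholder, and CARA utility is just one standard way to model risk aversion; the point is only that a concave utility function can rationally reject a gamble that a risk-neutral agent would take.

```python
# A toy expected-utility model of the gamble, with entirely made-up
# numbers: it illustrates the shape of the argument, not anyone's
# actual estimates of revival probabilities or future quality.
import math

def cara_utility(x, a):
    """CARA utility u(x) = (1 - exp(-a*x)) / a. Concave for a > 0:
    large gains saturate while large losses hurt without bound.
    As a -> 0 it approaches u(x) = x, i.e. risk neutrality."""
    if a == 0:
        return x
    return (1 - math.exp(-a * x)) / a

# Hypothetical outcomes of signing up, net of a sure cost of 1 unit.
# Each entry is (probability, net value); placeholders, not forecasts.
sign_up = [
    (0.90,   -1),   # frozen but never revived: lose only the cost
    (0.05, +299),   # revived into a future worth living in
    (0.05, -101),   # revived into a future you'd pay to avoid
]
decline = [(1.0, 0)]  # baseline: a normal finite life, utility 0

def expected_utility(lottery, a):
    return sum(p * cara_utility(v, a) for p, v in lottery)

for a in (0.0, 0.05):  # risk-neutral vs. mildly risk-averse
    eu_sign = expected_utility(sign_up, a)
    eu_decl = expected_utility(decline, a)
    choice = "sign up" if eu_sign > eu_decl else "decline"
    print(f"a={a}: EU(sign up)={eu_sign:+.1f}, EU(decline)={eu_decl:+.1f} -> {choice}")

# With these numbers, at a=0 the gamble's expected value is +9, so a
# risk-neutral agent takes it; at a=0.05 the small chance of the very
# bad future dominates (EU around -155), so a risk-averse agent declines.
```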
I don’t doubt that there are some “nightmare” situations in which I’d prefer cryonics—I’d rather be frozen than spend the next seventy years being tortured, for example—but I don’t live in one of those situations.
That’s starting to sound like a general argument for shorter lifetimes over longer ones. Is there a reason this wouldn’t apply just as well to living for five more years versus fifty? There’s more room for extreme positive or negative experiences in the extra forty-five years.
Not at all—I’d take straight-up immortality, if somebody offered, although I’d want a suicide-option loophole for cases where I’m the only person to survive the heat death of the universe or something. Perhaps I unduly value the (illusion of?) control over my situation. But my reasoning treats the choice as a gamble: cryonics unambiguously is one, which could go well or badly and costs money to play, and my risk aversion makes me prefer not to take it.
Are you just scared of the idea of evil aliens, or do you actually think that it’s a significant risk that cryonicists recklessly ignore?
It’s not high on my list of phobias. I don’t judge the risk to be very serious. But then, the tiny risk of evil aliens isn’t opposed to a great chance of eternal bliss; it’s competing with an equally tiny chance of something very nice.
I would guess that however small the chances of being reanimated by benevolent people are, the chances of being reanimated by non-benevolent people are much smaller, just because any benevolent person with the capacity to do so cheaply will want to do so, while most non-benevolent futures I can imagine won’t bother.
Sadists exist even in the present. Unethical research programs are not unheard of in history. This is a little like saying that I shouldn’t worry about walking alone in a city at night in an area of uncertain crime rate, because if someone benevolent happens by they’ll buy me ice cream, and anyone who doesn’t wish me well will just ignore me.
But you wouldn’t choose to die rather than walk through the city, would you?
It’s hard for me to take the nightmare science fiction scenarios too seriously when the default action comes with a well-established, nonfictional nightmare: you don’t sign up for cryonics, you die, and that’s the end.
Economics are key here. What do people stand to gain from acting on you, or against you?
Also note that notions of “benevolence” have varied throughout the ages, and benevolence has not been a monotonically increasing function of time!
There have been times and places in this world where a lone drifter would, by default, have been “benevolently” enslaved by the authorities, and where that default changed to “put to death” several decades later.
How well one is treated always depends on the economic and political power of the group one is associated with. Do our notions of lawful ownership match those of ancient civilizations? In broad outline, yes, but for specific artifacts they diverge dramatically. If we somehow managed to clone Tutankhamen, recover his mind from the ether, and re-implant it, what are the chances he’s going to get all of his stuff back?
I agree the chances are much smaller, but the question is what happens when you multiply by utility.
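To spell out that multiplication with placeholder numbers (not estimates of anything): a bad-revival scenario can be ten times less likely than a good one and still dominate the sum, if it is judged sufficiently worse.

```python
# Placeholder numbers only: the bad outcome is ten times less likely,
# but if it is a hundred times worse, it dominates the expected value.
p_good, u_good = 1e-2,    +100   # revived by benevolent people
p_bad,  u_bad  = 1e-3, -10_000   # revived by non-benevolent people

print(p_good * u_good + p_bad * u_bad)  # 1.0 + (-10.0) = -9.0: net negative
```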