That’s starting to sound like a general argument for shorter lifetimes over longer ones. Is there a reason this wouldn’t apply just as well to living for five more years versus fifty? There’s more room for extreme positive or negative experiences in the extra 45 years.
Not at all; I’d take straight-up immortality, if somebody offered it, although I’d want a suicide-option loophole for cases where I’m the only person to survive the heat death of the universe or something. Perhaps I unduly value the (illusion of?) control over my situation. But my reasoning is about the choice as a gamble: my risk aversion makes me prefer not to take the gamble that cryonics unambiguously is, one which could go well or badly and has a cost to play.
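To make the risk-aversion point concrete, here is a minimal Python sketch with entirely made-up numbers. It shows how a concave utility function can lead an agent to decline a gamble even when that gamble wins on average:

    import math

    # Illustrative only: the payoffs and probabilities below are invented.
    # A concave utility function (log here) models risk aversion: gains add
    # less utility than equal-sized losses subtract.

    def utility(wealth):
        return math.log(wealth)

    baseline = 100.0               # status quo (stand-in for well-being)
    p_good, good = 0.05, 1000.0    # small chance the gamble pays off hugely
    p_bad, bad = 0.05, 1.0         # equally small chance it goes very badly
    p_same = 1.0 - p_good - p_bad  # otherwise nothing changes

    expected_payoff = p_good * good + p_bad * bad + p_same * baseline
    expected_utility = (p_good * utility(good)
                        + p_bad * utility(bad)
                        + p_same * utility(baseline))

    print(expected_payoff > baseline)            # True: the gamble wins on average
    print(expected_utility > utility(baseline))  # False: a risk-averse agent declines

The figures are arbitrary; the point is only that “positive expected payoff” and “worth gambling on” come apart once you are risk averse.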
Are you just scared of the idea of evil aliens, or do you actually think that it’s a significant risk that cryonicists recklessly ignore?
It’s not high on my list of phobias. I don’t judge the risk to be very serious. But then, the tiny risk of evil aliens isn’t opposed to a great chance of eternal bliss; it’s competing with an equally tiny chance of something very nice.
I would guess that however small the chances of being reanimated by benevolent people are, the chances of being reanimated by non-benevolent people are much smaller, just because any benevolent person with the capacity to do so cheaply will want to do so, while most non-benevolent futures I can imagine won’t bother.
Sadists exist even in the present. Unethical research programs are not unheard of in history. This is a little like saying that I shouldn’t worry about walking alone in a city at night in an area of uncertain crime rate, because if someone benevolent happens by they’ll buy me ice cream, and anyone who doesn’t wish me well will just ignore me.
But you wouldn’t choose to die rather than walk through the city, would you?
It’s hard for me to take the nightmare science fiction scenarios too seriously when the default action comes with a well-established, nonfictional nightmare: you don’t sign up for cryonics, you die, and that’s the end.
Economics are key here. What do people have to gain from taking a given action toward you or against you?
Also note that notions of “benevolence” have varied throughout the ages, and the trend has not been monotonically increasing!
There have been times and places in this world where a lone drifter would, by default, have been “benevolently” enslaved by the authorities, but where that default would change to “put to death” several decades later.
How well one is treated always depends on the economic and political power of the group one is associated with. Do our notions of lawful ownership match those of ancient civilizations? They match in broad outline, but for specific artifacts they diverge dramatically. If we somehow managed to clone Tutankhamen, recover his mind from the ether, and re-implant it, what are the chances he’s going to get all of his stuff back?
I agree the chances are much smaller, but the question is what happens when you multiply by utility.
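A minimal sketch of that multiplication, with purely hypothetical probabilities and utilities: even an outcome a hundred times less likely can dominate the sum if its disutility is large enough.

    # Hypothetical numbers, purely to illustrate multiplying probability by utility.
    p_good, u_good = 1e-3, 1e6   # small chance of something very nice
    p_bad, u_bad = 1e-5, -1e9    # 100x smaller chance of something very bad

    expected = p_good * u_good + p_bad * u_bad
    print(expected)  # -9000.0: the rarer bad branch outweighs the likelier good one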