I mean something like the second thing. Basically, I invariably would rather bet one dollar than bet two when the expected value is identical for both bets—even odds, say. And if you make it a $1000 bet versus $2000, I’ll probably prefer the first bet over the second even if its expected value is strictly worse, simply because I can’t tolerate any risk of being out two thousand dollars. (I can’t tolerate much risk of being out a thousand either, given my poor-grad-student finances, but this is assuming I have no “don’t gamble at all” option.)
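To make the arithmetic concrete, here is a minimal sketch of how a risk-averse bettor evaluates the two even-odds bets. The square-root utility function and the $5,000 bankroll are illustrative assumptions, not figures from the discussion.

```python
import math

# Illustrative assumptions: a modest bankroll and two even-odds bets.
bankroll = 5_000         # hypothetical starting wealth in dollars
stakes = [1_000, 2_000]  # the two bet sizes being compared

def utility(wealth):
    """A simple concave (risk-averse) utility function: sqrt of wealth."""
    return math.sqrt(wealth)

for stake in stakes:
    # Even odds: win the stake or lose it, each with probability 0.5.
    expected_value = 0.5 * (bankroll + stake) + 0.5 * (bankroll - stake)
    expected_utility = 0.5 * utility(bankroll + stake) + 0.5 * utility(bankroll - stake)
    print(f"${stake} bet: expected value = {expected_value:.0f}, "
          f"expected utility = {expected_utility:.2f}")

# Both bets leave expected wealth unchanged at $5,000, but the concave
# utility gives the smaller bet a strictly higher expected utility
# (~70.35 vs ~69.22), which is what "risk aversion" cashes out to.
```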
I show no particular tendency to flinch from the deaths of those near me who were not preserved. Do you think my fear of my own death is so much greater as to drive me to irrationality only there, and only on cryonics? I could as easily accuse you of sour grapes for presently not having the money to sign up. Not that I am so accusing—but be wary of who you accuse of rationalization; there are many tragedies in this universe, but you should be careful not to go around accepting the ones that aren’t inevitable.
When I spoke of “not dealing with it”, I didn’t mean to say that you do this with people who die and aren’t signed up for cryonics. (I had already read and was very moved by your piece on Yehuda.) When someone does get frozen, though, it’s easy to categorize them as “maybe not dead”—since if a frozen person weren’t maybe-not-dead, no one would be frozen.
Alicorn, not everything that is less than absolutely awful to believe, is therefore false. In the end, either the information is there in the brain or not, and that’s a question of neuroscience and the limits of possible revival tech; that’s not something which can possibly be settled by observing which answers are comforting or discomforting.
I’m obviously not being very clear. I’m not making a case that it’s irrational to sign up for cryonics—I’m just saying it’s not appropriate for someone with very high risk aversion, such as myself. I’m informed by the same person who taught me about levels of risk aversion in the first place that no given level of risk aversion is necessarily rational or irrational; it’s just a personal characteristic. It’s quite possible that by making these choices you’ll be around, enjoying a great quality of life, in four thousand years, and I won’t. That would be awesome for you and less awesome for me. I’m just not willing to take the bet.
Describing this as risk aversion doesn’t make much sense to me. Couldn’t a pro-cryonics person equally well justify her decision as being motivated by risk aversion? By choosing not to be preserved in the event of death, you risk missing out on futures that are worth living in. If you want to bring bizarre and unlikely science fiction into it, as with your dystopian cannon-fodder speculation, you could easily construct nightmare scenarios where cryonics is the better choice. Simply declaring yourself to have “high risk aversion” doesn’t really support one side over the other here.
This reminds me of a similar trope concerning wills: someone could avoid even thinking about setting up a will because that would be “tempting fate,” or could take the opposite position, that not having a will is tempting fate and makes it dramatically more likely that you’ll get hit by a bus the next day. Of course, neither side there is very reasonable.
I call it risk aversion because if cryonics works at all, it ups the stakes. The money dropped on signing up for it is a sure cost, so it doesn’t factor into the risk, and if I get frozen and just stay dead indefinitely (for whatever reason), then all I’ve lost compared to not signing up is that money and possibly some psychological closure for my loved ones. But the scenarios in which cryonics results in me being around for longer—possibly indefinitely—are ones which could be very extreme, in either direction. I’m not comfortable with such extreme stakes: I prefer everything I have to deal with to be within my finite lifespan, absent near-certainty that a longer lifespan would be awesome.
I don’t doubt that there are some “nightmare” situations in which I’d prefer cryonics—I’d rather be frozen than spend the next seventy years being tortured, for example—but I don’t live in one of those situations.
That’s starting to sound like a general argument for shorter lifetimes over longer ones. Is there a reason this wouldn’t apply just as well to living for five more years versus fifty? There’s more room for extreme positive or negative experiences in the extra 45 years.
Not at all—I’d take straight up immortality, if somebody offered, although I’d rather have a suicide option loophole for cases where I’m the only person to survive the heat death of the universe or something. Perhaps I unduly value the (illusion of?) control over my situation. But my reasoning is about the choice as a gamble: my risk aversion makes me prefer not to take the gamble that cryonics unambiguously is, which could go well or badly and has a cost to play.
Are you just scared of the idea of evil aliens, or do you actually think that it’s a significant risk that cryonicists recklessly ignore?
It’s not high on my list of phobias. I don’t judge the risk to be very serious. But then, the tiny risk of evil aliens isn’t opposed to a great chance of eternal bliss; it’s competing with an equally tiny chance of something very nice.
I would guess that however small the chances of being reanimated by benevolent people are, the chances of being reanimated by non-benevolent people are much smaller, just because any benevolent person with the capacity to do so cheaply will want to do so, while most non-benevolent futures I can imagine won’t bother.
Sadists exist even in the present. Unethical research programs are not unheard of in history. This is a little like saying that I shouldn’t worry about walking alone in a city at night in an area of uncertain crime rate, because if someone benevolent happens by they’ll buy me ice cream, and anyone who doesn’t wish me well will just ignore me.
But you wouldn’t choose to die rather than walk through the city, would you?
It’s hard for me to take the nightmare science fiction scenarios too seriously when the default action comes with a well-established, nonfictional nightmare: you don’t sign up for cryonics, you die, and that’s the end.
Economic incentives are key here. What do people have to gain from taking particular actions toward you or against you?
Also note that notions of “benevolence” have varied throughout the ages—and it has not been a monotonically increasing function!
There are times and places in this world when a lone drifter would have been—by default—“benevolently” enslaved by the authorities, but where this default action would change to “put to death” several decades later.
How well you are treated always depends on the economic and political power of the group you are associated with. Do our notions of lawful ownership match those of ancient civilizations? They match in broad outline, but for specific artifacts, our notions diverge dramatically. If we somehow managed to clone Tutankhamen, recover his mind from the ether, and re-implant it, what are the chances he’s going to get all of his stuff back?
I agree the chances are much smaller, but the question is what happens when you multiply by utility.
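To show what “multiply by utility” means here, a minimal sketch with made-up numbers; none of these probabilities or utilities are claims about actual cryonics odds, only placeholders to make the trade-off visible.

```python
# Hypothetical scenarios: (description, probability, utility). The numbers
# are placeholders, not estimates from the discussion above.
scenarios = [
    ("revived into a good future",   0.050,   1_000),
    ("revived by malevolent actors", 0.001, -20_000),  # much smaller probability, far worse outcome
    ("stay dead, never revived",     0.949,       0),
]

total = 0.0
for name, p, u in scenarios:
    contribution = p * u
    total += contribution
    print(f"{name}: p = {p}, utility = {u}, contribution = {contribution:+.1f}")

print(f"total expected utility = {total:+.1f}")

# Even though the nightmare scenario is fifty times less likely here, its
# contribution is the same order of magnitude as the good one; whether the
# gamble comes out positive depends on the utilities, not the probabilities alone.
```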