This sounds like an unrealistically huge discount rate. To be precise, you anticipate:
(a) One week of being really unhappy while you go through the process of making new friends (perhaps with someone else who’s really unhappy for similar reasons). I assume here that you do not find the process of “making a new friend” to be itself enjoyable enough to compensate. I also suspect that you would start getting over the psychological shock almost immediately, but let’s suppose it actually does take until you’ve made a friend deep enough to have intimate conversations with, and let’s suppose that this does take a whole week.
(b) N years of living happily ever after.
It’s really hard to see how the former observer-moments outweigh the latter observer-moments.
I think this is what commenters are probably trying to express when they wonder whether you’re thinking in the mode we name “rational”: it seems more like a decision made by mentally fleeing from the sheer terror of imagining the worst possible instant of the worst possible scenario than a choice made by weighing and balancing.
I also tend to think of cryonics as a prophylactic for freak occurrences rather than inevitable death of old age, meaning that if you sign up now and then have to get suspended in the next 10 years for some reason, I’d rate a pretty good chance that you wake up before all your friends are dead of old age. But that shouldn’t even be an issue. As soon as you weigh a week against N years, it looks pretty clear that you’re not making your decision around the most important stakes in the balance.
I know you don’t endorse consequentialism, but it seems to me that this is just exactly the sort of issue where careful verbal thinking really does help people in real life, a lot—when people make decisions by focusing on one stake that weighs huge in their thoughts but obviously isn’t the most important stake, where here the stakes are “how I (imagine) feeling in the very first instant of waking up” versus “how I feel for the rest of my entire second life”. Deontologist or not, I don’t see how you could argue that it would be a better world for everyone if we all made decisions that way. Once you point it out, it just seems like an obvious bias—for an expected utility maximizer, a formal bias; but obviously wrong even in an informal sense.
I think that the distress would itself inhibit me in my friend-making attempts. It is a skill that I have to apply, not a chemical reaction where if you put me in a room with a friendly stranger and stir, poof, friendship.
Um… would I deeply offend you if I suggested that, perhaps, your worst fears and nightmares are not 100% reflective of what would actually happen in reality? I mean, what you’re saying here is that if you wake up without friends, you’ll be so shocked and traumatized that you’ll never make any friends again ever, despite any future friend-finding or friend-making-prediction software that could potentially be brought to bear. You’re saying that your problem here is unsolvable in the long run by powers up to and including Friendly superintelligence and it just doesn’t seem like THAT LEVEL of difficulty. Or you’re saying that the short-run problem is so terrible, so agonizing, that no amount of future life and happiness can compensate for it, and once again it just doesn’t seem THAT BAD. And I’ve already talked about how pitting verbal thought against this sort of raw fear really is one of those places where rationality excels at actually improving our lives.
Are you sure this is your true rejection or is there something even worse waiting in the wings?
I’m making projections based on psychological facts about myself. Anticipating being friendless and alone makes me unhappy all by itself; but I do have some data on how I get when it actually happens. I don’t think I would be able to bring these clever solutions to bear if that actually happened (at the correspondingly greater magnitude).
I do consider this a problem, so I am actively trying to arrange to have someone I’d find suitable signed up (either direction would work). This is probably just a matter of time, since my top comment here did yield responses. I’d bet you money, if you like, that (barring financial disaster on my part) I’ll be signed up within the next two years.
I asked this elsewhere, but I’ll ask again: what if the unhappiness and distress caused by the lack of friends could suddenly just disappear? If you could voluntarily suppress it, or stop suppressing it? There will almost certainly be technology in a post-revival future to let you do that, and you could wake up with that ability already set up.
This is an interesting point to consider, and I’m one who’s offered a lot of reasons to not sign up for cryonics.
For the record, a lower bound on my “true rejection” is “I’d sign up if it was free”.