>Failure (or ‘success’ with caveats) needn’t merely leave me with the default outcome (nothingness) -- some of the conceivable failure modes are horrifying. I don’t want to go into detail, but there’s the potential for a lot of suffering.
(This, by the way, does seem like both a potentially solid reason not to want cryopreservation, and a possible cryptic motive of people who haven’t thought about this as much as you have, which I gestured at with “afraid of future justice”.)
It’s an interesting one—I think people differ hugely in terms of both how they weigh (actual or potential) happiness against suffering, and how much they care about prolonging life per se. I’m pretty sure I’ve seen people on LW and/or SSC say they would prefer to suffer intensely forever rather than die, whereas I am very much on the opposite side of that question. I’m also unusually conservative when it comes to trading off suffering against happiness.
I don’t know how much this comes down to differing in-the-moment experiences (some people tend to experience positive/negative feelings more/less intensely than others in similar circumstances), differing after-the-fact judgments and even memories (some people tend to forget how bad an unpleasant experience was; some are disproportionately traumatised by it), differing life circumstances, etc. I do suspect it’s largely based on some combination of factors like these, rather than disagreements that are primarily intellectual.
edit: I’ve kind of conflated the happiness-suffering tradeoff and the ‘suffering versus oblivion’ dilemma here. In the second paragraph I was mostly talking about the happiness-suffering tradeoff.
It’s an intellectual disagreement in the sense that it’s part of a false lack of Hope, and if that isn’t being corrected by local gradients I don’t see what other recourse there is besides reason.
If we agreed on the probability of each possible outcome of cryonic preservation, but disagreed on whether the risk was worth it, how would we go about trying to convince the other they were wrong?
The point isn’t to convince each other; the point is to find places where one or the other has true and useful information and ideas that the other doesn’t have.
The point of my post is that the probabilities themselves depend on whether we consider the risk worth it. To say it another way (which flattens some of the phenomenology I’m trying to do, but might get the point across): it’s a coordination problem, and computing beliefs in a CDT way means failing to get the benefits of participating fully in it.
edit: Like, if everyone thought it was worth it, then it would be executed well (maybe), so the probability would be much higher, so it is worth it. A “self-fulfilling prophecy”, from a CDT perspective.
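To make the self-fulfilling structure concrete, here is a minimal toy model in Python. Everything in it is assumed for illustration: the `success_prob` curve (preservation works better the more people sign up) and the 0.5 decision threshold are invented, not taken from the discussion. It just exhibits the two self-consistent equilibria: hold everyone else fixed at low adoption and signing up never looks worth it, while universal adoption justifies itself.

```python
# Toy model of the coordination structure described above.
# All numbers here are invented for illustration only.

def success_prob(adoption: float) -> float:
    """Chance that preservation works, as a function of the fraction of
    people who sign up (more adoption -> better-funded, better-executed)."""
    return 0.05 + 0.70 * adoption

THRESHOLD = 0.5  # sign up only if success looks more likely than not

def equilibrium(initial_adoption: float, steps: int = 100) -> float:
    """Iterate best responses: everyone signs up iff the success
    probability implied by current adoption clears the threshold."""
    adoption = initial_adoption
    for _ in range(steps):
        adoption = 1.0 if success_prob(adoption) >= THRESHOLD else 0.0
    return adoption

# Holding everyone else fixed at low adoption (the CDT-flavoured stance),
# the probability stays low and signing up never looks worth it:
low = equilibrium(0.0)
print(low, success_prob(low))    # low equilibrium: adoption 0, p ~ 0.05

# But universal adoption is also self-consistent: it is worth it
# because it would be executed well -- the "self-fulfilling prophecy":
high = equilibrium(1.0)
print(high, success_prob(high))  # high equilibrium: adoption 1, p ~ 0.75
```

Which fixed point you land in depends only on where the group starts, which is the coordination-problem flavour of the argument: no individual, reasoning causally with everyone else held fixed, ever moves from the low equilibrium to the high one.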