If we agreed on the probability of each possible outcome of cryonic preservation, but disagreed on whether the risk was worth it, how would we go about trying to convince the other they were wrong?
The point isn’t to convince each other, the point is to find places where one or the other has true and useful information and ideas that the other doesn’t have.
The point of my post is that the probabilities themselves depend on whether we consider the risk worth it. To put it another way, which flattens some of the phenomenology I’m trying to do but might get the point across: I’m saying it’s a coordination problem, and computing beliefs in a CDT way fails to capture the benefits of participating fully in the coordination problem.
edit: Like, if everyone thought it was worth it, then it would be executed well (maybe), so the probability of success would be much higher, so it would be worth it. A “self-fulfilling prophecy”, from a CDT perspective.
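To make the loop concrete, here is a minimal toy model (my own illustration, not something from the original post; the function names and numbers are arbitrary assumptions). The probability of good execution rises with adoption, and each person’s sign-up decision is a CDT-style expected-value check that takes that probability as fixed. Iterating belief → decision → adoption shows two self-consistent equilibria:

```python
# Toy model of the "self-fulfilling prophecy": execution quality depends
# on how many people decide cryonics is worth it, and each person's
# decision depends on the resulting probability of success.
# All functions and parameters below are illustrative assumptions.

def success_prob(adoption: float) -> float:
    """Chance of good execution given the fraction of people signed up."""
    return 0.05 + 0.85 * adoption  # more participants -> better execution

def worth_it(p: float, benefit: float = 10.0, cost: float = 1.0) -> bool:
    """A CDT-style expected-value check that holds p fixed."""
    return p * benefit - cost > 0

def fixed_point(initial_adoption: float, steps: int = 50) -> float:
    """Iterate belief -> decision -> adoption until it settles."""
    adoption = initial_adoption
    for _ in range(steps):
        p = success_prob(adoption)
        adoption = 1.0 if worth_it(p) else 0.0
    return adoption

print(fixed_point(0.0))  # pessimistic start settles at 0.0: nobody signs up
print(fixed_point(1.0))  # optimistic start settles at 1.0: everyone signs up
```

A CDT reasoner treats `success_prob` as an exogenous fact and so can get stuck at the low equilibrium; the coordination-problem framing says the belief itself is part of what determines which equilibrium obtains.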