If you got a lethal disease with a very expensive treatment, and you could afford it, would you refuse the treatment? What would the threshold price be? Does this idea feel as squicky as spending on cryonics?
Depends: has the treatment been proven to work before?
(Yes, I’ve heard the probability calculations. I don’t make medical decisions based on plausibility figures when it has simply never been seen to work before, even in animal models.)
Part of shutting up and multiplying is multiplying the probability of a payoff by the value of that payoff, and then treating the result as a guaranteed gain of that much utility. This is a basic property of rational utility functions.
(I think. People who know what they’re talking about, feel free to correct me.)
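The multiplication described above can be sketched in a few lines. The numbers here are purely illustrative placeholders, not actual estimates for cryonics or anything else:

```python
# Minimal sketch of "shut up and multiply": the expected utility of an
# uncertain payoff is its probability times its value.

def expected_utility(probability: float, payoff: float) -> float:
    """Treat an uncertain payoff as a certain gain of p * v utility."""
    return probability * payoff

# Illustrative numbers only: a 5% chance at a payoff worth 1000 utilons
# has the same expected utility as a sure gain of 50 utilons.
uncertain = expected_utility(0.05, 1000)
certain = expected_utility(1.0, 50)
assert uncertain == certain
```

The disagreement in this thread is not about this arithmetic, but about what number to plug in for `probability` in the first place.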
You are correct regarding expected-utility calculations, but I make an epistemic separation between plausibilities and probabilities. Plausible means something could happen without contradicting the other things I know about reality. Probable means there is actually evidence something will happen. Expected value deals in probabilities, not plausibilities.
Now, given that cryonics has not been seen to work on, say, rats, I don’t see why I should expect it to already be working on humans. I am willing to reevaluate based on any evidence someone can present to me.
Of course, then there’s the question of what happens on the other side, so to speak, of who is restoring your preserved self and what they’re doing with you. Generally, every answer I’ve heard to that question made my skin crawl.