But in this case, the degree of belief that becomes relevant is bounded by the utility trade-offs involved in the cost of cryonics and the other things you could do with the money. So, for my example, I assign (admittedly, via an intuitive and informal process of guesstimation) a sufficiently low probability to cryonics working (I have sufficiently little information saying it works...) that I’d rather just give life-insurance money and my remaining assets, when I die, to family, or at least to charity, both of which carry higher expected utility over any finite term (that is, in my belief, they do good faster than cryonics does). Since my family or a charity can carry on doing good after I die just as indefinitely as cryonics can supposedly extend my life after I die, their higher derivative-of-good, set against cryonics’ payoff discounted by its low probability of working, means cryonics has too high an opportunity cost for me.
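To make the shape of that comparison concrete, here is a minimal sketch of the expected-value arithmetic I'm gesturing at. All of the numbers are made-up placeholders (the probability, the "good rates", and the horizons are hypothetical, not estimates I'm defending); the point is only how a low probability multiplying one payoff stacks up against a near-certain, faster stream of good.

```python
# Toy expected-utility comparison with hypothetical numbers.
# None of these figures are real estimates; they only illustrate the argument:
# a low probability multiplying cryonics' payoff, versus a near-certain,
# higher "derivative of good" from bequests or charity.

p_cryonics_works = 0.01        # hypothetical: low credence that cryonics works
cryonics_good_per_year = 1.0   # hypothetical: good per year of revived life
donation_good_per_year = 5.0   # hypothetical: good per year from family/charity

def expected_good(prob: float, good_per_year: float, years: float) -> float:
    """Expected good accumulated over a finite horizon of `years`."""
    return prob * good_per_year * years

for years in (10, 100, 1000):
    ev_cryonics = expected_good(p_cryonics_works, cryonics_good_per_year, years)
    ev_donation = expected_good(1.0, donation_good_per_year, years)
    print(f"{years:>5} years: cryonics EV = {ev_cryonics:8.1f}, "
          f"donation EV = {ev_donation:8.1f}")

# Since both streams can in principle run indefinitely, the donation stream
# dominates at every finite horizon whenever
#   donation_good_per_year > p_cryonics_works * cryonics_good_per_year,
# which is the opportunity-cost point above.
```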