The path to rationality is not the path where the evidence chooses the beliefs. The path to rationality is one without beliefs.
On the path to rationality, there are only probabilities.
I realized something the other day. I don’t believe in cryonics.†
But, I believe that cryonics has a chance of working, a small chance.
If I’m ever asked “Do you believe in cryonics?”, I’m going to be careful to respond accurately.
† (By this, I mean that I believe cryonics has a less than 50% chance of working.)
But in this case, the relevant degree of belief is bounded by the utility trade-offs between the cost of cryonics and the other things you could do with the money. For my part, I assign (admittedly by an intuitive, informal process of guesstimation) a low enough probability to cryonics working, given how little information I have saying that it works, that I’d rather give my life-insurance money and remaining assets, when I die, to family, or at least to charity. Both carry higher expected utility over any finite term; that is, I believe they do good faster than cryonics does. And since my family or a charity can carry on doing good after I die just as indefinitely as cryonics can supposedly extend my life after I die, that higher derivative-of-good, combined with the low probability of cryonics working, means cryonics carries too high an opportunity cost for me.
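Here is a rough sketch of that comparison in Python. The numbers are made-up placeholders, not anyone's actual estimates; only the structure of the expected-utility comparison matters.

```python
# Minimal sketch of the opportunity-cost comparison described above.
# All numbers are hypothetical placeholders in arbitrary utility units.

p_cryonics_works = 0.02          # assumed small probability that cryonics works
value_if_it_works = 1_000_000    # assumed utility of a successfully extended life
value_of_bequest = 50_000        # assumed utility of leaving the money to family or charity

eu_cryonics = p_cryonics_works * value_if_it_works
eu_bequest = value_of_bequest    # the bequest pays off with (near) certainty

print(f"Expected utility of cryonics: {eu_cryonics}")
print(f"Expected utility of bequest:  {eu_bequest}")
print("Cryonics has too high an opportunity cost"
      if eu_cryonics < eu_bequest
      else "Cryonics wins on expected utility")
```

Under these placeholder numbers the bequest wins; a sufficiently higher probability or payoff for cryonics would flip the conclusion, which is exactly where the disagreement lies.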
This is a very bad translation, given that most of the people on LW who are signed up for cryonics give it a less than 50% chance of working.
Yeah, that’s my point: this is the translation I had been making myself, and I had to realize that it wasn’t correct.