Seems like a general issue with Bayesian probabilities? Like, I’m making an argument at >1000:1 odds ratio, it’s not meant to be 100%.
No? With normal probabilities, I can make bets and check my calibration. That’s not possible here.
Then the problem is that you can’t make bets and check your calibration, not that some people will arrive at the wrong conclusion, which is inevitable with probabilistic reasoning.
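For context on the calibration point being debated: a minimal sketch (not from the exchange above, all names hypothetical) of what "making bets and checking your calibration" looks like with ordinary probabilities. You record each prediction as a stated probability plus an outcome, bucket the predictions by stated probability, and compare stated probability to observed frequency in each bucket. The second speaker's objection is that a one-off >1000:1 claim never accumulates a track record like this.

```python
# Hypothetical calibration check over a log of probabilistic bets.
from collections import defaultdict

def calibration_report(predictions, bucket_width=0.1):
    """predictions: iterable of (stated_probability, outcome_bool)."""
    buckets = defaultdict(list)
    for p, outcome in predictions:
        # Group predictions into buckets like [0.0, 0.1), [0.1, 0.2), ...
        bucket = min(int(p / bucket_width), int(1 / bucket_width) - 1)
        buckets[bucket].append((p, outcome))
    report = []
    for bucket in sorted(buckets):
        entries = buckets[bucket]
        mean_stated = sum(p for p, _ in entries) / len(entries)
        observed = sum(1 for _, o in entries if o) / len(entries)
        report.append((mean_stated, observed, len(entries)))
    return report

# Hypothetical bet log: well-calibrated forecasts have observed frequencies
# close to the stated probabilities in each bucket.
log = [(0.9, True), (0.9, True), (0.9, False), (0.3, False), (0.3, True)]
for stated, observed, n in calibration_report(log):
    print(f"stated ~ {stated:.2f}, observed = {observed:.2f} (n={n})")
```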