You can, of course. For most situations, the effort is not worth the trade-off. But making a distinction between 1%, 25%, 50%, 75%, and 99% often is.
You can (at least formally) put error bars on the quantities that go into a Bayesian calculation. The problem, of course, is that error bars are shorthand for a distribution of possible values, and it’s not obvious what a distribution of probabilities means or should mean. Everything operational about probability functions is fully captured by their full set of expectation values, so this is no different from just immediately taking the mean, right?
Well, no. The uncertainties form a higher-level model that not only makes predictions, but also calibrates how much those predictions are likely to move given new data.
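A minimal sketch of the point, using Beta distributions as a stand-in for "error bars on a probability" (my illustration, not anything from the original comment): two distributions over a coin's bias can share the same mean, so they make the identical first-order prediction, yet react very differently to the same evidence.

```python
import numpy as np
from scipy import stats

# Two hypothetical "distributions over a probability" with the same mean:
# both predict P(heads) = 0.5 before seeing any data.
wide = stats.beta(1, 1)      # broad uncertainty about the coin's bias
narrow = stats.beta(50, 50)  # tight uncertainty about the coin's bias

print(wide.mean(), narrow.mean())  # both 0.5: identical point prediction

# Observe 8 heads in 10 flips and do the conjugate Bayesian update.
heads, tails = 8, 2
wide_post = stats.beta(1 + heads, 1 + tails)
narrow_post = stats.beta(50 + heads, 50 + tails)

# The posteriors move very differently: the higher-level uncertainty
# calibrated how far the prediction would shift given new data.
print(wide_post.mean())    # ~0.75
print(narrow_post.mean())  # ~0.527
```

Taking the mean up front would have collapsed both priors to the same 0.5 and thrown away exactly the information that distinguishes their updates.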
It seems to me that this is somewhat related to the problem of logical uncertainty.