I’m confused. How does conservation of expected evidence come in here? When you say “90% sure” do you mean “assign 90% probability of truth to the hypothesis”? You can’t expect that to change. Or do you mean “90% sure that my probability estimate of X% is correct”?
Your probability estimate ALREADY INCLUDES the probability distributions of future experiments (and of future research).
“I am 75% confident that hypothesis X is true—but if X really is true, I expect to gather more and more evidence in favor of X in the future, such that I expect my probability estimate of X to eventually exceed 99%. Of course, right now I am only 75% confident that X is true in the first place, so there is a 25% (subjective) chance that my probability estimate of X will decrease toward 0 instead of increasing toward 1.”
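To make that concrete, here is a minimal Monte Carlo sketch (the 80% experiment reliability and the 50-experiment horizon are made-up illustrative numbers, not part of the original claim): almost every individual run ends near 0 or near 1, yet the average final credence stays at the prior.

```python
import random

# Toy model: prior credence 0.75 that X is true; each "experiment" is a noisy
# binary test that reports the truth 80% of the time (made-up reliability).
PRIOR, RELIABILITY, N_EXPERIMENTS, N_RUNS = 0.75, 0.8, 50, 10_000

final_posteriors = []
for _ in range(N_RUNS):
    x_is_true = random.random() < PRIOR      # sample whether X is actually true
    credence = PRIOR
    for _ in range(N_EXPERIMENTS):
        says_true = random.random() < (RELIABILITY if x_is_true else 1 - RELIABILITY)
        # Bayes' rule for a test with a symmetric error rate
        like_if_true = RELIABILITY if says_true else 1 - RELIABILITY
        like_if_false = 1 - RELIABILITY if says_true else RELIABILITY
        credence = (credence * like_if_true) / (
            credence * like_if_true + (1 - credence) * like_if_false
        )
    final_posteriors.append(credence)

# Roughly 75% of runs end near 1 and 25% end near 0, but the *average* final
# credence stays at the prior: E[posterior] = prior.
print(sum(p > 0.99 for p in final_posteriors) / N_RUNS)  # ≈ 0.75
print(sum(p < 0.01 for p in final_posteriors) / N_RUNS)  # ≈ 0.25
print(sum(final_posteriors) / N_RUNS)                     # ≈ 0.75
```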
That makes perfect sense. And you should probably make BOTH halves of the statement: I expect to increase my estimate to 90% or decrease it to 20%.
Another way to put this: I expect a large chance of a small update upwards, and a small chance of a large update downwards. This still conserves expected evidence.
On net, I expect to end up back where I started, EVEN though there’s a higher chance I’ll get evidence confirming my view.
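Concretely, if we borrow the illustrative numbers above (a 75% prior that ends up at either 90% or 20%), conservation of expected evidence pins down how likely each outcome has to be. A minimal sketch, assuming those are the only two possible endpoints:

```python
# If the only two possible endpoints are 0.9 and 0.2 and the prior is 0.75,
# conservation of expected evidence forces:
#   p_up * 0.9 + (1 - p_up) * 0.2 = 0.75
prior, post_up, post_down = 0.75, 0.9, 0.2
p_up = (prior - post_down) / (post_up - post_down)

print(p_up)                                     # ≈ 0.79: large chance of a small update up
print(1 - p_up)                                 # ≈ 0.21: small chance of a large update down
print(p_up * post_up + (1 - p_up) * post_down)  # ≈ 0.75: the expectation is just the prior
```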
While a true Bayesian’s estimate already includes the probability distributions of future experiments, in practice I don’t think it’s easy for us humans to do that. For instance, I know from past experience that a documentary on X will not incorporate as much nuance and depth as an academic book on X. Given that I know this, I *should* immediately discount the strength of any update to my beliefs about X upon watching the documentary, but in practice it’s hard to do until I actually read the book that provides the nuance.
In a context like that, I definitely have experienced the feeling of “I am pretty sure that I will believe X less confidently upon further research, but right now I can’t help but feel very confident in X.”
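One way to make that discounting concrete is to treat the documentary as a lower-reliability signal, which mechanically shrinks the size of the Bayesian update. A minimal sketch with made-up likelihoods (the `update` helper and all of the numbers are just for illustration):

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule for a single piece of evidence."""
    return (prior * p_evidence_if_true) / (
        prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    )

prior = 0.5  # starting credence in X (made-up number)

# A careful academic book arguing for X is treated as strong evidence...
print(update(prior, p_evidence_if_true=0.9, p_evidence_if_false=0.2))  # ≈ 0.82

# ...while a documentary arguing for X, known to omit nuance, gets a much weaker
# likelihood ratio, so the same "X looks true" impression should move me far less.
print(update(prior, p_evidence_if_true=0.6, p_evidence_if_false=0.4))  # ≈ 0.60
```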
Thank you—this is an important distinction. Are we talking about how something feels, or about probability estimates? I’d argue the error is in using numbers and probability notation to describe feelings of confidence that you haven’t actually tried to be rational about.
The topic of illegible beliefs (related to aliefs), and how to apply math to them, is virtually unexplored.
In practice, what I’m trying to do with exercises like calibration training is derive the latter (a probability estimate) from the former (a feeling of confidence).