I get where you’re coming from, but where do you get off the boat? The result is a theorem of probability: if (1) you update by conditioning on e, and (2) you had positive covariance between your own opinion and the truth, then you commit hindsight bias. So to call this irrational, we need to say either that (1) you don’t update by conditioning, or that (2) you don’t have positive covariance between your opinion and the truth. Which do you deny, and why?
The standard route is to deny (2) by implicitly assuming that you know exactly what your prior probability was, at both the prior and the future time. But that’s a radical idealization.
Perhaps more directly to your point: the shift only results in over-estimation if your INITIAL estimate is accurate. Remember, we’re eliciting (i) E(P(e)) and (ii) E(P(e) | e), not (iii) P(e) and (ii) E(P(e) | e). If (i) always equaled (iii) (you always accurately estimated what you really thought at the initial time), then yes, hindsight bias would decrease the accuracy of your estimates. But in contexts where you’re unsure what you think, you WON’T always accurately estimate your prior.
In fact, that’s a theorem. If P has higher-order uncertainty, then there must be some event q such that P(q) ≠ E(P(q)). See this old paper by Samet (https://www.tau.ac.il/~samet/papers/quantified.pdf), and this more recent one with a more elementary proof (https://philarchive.org/rec/DORHU).
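To make the covariance condition concrete, here is a minimal numeric sketch (the 0.7/0.3 setup is made up for illustration, not from the post): you’re unsure whether your prior credence in e was 0.7 or 0.3, and that credence covaries with whether e is true. Conditioning on e then raises your estimate of your own prior from 0.50 to 0.58, a hindsight shift produced by nothing but conditioning plus positive covariance:

```python
# Toy model (hypothetical numbers): you are unsure whether your prior credence
# in e was 0.7 ("optimistic") or 0.3 ("pessimistic"), and that credence
# covaries with whether e is in fact true.
# States: (your prior credence in e, whether e is true, weight under your
# current credal state Pr).
states = [
    (0.7, True,  0.5 * 0.7),   # optimistic and e true
    (0.7, False, 0.5 * 0.3),   # optimistic and e false
    (0.3, True,  0.5 * 0.3),   # pessimistic and e true
    (0.3, False, 0.5 * 0.7),   # pessimistic and e false
]

pr_e      = sum(w for p, is_e, w in states if is_e)                        # Pr(e)
est_prior = sum(p * w for p, is_e, w in states)                            # (i)  E(P(e))
est_after = sum(p * w for p, is_e, w in states if is_e) / pr_e             # (ii) E(P(e) | e)
cov       = sum(p * is_e * w for p, is_e, w in states) - est_prior * pr_e  # Cov(P(e), 1_e)

print(f"E(P(e))      = {est_prior:.3f}")   # 0.500
print(f"E(P(e) | e)  = {est_after:.3f}")   # 0.580  <- the hindsight shift
print(f"Cov(P(e), e) = {cov:.3f}")         # 0.040  <- positive covariance
# Identity behind the theorem: E(P(e) | e) = E(P(e)) + Cov(P(e), 1_e) / Pr(e)
assert abs(est_after - (est_prior + cov / pr_e)) < 1e-12
```

Make the truth of e independent of which prior you have and the covariance vanishes, so the shift disappears; that is exactly the work premise (2) is doing.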
I deny that “hindsight bias”, as a term used in common and specialized parlance, has anything to do with (1). If you respond to the implicit question of “what did you expect at time t” with anything that involves updates from stuff after time t, you are likely committing hindsight bias.
If you are a Bayesian updater, you do change your credence in something by conditioning as time passes. But it is precisely the act of swapping the old subjective probability distribution for the new one that is epistemically incorrect, if the question at issue is what you actually believed before obtaining the new information (and thus before doing the Bayesian update).
This procedure, as described above, will almost always (under the two assumptions you mentioned) output a higher probability for the actual event than the forecaster assigned at the beginning. So it systematically overrates the accuracy of the prediction in a way that is unduly self-serving; if you do not take this effect into account, you will not be able to properly assess the quality of this forecaster in the long run.
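For example (a hypothetical forecaster, with a crude flat shift standing in for the bias): if you score the track record using recalled probabilities rather than the ones actually given, the recalled numbers have been pulled toward the outcomes, so the Brier score comes out better than the original forecasts deserved:

```python
import random

# Hypothetical illustration: recalled probabilities nudged toward the known
# outcome make the apparent track record better, even though the original
# forecasts never changed.
random.seed(0)

def brier(probs, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes (lower = better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

n = 10_000
original = [random.uniform(0.1, 0.9) for _ in range(n)]          # what was actually forecast
outcomes = [1 if random.random() < p else 0 for p in original]   # a world where those forecasts were calibrated
shift = 0.1                                                      # assumed size of the hindsight shift
recalled = [min(1.0, p + shift) if o else max(0.0, p - shift)    # recollection pulled toward the outcome
            for p, o in zip(original, outcomes)]

print(f"Brier score, original forecasts: {brier(original, outcomes):.3f}")
print(f"Brier score, recalled forecasts: {brier(recalled, outcomes):.3f}")  # systematically lower, i.e. looks more accurate
```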