I deny that “hindsight bias”, as a term used in common and specialized parlance, has anything to do with (1). If you respond to the implicit question of “what did you expect at time t” with anything that involves updates from stuff after time t, you are likely committing hindsight bias.
If you are a Bayesian updater, you do change your credence in something by conditioning as time passes. But substituting the new subjective probability distribution for the old one is precisely what is epistemically incorrect here, if the question you are focused on is what you actually believed before obtaining the new information (and thus before doing the Bayesian update).
This procedure, as described above, will almost always (under the two assumptions you mentioned) output a higher probability on the actual event than the forecaster assigned at the beginning. So it systematically overrates the accuracy of the prediction in a manner that is unduly self-serving; if you do not take this effect into account, you will not be able to properly assess the forecaster's quality in the long run.
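To make that effect concrete, here is a toy simulation (the prior, the signal reliabilities, and all names are illustrative assumptions, not anything from the discussion): a forecaster honestly assigns 0.3 to an event at time t, a noisy signal then arrives, and we grade the forecast using the posterior computed after the signal, but only in the worlds where the event actually occurred, mirroring the hindsight question "what did you expect?" asked after seeing the outcome.

```python
import random

random.seed(0)

PRIOR = 0.3          # forecaster's honest probability at time t (assumed)
TPR, FPR = 0.8, 0.2  # assumed signal reliability: P(signal | E), P(signal | not E)

def posterior(signal: bool) -> float:
    """Bayes' rule for a binary event and a binary signal."""
    like_e = TPR if signal else 1 - TPR
    like_not = FPR if signal else 1 - FPR
    return like_e * PRIOR / (like_e * PRIOR + like_not * (1 - PRIOR))

# Simulate many worlds; keep only those where the event occurred,
# since hindsight conditions on already knowing the outcome.
post_given_event = []
for _ in range(100_000):
    event = random.random() < PRIOR
    signal = random.random() < (TPR if event else FPR)
    if event:
        post_given_event.append(posterior(signal))

avg = sum(post_given_event) / len(post_given_event)
print(f"prior = {PRIOR}, mean posterior given the event occurred = {avg:.3f}")
```

With these particular numbers the mean hindsight-conditioned posterior comes out around 0.5, well above the 0.3 the forecaster actually stated, which is the systematic self-serving inflation described above.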
As I said earlier: