From this Bayesian standpoint, the over-correction advice is misguided, because overshooting isn’t harmless; you can make yourself epistemically worse off.
If you update on the evidence from 0.3 to 0.5, and later evidence shows that you still act as if you believe the probability is 0.3, then you should consider irrationally changing your belief. Of course, you risk over-updating to 0.7 or 0.9, but that is a question of expected utility, not an argument against the concept as a whole.
It may even be possible to end up more accurate by over-correcting than by simply pushing in the right direction; inundate yourself with “0.5” propaganda, say.
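To make the expected-accuracy trade-off concrete, here is a minimal sketch, assuming the true probability really is 0.5 and scoring a credence by its expected Brier score (lower is more accurate). The scoring rule and the specific numbers are my illustration, not something the argument above commits to.

```python
# Expected Brier score of holding credence q when the event's
# true probability is p: with probability p the event occurs
# (ideal forecast 1), with probability 1 - p it does not (ideal 0).
def expected_brier(p: float, q: float) -> float:
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

TRUE_P = 0.5  # assumed true probability, matching the example above

for q in (0.3, 0.5, 0.7, 0.9):
    print(f"credence {q:.1f}: expected Brier {expected_brier(TRUE_P, q):.3f}")

# Output:
# credence 0.3: expected Brier 0.290
# credence 0.5: expected Brier 0.250
# credence 0.7: expected Brier 0.290
# credence 0.9: expected Brier 0.410
```

On this score, staying stuck at 0.3 and overshooting to 0.7 are equally inaccurate, while overshooting all the way to 0.9 is strictly worse; that is the sense in which over-correction can make you epistemically worse off, and whether the gamble is worth taking is exactly the expected-utility question.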