I find the “trapped prior” terminology a bit strange, since a big part of the issue (maybe the main part?) isn’t just having a bad prior probability distribution — it’s bias in the update and observation mechanisms.
I guess I was imagining that the problem was the trapping, not the prior.
Maybe it’s unfortunate that the same word is overloaded to cover “prior probability” (e.g., probability 0.2 that dogs are bad), and “prior information” in the sense of “a mathematical object that represents all of your starting information plus the way you learn from experience.”
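To make the distinction concrete, here’s a toy sketch (my own illustration, not the post’s exact model) of what “bias in the update mechanism” can look like: the perceived strength of the evidence is blended with what the prior already expects, so a confident enough prior can keep the belief pinned — or even push it the wrong way — no matter how much favorable evidence comes in. The `trust` parameter and the specific blending rule are assumptions for illustration.

```python
def biased_update(prior, likelihood_ratio, trust):
    """One belief update where perception itself is biased.

    The perceived likelihood ratio is a blend of the raw evidence and the
    prior's own odds; trust=1.0 recovers ordinary Bayesian updating on the
    odds scale, while low trust lets the prior dominate perception.
    (Toy model for illustration, not the post's exact mechanism.)
    """
    prior_odds = prior / (1 - prior)
    perceived = trust * likelihood_ratio + (1 - trust) * prior_odds
    posterior_odds = prior_odds * perceived
    return posterior_odds / (1 + posterior_odds)

# Someone starts nearly sure dogs are dangerous (P(dogs safe) = 0.05),
# then meets ten friendly dogs (each a raw likelihood ratio of 3 for "safe").
p_unbiased = p_biased = 0.05
for _ in range(10):
    p_unbiased = biased_update(p_unbiased, 3.0, trust=1.0)
    p_biased = biased_update(p_biased, 3.0, trust=0.1)

print(p_unbiased)  # climbs well above 0.9: the evidence gets through
print(p_biased)    # falls below the starting 0.05: the belief is trapped
```

The point of the sketch: the “prior probability” (0.05) is identical in both runs, and in the `trust=1.0` run it updates away just fine. The trapping comes entirely from the distorted update rule — which is why it reads more naturally as a property of the whole learning mechanism (“prior information” in the broad sense) than of the prior probability itself.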