I’m trying to imagine a scenario.
Possibly the decider knows that people sometimes make multiplicative errors, such as transposing digits or misplacing a decimal point, and is confronted with a set of estimates hovering around, say, 0.05 (a value that is plausible under the decider's prior), plus a few estimates around 0.5 and 5.0. Would the correction effectively trim the outliers back to almost exactly 0.05 (because we can't learn much from an estimate that probably contained at least one mistake), so that the decider should go with the highest of the "plausible" numbers?
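Here is a minimal sketch of the kind of model I have in mind, with made-up parameters: the true quantity has a lognormal prior centered on 0.05, and each reported estimate is the true value times small lognormal noise, except that with some probability `p_slip` the estimator also slipped the decimal point by one or two places. The names and numbers (`p_slip`, `sigma`, the grid bounds) are all illustrative assumptions, not anything from the original discussion:

```python
import numpy as np
from scipy.stats import norm

# Toy model (all parameters assumed for illustration):
# - true quantity theta has a lognormal prior centered on 0.05
# - a reported estimate equals theta times multiplicative noise
# - with probability p_slip the reporter also misplaced the decimal
#   point, multiplying by 10^k for k in {-2, -1, +1, +2}

log_theta = np.linspace(-4, 2, 2001)                    # grid over log10(theta)
prior = norm.pdf(log_theta, loc=np.log10(0.05), scale=0.5)

p_slip, sigma = 0.1, 0.15                               # assumed error rates
shifts = [-2, -1, 1, 2]

def likelihood(log_x):
    """P(reported log10 estimate | true log10 theta), on the grid."""
    ok = (1 - p_slip) * norm.pdf(log_x - log_theta, scale=sigma)
    slipped = sum(norm.pdf(log_x - log_theta - k, scale=sigma) for k in shifts)
    return ok + p_slip * slipped / len(shifts)

for x in [0.05, 0.5, 5.0]:
    post = prior * likelihood(np.log10(x))
    post /= post.sum()
    geo_mean = 10 ** np.sum(post * log_theta)           # posterior geometric mean
    print(f"estimate {x:4}: corrected value ~ {geo_mean:.3f}")
```

In this toy version the 5.0 estimate does get pulled back to roughly the prior's neighborhood, since a double decimal slip explains it far better than a genuinely large true value, while 0.5 is only partially discounted, because a single slip and a genuinely higher value are comparably plausible. So the answer seems to depend delicately on the assumed slip probability and prior width.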
It seems to me that the conditional distributions which would lead to actually changing your decision are nearly as likely to introduce error as to correct it.