An example of this: CFAR has published results from an experiment testing whether they could improve people’s probability estimates by asking them how surprised they’d be if the truth about some question turned out one way or another. They expected it would help, but it didn’t. And that doesn’t surprise me: if imagined feelings of surprise contained information that naive probability-estimation methods lack, why wouldn’t we have evolved to tap that information automatically?
Because so few of our ancestors died because they got numerical probability estimates wrong.
I agree with the general idea in your post, but I don’t think it strongly predicts that CFAR’s experiment would fail. Moreover, if it predicts that, why doesn’t it also predict that we should have evolved to sample our intuitions multiple times and average the results, since that seems to give more accurate numerical estimates? (I don’t actually think this single article is very strong evidence for or against this interpretation of the hypothesis by itself, but neither do I think that CFAR’s experiment is; I think the likelihood ratios aren’t particularly extreme in either case.)
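For concreteness, here’s a minimal sketch of the averaging effect I have in mind, modeling each intuitive guess as the true value plus independent noise. The numbers are illustrative assumptions, not data from CFAR or the linked article:

```python
# Minimal sketch: averaging several noisy guesses beats a single guess,
# assuming each guess = true value + independent Gaussian noise.
# TRUE_VALUE and NOISE_SD are hypothetical, chosen for illustration.
import random

random.seed(0)

TRUE_VALUE = 0.7   # hypothetical true probability
NOISE_SD = 0.15    # hypothetical spread of an intuitive guess
TRIALS = 10_000

def guess():
    """One noisy intuitive estimate of TRUE_VALUE."""
    return TRUE_VALUE + random.gauss(0, NOISE_SD)

def mse(n_samples):
    """Mean squared error when averaging n_samples guesses per trial."""
    total = 0.0
    for _ in range(TRIALS):
        avg = sum(guess() for _ in range(n_samples)) / n_samples
        total += (avg - TRUE_VALUE) ** 2
    return total / TRIALS

for n in (1, 2, 5, 10):
    print(f"{n:>2} sample(s): MSE = {mse(n):.5f}")
```

With fully independent noise the squared error shrinks roughly as 1/n; in reality, repeated guesses from one person are correlated, so the gain is smaller, but still positive.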
Ah, you’re right. Will edit post to reflect that.