I noticed, but wasn’t distracted.
It also seemed to me that assigning 0.001 to Knox’s guilt was safer than assigning 0.999 to Guede’s guilt, yet people who assigned extreme estimates wanted to assign equally extreme estimates to both. I’m not confident this is an error, though, because the case for Guede’s guilt looks strong, and it might be that fewer than 1 in 1,000 people facing this state of evidence would be innocent. On the other hand, I recall a strong tendency for people to say 0.10 and 0.90, 0.05 and 0.95, 0.01 and 0.99, or 0.001 and 0.999, and I don’t see how that could happen naturally.
It looks to me like the clash between the concepts of “overconfidence and calibration” and “privileging the hypothesis” may also be behind my horrible LHC inconsistency.
That’s interesting. I’m wondering if you could elaborate on why you think that’s so, since I would have guessed the opposite.
It’s a question of whether errors in the story you know make the probability more extreme or less extreme. Knox seems like a bystander, pretty much, so the “privileging the hypothesis” concept applies to her. Guede seems pretty definitely involved, but the probability of an error or misunderstanding in the story might not be as low as 1 in 1,000, and errors in his story make the probability less extreme.
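To put made-up numbers on that: suppose there’s some probability ε that the story as I understand it is badly wrong, so that roughly P(guilty) = (1 − ε)·P(guilty | story right) + ε·P(guilty | story wrong). Even if I’d assign 0.999 conditional on the story being right, an ε of 0.01 with, say, 0.5 conditional on the story being wrong pulls the total down to about 0.99 × 0.999 + 0.01 × 0.5 ≈ 0.994. For Knox the same error term works the other way: if the story implicating her is wrong, her probability collapses toward the background rate for a bystander, so allowing for error makes her estimate more extreme rather than less.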
It’s a question of how you try to apply compensation for overconfidence. With Guede, you apply compensation by lowering the probability of his guilt. But you can’t just take everyone in the world and say that to compensate for overconfidence you’re going to assign a non-extremely-low probability that they murdered Meredith.
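As a rough sanity check: there are something like seven billion people in the world, and “who murdered Meredith” has at most a handful of answers, so the guilt probabilities summed over everyone should come to a few at most. Assigning everyone at least 0.001 would make that sum come to millions; whatever compensation for overconfidence you apply has to be concentrated on the few people the evidence actually points at.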
You’re saying that sometimes compensating for overconfidence means moving a probability further away from 50%? That it sometimes means moving a probability estimate closer to some sort of “base rate”? Interesting and worth talking about more, I think. For one thing, it gets you right into the “reference class tennis” you’ve talked about elsewhere, which in itself deserves further discussion.
Yup.
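In rough symbols, the compensation looks like shrinking the raw estimate toward some reference-class rate, something like p′ = λ·p_base + (1 − λ)·p_raw. For a random bystander, p_base is tiny (on the order of one over the population), so the shrinkage pushes the estimate further from 50%; for someone already placed at the scene, the relevant rate is “how often do stories this strong turn out to be wrong,” which is nowhere near that small, so the shrinkage pulls toward 50%. Which reference class supplies p_base is exactly where the tennis starts.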