I think I see what you’re saying—that our intuitions for extreme likelihood functions might be as bad as those for extreme prior probabilities. IIRC, research shows that humans have a good sense for probabilities in the neighborhood of 0.5, so I think you’re safe as long as your trials have sampling probabilities around 0.5 and you explicitly and sequentially imagine each counterfactual trial and your resulting feelings of credence.
Right, that’s what I’m saying.
The research result is interesting, but I can still imagine people drifting off by maybe an order of magnitude every 10-20 coinflips. (Maybe by that point an order of magnitude no longer matters much.)
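A minimal sketch of how that drift could happen, assuming each flip's felt likelihood ratio is off by a modest constant factor (the 1.2 and ~12% figures below are made-up illustrations, not from the research mentioned above): the error compounds geometrically, so a small per-flip bias reaches an order of magnitude after roughly 20 flips.

```python
# Sketch: a small, constant miscalibration in the per-flip likelihood
# ratio compounds geometrically. All numbers are illustrative assumptions.
true_lr = 1.2             # assumed true likelihood ratio per heads (e.g. p=0.6 vs p=0.5)
felt_lr = true_lr * 1.12  # assume each flip "feels" ~12% more diagnostic than it is

for n in (10, 20):
    error_factor = (felt_lr / true_lr) ** n
    print(f"after {n} flips, accumulated evidence is off by {error_factor:.1f}x")

# after 10 flips, accumulated evidence is off by 3.1x
# after 20 flips, accumulated evidence is off by 9.6x
```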
I’d expect them to be very wrong when weighing the evidence from a 100-heads result against a 10-heads one if they didn’t explicitly imagine every trial, just because of scope insensitivity. (Maybe these are more extreme cases than are likely to come up in applications.)
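To put rough numbers on the scope-insensitivity worry (a toy comparison under assumed hypotheses: a fair coin vs. a trick coin that always lands heads):

```python
import math

# Toy Bayes factors: always-heads coin (p=1) vs. fair coin (p=0.5).
# P(n heads | always-heads) / P(n heads | fair) = 1 / 0.5**n = 2**n.
for n in (10, 100):
    log10_bf = n * math.log10(2)
    print(f"{n} heads in a row: Bayes factor 2**{n}, about 10^{log10_bf:.0f}")

# 10 heads in a row: Bayes factor 2**10, about 10^3
# 100 heads in a row: Bayes factor 2**100, about 10^30
```

So 100 heads carries roughly ten times as many orders of magnitude of evidence as 10 heads, which is exactly the kind of difference scope insensitivity would flatten.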