But insisting that this is irrational underestimates how important informativity is to our everyday thought and talk.
As other authors emphasize, in most contexts it makes sense to trade accuracy for informativity. Consider the alternative: widening your intervals to obtain genuine 90% hit rates. (...) Asked when you’ll arrive for dinner, instead of “5:30ish” you say, “Between 5 and 8”.
This bit is weird to me. There’s no reason why people should use 90% intervals as opposed to 50% intervals in daily life. The ask is just that they widen it when specifically asked for a 90% interval.
My framing would be: when people give intervals in daily life, they’re typically inclined to give ~50% confidence intervals (right? Something like that?). When asked for a (“90%”) interval by a researcher, they’re inclined to give a normal-sounding interval. But this is a mistake, because the researcher asked for a very strange construct — a 90% interval turns out to be an interval where you’re not supposed to say what you think the answer is, but instead give an absurdly wide distribution that you’re almost never outside of.
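To put some numbers on “absurdly wide”: here’s a quick sketch (my own illustration, not from the thread) of how much wider a 90% interval is than a 50% one, assuming the person’s belief about the quantity is roughly normal. Only the ratio matters, so the sigma is arbitrary.

```python
from statistics import NormalDist

# Hypothetical belief distribution; sigma is arbitrary since we only compare ratios.
belief = NormalDist(mu=0.0, sigma=1.0)

def central_halfwidth(dist, coverage):
    """Half-width of the centered `coverage` interval (e.g. 0.9 for a 90% interval).
    The centered X interval runs from the (0.5 - X/2) to the (0.5 + X/2) quantile."""
    return dist.inv_cdf(0.5 + coverage / 2) - dist.mean

w50 = central_halfwidth(belief, 0.50)  # ~0.674 * sigma
w90 = central_halfwidth(belief, 0.90)  # ~1.645 * sigma
print(f"90% interval is {w90 / w50:.2f}x wider than the 50% interval")  # ~2.44x
```

So under a normal belief, the 90% interval is roughly two and a half times as wide as the interval someone would naturally report — which is why honest 90% answers sound so strange (“between 5 and 8”).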
Incidentally — if you ask people for centered 20% confidence intervals (40th to 60th percentile), do you get that they’re underconfident?
Yeah that’s a reasonable way to look at it. I’m not sure how much the two approaches really disagree: both are saying that the actual intervals people are giving are narrower than their genuine 90% intervals, and both presumably say that this is modulated by the fact that in everyday life, 50% intervals tend to be better. Right?
I take the point that the bit at the end might misrepresent what the irrationality interpretation is saying, though!
I haven’t come across any interval-estimation studies that ask for intervals narrower than 20%, though Don Moore (probably THE expert on this stuff) told me that people have told him about unpublished findings where yes, when they ask for 20% intervals people are underprecise.
There definitely are situations with estimation (variants on the two-point method) where people look overconfident in estimates >50% and underconfident in estimates <50%, though you don’t always get that.
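For what it’s worth, the shared claim above — that reported intervals are narrower than genuine 90% intervals — can be sketched with a toy simulation (my own, not from the thread): a well-calibrated judge who reports their ~50% interval when asked for a 90% one will score roughly a 50% hit rate, and so look overconfident.

```python
import random
from statistics import NormalDist

random.seed(0)

# Hypothetical judge: beliefs are a standard normal centered on the truth's
# distribution, but they report a centered 50% interval when asked for 90%.
reported_coverage = 0.50
z = NormalDist().inv_cdf(0.5 + reported_coverage / 2)  # half-width in sigmas (~0.674)

trials = 10_000
hits = 0
for _ in range(trials):
    truth = random.gauss(0, 1)       # truth drawn from the judge's belief distribution
    hits += (-z <= truth <= z)       # did the reported interval contain the truth?

print(f"hit rate on the '90%' task: {hits / trials:.0%}")  # ~50%, looks overconfident
```

The judge isn’t miscalibrated about the world here; the apparent overconfidence comes entirely from answering the “90%” question with an everyday-sized interval.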
Yeah sounds right to me!
Nice, thanks!