Jaynes does go on to discuss everything you have pointed out here. He noted that confidence intervals had commonly been held not to require sufficient statistics, pointed out that some frequentist statisticians had been doubtful on that point, and remarked that if the frequentist estimator had been the sufficient statistic (the minimum lifetime) then the results would have agreed. I think the real point of the story is that he ran through the frequentist calculation for a group of people who did this sort of thing for a living and shocked them with it.
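For anyone who wants to see that contrast concretely, here is a quick Monte Carlo sketch. I'm assuming the standard statement of the example — lifetimes with density p(x|θ) = exp(θ − x) for x ≥ θ and the sample {12, 14, 16} — and I use equal-tailed 90% intervals for simplicity (Jaynes computed the shortest interval, so his exact endpoints differ slightly):

```python
import random
import statistics

# Reconstruction of the setup (assumed, not quoted from Jaynes):
# lifetimes have density p(x|theta) = exp(theta - x) for x >= theta,
# and the observed sample is {12, 14, 16}.
data = [12.0, 14.0, 16.0]
n = len(data)

random.seed(0)
N = 200_000

# Sampling distributions of two pivots, estimated by Monte Carlo:
#   mean-based: xbar - theta ~ mean of n Exp(1) draws
#   min-based:  xmin - theta ~ min of n Exp(1) draws (Exp with rate n)
mean_pivot = sorted(sum(random.expovariate(1.0) for _ in range(n)) / n
                    for _ in range(N))
min_pivot = sorted(min(random.expovariate(1.0) for _ in range(n))
                   for _ in range(N))

def quantile(xs, p):
    """Empirical quantile of a pre-sorted sample."""
    return xs[int(p * len(xs))]

xbar, xmin = statistics.mean(data), min(data)

# 90% equal-tailed confidence intervals for theta from each pivot.
ci_mean = (xbar - quantile(mean_pivot, 0.95), xbar - quantile(mean_pivot, 0.05))
ci_min = (xmin - quantile(min_pivot, 0.95), xmin - quantile(min_pivot, 0.05))

print("mean-based 90% CI:", ci_mean)  # ~ (11.9, 13.7): mostly above 12,
                                      # yet theta cannot exceed min(data)
print("min-based  90% CI:", ci_min)   # ~ (11.0, 12.0): entirely below 12
```

Both pivots give intervals with exactly 90% coverage over repeated samples; only the one built on the sufficient statistic is guaranteed to land inside the logically possible region θ ≤ min(x) for the particular sample at hand, which is the sense in which the two methods "would have agreed".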
You got me: I didn’t read the what-went-wrong subsection that follows the example. (In my defence, I did start reading it, but rolled my eyes and stopped when I got to the claim that “there must be a very basic fallacy in the reasoning underlying the principle of confidence intervals”.)
I suspect I’m not the only one, though, so hopefully my explanation will catch some of the eyeballs that didn’t read Jaynes’s own post-mortem.
[Edit to add: you’re almost certainly right about the real point of the story, but I think my reply was fair given the spirit in which it was presented here, i.e. as a frequentism-v.-Bayesian thing rather than an orthodox-statisticians-are-taught-badly thing.]
Independently reproducing Jaynes’s analysis is excellent, but calling him “cheeky” for “implicitly us[ing] different estimators” is not fair given that he’s explicit on this point.
…given the spirit in which it was presented here, i.e. as a frequentism-v.-Bayesian thing rather than an orthodox-statisticians-are-taught-badly thing.
It’s a frequentism-v.-Bayesian thing to the extent that correct coverage is considered a sufficient condition for good frequentist statistical inference. This is the fallacy that you rolled your eyes at; the room full of shocked frequentists shows that it wasn’t a strawman at the time. [ETA: This isn’t quite right. The “v.-Bayesian” part comes in when correct coverage is considered a necessary condition, not a sufficient condition.]
ETA:
I suspect I’m not the only one, though, so hopefully my explanation will catch some of the eyeballs that didn’t read Jaynes’s own post-mortem.
This is a really good point, and it makes me happy that you wrote your explanation. For people for whom Jaynes’s phrasing gets in the way, your phrasing bypasses the polemics and lets them see the math behind the example.
Independently reproducing Jaynes’s analysis is excellent, but calling him “cheeky” for “implicitly us[ing] different estimators” is not fair given that he’s explicit on this point.
I was wrong to say that Jaynes implicitly used different estimators for the two methods. After the example he does mention it, a fact I missed due to skipping most of the post-mortem. I’ll edit my post higher up to fix that error. (That said, at the risk of being pedantic, I did take care to avoid calling Jaynes-the-person cheeky. I called his example cheeky, as well as his comparison of the frequentist CI to the Bayesian CI, kinda.)
It’s a frequentism-v.-Bayesian thing to the extent that correct coverage is considered a sufficient condition for good frequentist statistical inference. This is the fallacy that you rolled your eyes at; the room full of shocked frequentists shows that it wasn’t a strawman at the time. [ETA: This isn’t quite right. The “v.-Bayesian” part comes in when correct coverage is considered a necessary condition, not a sufficient condition.]
When I read Jaynes’s fallacy claim, I didn’t interpret it as saying that treating coverage as necessary/sufficient was fallacious; I read it as arguing that the use of confidence intervals in general was fallacious. That was what made me roll my eyes. [Edit to clarify: that is, I was rolling my eyes at what I felt was a strawman, but a different one to the one you have in mind.] Having read his post-mortem fully and your reply, I think my initial, eye-roll-inducing interpretation was incorrect, though it was reasonable on first read-through given the context in which the “fallacy” statement appeared.
Fair point.