Can you point to examples of these "holy wars"? I haven't encountered anything I'd describe that way, so I don't know whether we've been seeing different things or just interpreting them differently.
Various bits of Jaynes’s “Confidence intervals vs Bayesian intervals” seem holy war-ish to me. Perhaps the juiciest bit (from pages 197-198, or pages 23-24 of the PDF):
> I first presented this result to a recent convention of reliability and quality control statisticians working in the computer and aerospace industries; and at this point the meeting was thrown into an uproar, about a dozen people trying to shout me down at once. They told me, "This is complete nonsense. A method as firmly established and thoroughly worked over as confidence intervals can't possibly do such a thing. You are maligning a very great man; Neyman would never have advocated a method that breaks down on such a simple problem. If you can't do your arithmetic right, you have no business running around giving talks like this".
>
> After partial calm was restored, I went a second time, very slowly and carefully, through the numerical work [...] with all of them leering at me, eager to see who would be the first to catch my mistake [...] In the end they had to concede that my result was correct after all.
>
> To make a long story short, my talk was extended to four hours (all afternoon), and their reaction finally changed to: "My God – why didn't somebody tell me about these things before? My professors and textbooks never said anything about this. Now I have to go back home and recheck everything I've done for years."
>
> This incident makes an interesting commentary on the kind of indoctrination that teachers of orthodox statistics have been giving their students for two generations now.
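
For anyone wondering what result set off the uproar: it's the truncated-exponential lifetime example from the same paper (Example 5, if I'm remembering the numbering right). Here's a minimal sketch in Python, assuming the setup there, p(x|θ) = exp(θ − x) for x > θ with data (12, 14, 16); the exact interval endpoints below are my own computation, not quoted from the paper:

```python
# Model: p(x|theta) = exp(theta - x) for x > theta, data x = (12, 14, 16).
# The data force theta <= min(x), yet the confidence interval built on the
# unbiased estimator theta* = mean(x) - 1 can sit entirely above min(x).
import numpy as np
from scipy import stats, optimize

x = np.array([12.0, 14.0, 16.0])
n = len(x)
x_min = x.min()                      # theta cannot exceed this
theta_star = x.mean() - 1.0          # unbiased estimator: E[x] = theta + 1

# Sampling distribution: theta* - theta = S/n - 1 with S ~ Gamma(n, 1), so a
# 90% confidence interval corresponds to an interval of Gamma(n) mass 0.90.
# Slide the lower quantile to find the shortest such interval.
def length(p_lo):
    return (stats.gamma.ppf(p_lo + 0.90, n) - stats.gamma.ppf(p_lo, n)) / n

res = optimize.minimize_scalar(length, bounds=(1e-9, 0.1 - 1e-9),
                               method="bounded")
s_lo, s_hi = stats.gamma.ppf([res.x, res.x + 0.90], n)
ci = (theta_star - (s_hi / n - 1.0), theta_star - (s_lo / n - 1.0))

# Bayesian 90% interval under a flat prior: the likelihood is proportional
# to exp(n * theta) for theta < min(x) and zero above it, so the posterior
# piles up against min(x) and the highest-density interval is one-sided.
bayes = (x_min + np.log(1 - 0.90) / n, x_min)

print(f"90% confidence interval: ({ci[0]:.4f}, {ci[1]:.4f})")       # entirely above min(x)
print(f"90% Bayesian interval:   ({bayes[0]:.4f}, {bayes[1]:.4f})")  # hugs min(x) from below
print(f"but theta <= min(x) = {x_min}")
```

The punchline: the data force θ ≤ 12, yet the confidence interval lies entirely above 12 while still being a perfectly valid 90% confidence procedure (it covers the true θ in 90% of repeated samples), which is roughly what Jaynes spent the afternoon defending.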