One aspect of the frequentist approach that I think deserves mention is how much it compresses the information in its results.
This is pernicious for specialists, but for non-specialists it’s a boon. Rather than carting around precise numerical data for every proposed theory (numerical data that we can never remember, as the uncertainty over the 2⁄3 non-replication figure shows; it gets even worse if we have to remember whole distributions), you only need to remember a binary result: significant/not significant.
(Things would be even simpler if we got rid of the conventional 5% significance level altogether).
I’d suggest that specialists use Bayesian methods in their own work, but that their summaries and press releases be given in a frequentist format.
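To make the compression point concrete, here is a minimal sketch in Python (the data, the flat-prior posterior, and the 0.05 threshold are all illustrative assumptions, not anything from the comment above): the frequentist summary boils a whole sample down to one bit, while even a crude Bayesian summary has to carry at least two numbers describing a distribution.

```python
# Minimal sketch: frequentist reporting as lossy compression of a dataset.
# The sample, prior, and threshold below are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=1.0, size=50)  # hypothetical sample

# Frequentist report: one-sample t-test against mean = 0, kept as a single bit.
t_stat, p_value = stats.ttest_1samp(data, popmean=0.0)
significant = p_value < 0.05  # the only thing a non-specialist has to remember

# Bayesian report: approximate posterior over the mean (flat prior, variance
# estimated from the sample), i.e. a distribution rather than a yes/no answer.
posterior_mean = data.mean()
posterior_sd = data.std(ddof=1) / np.sqrt(len(data))

print(f"frequentist summary: significant = {significant}")
print(f"bayesian summary: mean ~ Normal({posterior_mean:.2f}, {posterior_sd:.2f})")
```

The non-specialist only ever needs the `significant` flag; the specialist is the one who loses out by throwing away the rest.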