It is important here to distinguish two roles of statistics in science: exploration and confirmation. It seems likely that Bayesian methods are more powerful (and less prone to misuse) than non-Bayesian methods in the exploratory paradigm.
However, for the more important task of confirmation, the primary role of statistical theory is to:
1) provide a set of quantitative guidelines for scientists to design effective (confirmatory) experiments and avoid being misled by the results of poorly designed experiments or experiments with inadequate sample sizes (see the sketch after this list);
2) produce results which can be readily interpreted by their scientific peers.
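To make the first point concrete, here is a minimal sketch of the standard normal-approximation sample-size formula, the sort of quantitative guideline the canon provides. The specifics (Python, a two-sided two-sample t-test, effect size d = 0.5, α = 0.05, 80% power) are illustrative assumptions, not anything from the original discussion:

```python
from scipy.stats import norm

# Normal-approximation sample size for a two-sample, two-sided t-test.
# All numbers below are illustrative assumptions.
d = 0.5       # hypothesized effect size (Cohen's d)
alpha = 0.05  # significance level
power = 0.80  # desired power (1 - beta)

z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
z_beta = norm.ppf(power)           # quantile corresponding to the power

# n per group: n = 2 * ((z_alpha + z_beta) / d)^2
n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
print(round(n_per_group))  # ~63 per group (the exact t-test answer is ~64)
```

An experiment run with a fraction of this sample size is exactly the kind of underpowered design that point 1 warns against.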
And here, the NHST canon for regression and comparison of means fulfills both purposes more effectively than the Bayesian equivalents, primarily because of the technical difficulty of Bayesian procedures for even the simplest problems, such as inference on a normal distribution with unknown mean and variance. While a suitably well-designed Bayesian statistics package is one possible remedy, it would still seem preferable in such cases that scientists learn the usual maximum likelihood estimators, so that they at least know the formulas for the statistics they are computing.

And, as satt and gwern have argued in these comments, it is doubtful that a shift to Bayesianism would prevent scientists from making mistakes like that of the psi study: the use of a Bayesian t-test will not always be sufficient to save the day. Conversely, it is also doubtful that the widespread, correct use of Bayesian methods would make a huge difference in most day-to-day science. Well-designed experiments will produce both convincing p-values and convincing likelihood ratios; when NHST is applicable, a Bayesian approach would at best allow perhaps a constant-factor reduction in the necessary sample size.
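To illustrate that last point, here is a minimal sketch showing that, for a one-sample normal model, the t-test p-value and the maximized likelihood ratio are built from the same quantity and so tell the same story about a clear-cut result. The data are simulated and all specific numbers are assumptions for illustration only:

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=100)  # simulated data, true mean 0.5
n = len(x)

# Classical NHST: one-sample t-test of H0: mean = 0.
t, p = ttest_1samp(x, popmean=0.0)

# Maximized likelihood ratio for the same normal model, built from the
# usual maximum likelihood estimators:
#   under H0: sigma0^2 = mean(x^2)
#   under H1: mu = mean(x), sigma1^2 = mean((x - mean(x))^2)
sigma0_sq = np.mean(x**2)
sigma1_sq = np.mean((x - np.mean(x))**2)
lr = (sigma0_sq / sigma1_sq) ** (n / 2)

# The two are monotonically related: LR = (1 + t^2/(n-1))^(n/2).
assert np.isclose(lr, (1 + t**2 / (n - 1)) ** (n / 2))
print(f"t = {t:.2f}, p = {p:.2g}, likelihood ratio = {lr:.3g}")
```

Because the likelihood ratio is a monotone function of t, a convincing value of one implies a convincing value of the other; any disagreement between the two summaries is confined to borderline cases, which is where the constant-factor difference in sample size would show up.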
Even statistically competent individuals have good reason to continue using NHST, or non-Bayesian techniques in general. Just as physicists have not stopped using the Newtonian “approximation” in light of the discovery of relativity, it remains perfectly reasonable to use convenient non-Bayesian techniques when they are “good enough for the job.” An especially important case is non/semi-parametric inference: that is, inference under only very weak assumptions about the relevant probability distributions. Practical ways of doing Bayesian nonparametric inference remain to be developed, and while Bayesian nonparametrics is an active area of statistical research, it seems foolish to hope that implementations of Bayesian non/semi-parametric inference will ever be as computationally scalable as their non-Bayesian counterparts.
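As an example of the kind of cheap, scalable non-Bayesian nonparametric inference meant here, a percentile-bootstrap confidence interval requires nothing beyond i.i.d. sampling. A minimal sketch with simulated, deliberately non-normal data (all specifics are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500)  # skewed data; no normality assumed

# Percentile bootstrap: resample with replacement, recompute the statistic.
boot_means = np.array([
    rng.choice(x, size=len(x), replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: ({lo:.3f}, {hi:.3f})")
```

Each bootstrap replicate is a single pass over the data and the loop parallelizes trivially; that computational profile is what Bayesian nonparametric methods would have to match.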