You can also do Bayesian analysis with ‘non-informative’ priors or weakly-informative priors. As an example of the latter: if you’re trying to figure out the mean change in Earth’s surface temperature, you might say ‘it’s almost certainly more than −50°C and less than 50°C’.
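To make the weakly-informative prior concrete, here’s a minimal sketch of a grid-approximated posterior for a mean, using a flat Uniform(−50, 50) prior and a normal likelihood. The data values and `sigma` are made up for illustration:

```python
import math

def posterior_mean(data, sigma=1.0, lo=-50.0, hi=50.0, n_grid=2001):
    """Grid-approximate posterior mean, with a weakly informative
    Uniform(lo, hi) prior and a Normal(mu, sigma) likelihood."""
    grid = [lo + (hi - lo) * i / (n_grid - 1) for i in range(n_grid)]
    # Flat prior on [lo, hi], so the posterior is proportional to the likelihood there.
    logpost = [-sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2) for mu in grid]
    m = max(logpost)  # subtract the max before exponentiating, for numerical stability
    w = [math.exp(lp - m) for lp in logpost]
    z = sum(w)
    return sum(mu * wi for mu, wi in zip(grid, w)) / z

data = [0.8, 1.1, 0.9, 1.2, 1.0]
print(round(posterior_mean(data), 2))  # ≈ 1.0, the sample mean
```

With a prior this weak, the posterior just tracks the likelihood: the answer is essentially the sample mean, and the prior only rules out absurd values.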
Unfortunately, if there is disagreement merely about how much prior uncertainty is appropriate, then this is sufficient to render the outcome controversial.
I think your initial point is wrong.
There are 3 situations:
Clear prior info: Bayes works well.
Controversial prior info, but posterior dominated by likelihood: Choose weak enough priors to convince skeptics. Bayes works well.
Controversial prior info, posterior not dominated by likelihood: If you choose very weak priors, skeptics won’t be convinced (the posterior stays too diffuse to settle anything). If you choose strong priors, skeptics won’t be convinced (they’ll dispute the prior). Bayes doesn’t work well. Frequentism will also not work well unless you sneak in strong assumptions.
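The contrast between the last two situations can be sketched with a conjugate Normal-Normal update (all numbers here are invented for illustration): with lots of data, a weak prior and a strong skeptical prior land in nearly the same place; with sparse data, the prior still dominates and they disagree badly.

```python
def conjugate_posterior_mean(prior_mean, prior_sd, xbar, n, sigma=1.0):
    """Posterior mean for a Normal mean with known data sd `sigma`,
    a Normal(prior_mean, prior_sd^2) prior, and n observations averaging xbar."""
    precision = 1 / prior_sd**2 + n / sigma**2
    return (prior_mean / prior_sd**2 + n * xbar / sigma**2) / precision

# Likelihood dominates: n = 1000 observations averaging 1.0.
a = conjugate_posterior_mean(0.0, 10.0, xbar=1.0, n=1000)   # weak prior
b = conjugate_posterior_mean(-2.0, 0.5, xbar=1.0, n=1000)   # strong skeptical prior
print(round(a, 2), round(b, 2))  # both ≈ 1.0: the priors wash out

# Sparse data: n = 2 observations, same sample mean.
c = conjugate_posterior_mean(0.0, 10.0, xbar=1.0, n=2)
d = conjugate_posterior_mean(-2.0, 0.5, xbar=1.0, n=2)
print(round(c, 2), round(d, 2))  # far apart: the prior still dominates
```

In the first case any reasonable prior gives roughly the same answer, so skeptics can be convinced; in the second, the conclusion is mostly a restatement of the prior.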
You can get frequentism to work well by its own lights by throwing away information. The canonical example here would be the Mann-Whitney U test. Even if the prior info and data are both too sparse to indicate an adequate sampling distribution/data model, this test will still work (for frequentist values of “work”).
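To illustrate what “throwing away information” means here, a bare-bones version of the U statistic (just the statistic, not the p-value machinery) counts how often one sample exceeds the other, using only the ordering of the pooled data. The sample values below are made up:

```python
def mann_whitney_u(xs, ys):
    """U statistic: the number of (x, y) pairs with x > y, counting
    ties as 1/2. Only the ranks matter, so no parametric data model
    (normality, equal variances, etc.) is assumed."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

xs = [1.1, 2.3, 3.8, 4.0]
ys = [0.2, 0.7, 1.5, 2.9]
print(mann_whitney_u(xs, ys))  # 13.0 (vs. a null mean of n1*n2/2 = 8)
```

Because only ranks enter, the statistic is unchanged by any monotone transformation of the data (e.g. cubing every value) — that invariance is exactly the discarded information that buys freedom from a sampling-distribution assumption.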