Ah… yeah, I forgot that the non-null hypothesis being tested isn’t explicitly represented.
Finally, to push back slightly on your main argument: sometimes the most important hypotheses are the ones you can’t state explicitly right now, in which case maybe you need some sort of “default hypothesis” to represent that possibility, though such calculations are certainly something to be more skeptical of.
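To make that a bit more concrete, here’s a toy sketch in Python. The reserved prior mass and the flat likelihood for the catch-all are arbitrary assumptions of mine, which is part of why such calculations deserve the extra skepticism:

```python
# Toy sketch: a Bayesian update over explicit hypotheses plus one catch-all
# "default hypothesis". The flat default_likelihood is an arbitrary stand-in
# for "something I haven't thought of".

def posterior_with_default(priors, likelihoods,
                           default_prior=0.1, default_likelihood=0.5):
    """Return posteriors over the explicit hypotheses plus the catch-all.

    priors      -- prior masses of the explicit hypotheses (summing to 1 - default_prior)
    likelihoods -- P(data | hypothesis) for each explicit hypothesis
    """
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    unnormalized.append(default_prior * default_likelihood)  # catch-all term
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Three explicit hypotheses share 0.9 of the prior; 0.1 is reserved for the default.
print(posterior_with_default([0.5, 0.3, 0.1], [0.2, 0.6, 0.9]))
```

The whole question, of course, is what likelihood the catch-all should assign, since by construction we can’t spell it out.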
I think I’ve seen a paper that put forward that kind of approach (I don’t remember enough to find it right now), but yeah, it is hard to see how a “default hypothesis” can be representative enough of all the neglected hypotheses.
Taking a logical-induction approach to the problem, we could say: it is possible to have a principled estimate of the probability which does not simply equal the average probability assigned by all the hypotheses we can explicitly write down, because we can learn adjustment heuristics through experience (such as “probabilities estimated from the explicit hypotheses I can think of to write down tend to be overconfident by about x%”).
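As a toy illustration of that kind of learned adjustment (the function names and the shrink-toward-0.5 form are my own illustrative assumptions; an actual logical inductor learns such corrections in a far more general way):

```python
# Toy sketch of a learned adjustment heuristic: fit a single shrink-toward-0.5
# factor from past (estimate, outcome) pairs, then apply it to new estimates.

def fit_overconfidence(past_estimates, past_outcomes):
    """Grid-search a shrinkage factor in [0, 1] minimizing squared (Brier) loss."""
    best_factor, best_loss = 1.0, float("inf")
    for k in range(101):
        factor = k / 100
        loss = sum((0.5 + factor * (p - 0.5) - y) ** 2
                   for p, y in zip(past_estimates, past_outcomes))
        if loss < best_loss:
            best_factor, best_loss = factor, loss
    return best_factor


def adjust(estimate, factor):
    """Pull a new explicit-hypothesis estimate toward 0.5 by the learned factor."""
    return 0.5 + factor * (estimate - 0.5)


# Past estimates were a bit too extreme relative to outcomes, so the factor
# comes out below 1 and new estimates get nudged toward 0.5.
factor = fit_overconfidence([0.9, 0.9, 0.8, 0.7], [1, 1, 0, 1])
print(factor, adjust(0.9, factor))
```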