As an aspiring scientist, I hold the Truth above all.
That will change!
More seriously though...
As one can see, the biggest problem is determining the burden of proof. Statistically speaking, this is much like the problem of defining the null hypothesis.
Well, not really. The null and alternative hypotheses in frequentist statistics are defined in terms of their model complexity, not our prior beliefs (that would be Bayesian!). Specifically, the null hypothesis represents the model with fewer free parameters.
You might still face some sort of statistical disagreement with the theist, but it would have to be a disagreement over which hypothesis is more/less parsimonious—which is really a rather different argument than what you’ve outlined (and IMO, one that the theist would have a hard time defending).
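To make the parsimony point concrete, here is a minimal sketch in Python (toy data and an intercept-vs-slope comparison I made up purely for illustration, not anything from the God example) of a nested-model comparison, where the role of the null is fixed by parameter count rather than by anyone's prior belief:

```python
# Minimal sketch (hypothetical toy data): in a frequentist nested-model
# comparison, the null is simply the model with fewer free parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 50)   # made-up data

def gaussian_loglik(residuals):
    """Maximized log-likelihood of a Gaussian model with MLE variance."""
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

# Null model: intercept only (one mean parameter) -- the nested subset.
ll_null = gaussian_loglik(y - y.mean())

# Alternative model: intercept + slope (two mean parameters).
slope, intercept = np.polyfit(x, y, 1)
ll_alt = gaussian_loglik(y - (intercept + slope * x))

# Likelihood-ratio test: which model plays "null" is not up for debate;
# it is the one with fewer parameters, and no prior enters anywhere.
lr = 2 * (ll_alt - ll_null)
p_value = stats.chi2.sf(lr, df=1)          # df = difference in parameter count
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```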
I’m not saying that the frequentist statistical belief logic actually goes like that above. What I’m saying is that this is how many people tend to wrongly interpret such statistics, defining their own null hypothesis in the way I outlined in the post.
As I’ve said before, the MOST common problem is not the actual statistics, but how the ignorant interpret those statistics. I am merely saying I would prefer Bayesian statistics to be taught, because it is much harder to botch and to read our own interpretation into. (For one, it is governed by a relatively simple formula.)
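The formula alluded to here is presumably Bayes’ theorem, P(H|E) = P(E|H)P(H) / P(E). A minimal sketch with made-up numbers, showing how the formula puts the prior out in the open rather than hiding it:

```python
# Hedged sketch of Bayes' theorem for a binary hypothesis H; all the
# probabilities below are invented purely for illustration.
def posterior(prior_h, lik_e_given_h, lik_e_given_not_h):
    """Return P(H|E) = P(E|H)P(H) / P(E) via the law of total probability."""
    evidence = lik_e_given_h * prior_h + lik_e_given_not_h * (1 - prior_h)
    return lik_e_given_h * prior_h / evidence

# Two agents with different priors update on the same evidence; the
# formula makes the disagreement traceable to the prior, explicitly.
for prior in (0.1, 0.9):
    print(prior, posterior(prior, lik_e_given_h=0.8, lik_e_given_not_h=0.3))
```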
Also, isn’t model complexity quite hard to determine for the statements “God exists” and “God does not exist”? Isn’t complexity in this sense subject to easy bias?
What I’m saying is that this is how many people tend to wrongly interpret such statistics, defining their own null hypothesis in the way I outlined in the post.
But that’s not right. The problem that your burden of proof example describes is a problem of priors. The theist and the atheist are starting with priors that favor different hypotheses. But priors (notoriously!) don’t enter into the NHST calculus. Given two statistical models, one of which is a nested subset of the other (this is required in order to directly compare them), there is not a choice of which is the null: the null model is the one with fewer parameters (i.e., it is the nested subset). It isn’t up for debate.
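To illustrate (a toy coin-flip example of my own, not anything from the thread): the p-value below is a fixed function of the data and the null model, with no slot for a prior, whereas the Bayesian posterior shifts depending on the prior one starts from:

```python
# Sketch under made-up data: priors don't enter the NHST calculus,
# but they do (by design) enter the Bayesian posterior.
from scipy import stats

heads, flips = 14, 20                      # hypothetical coin-flip data

# Frequentist: p-value for H0: p = 0.5 -- identical for every disputant.
p_value = stats.binomtest(heads, flips, p=0.5).pvalue
print(f"p-value = {p_value:.4f}")

# Bayesian: posterior P(p > 0.5) under two different Beta priors.
for a, b in [(1, 1), (10, 2)]:             # flat prior vs. prior favoring heads
    post = stats.beta(a + heads, b + flips - heads)
    print(f"Beta({a},{b}) prior -> P(p > 0.5 | data) = {post.sf(0.5):.4f}")
```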
There are other problems with NHST—as you point out later in the post, some people have a hard time keeping straight just what the numbers are telling them—but the issue I highlighted above isn’t one of them for me.
Also, isn’t model complexity quite hard to determine for the statements “God exists” and “God does not exist”? Isn’t complexity in this sense subject to easy bias?
Yes. As you noted in your OP, forcing this pair of hypotheses into a strictly statistical framework is awkward no matter how you slice it. Statistical hypotheses ought to be simple empirical statements.