Nah, they’re welcome to use whichever statistics they like. We might point out interpretation errors, though, if they make any.
Under the assumptions I described, a p-value of 0.16 is about 0.99 nats of evidence, which is essentially canceled by the 1-nat prior. A p-value of 0.05 under the same assumptions would be about 1.92 nats of evidence, so if there's a lot of published science that matches those assumptions (which is dubious), then such results are merely weak evidence, not necessarily wrong.
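For concreteness, here's one way to reproduce those numbers. This is a sketch of my guess at the conversion, assuming the evidence is the maximum likelihood ratio for a two-sided z-test, i.e. exp(z²/2), which works out to z²/2 nats; the actual assumptions are the ones described upthread.

```python
from statistics import NormalDist

def nats_of_evidence(p: float) -> float:
    """Evidence in nats for a two-sided p-value.

    Assumes the test statistic is a z-score and the alternative is the
    best-fitting point hypothesis, so the likelihood ratio is exp(z^2/2)
    and the evidence is z^2/2 nats. (My reading of the setup, not a
    definitive reconstruction.)
    """
    z = NormalDist().inv_cdf(1 - p / 2)  # two-sided z-score
    return z * z / 2

print(round(nats_of_evidence(0.16), 2))  # ~0.99 nats
print(round(nats_of_evidence(0.05), 2))  # ~1.92 nats
```

Under this reading, p = 0.16 corresponds to z ≈ 1.41 and p = 0.05 to z ≈ 1.96, giving the 0.99 and 1.92 figures above.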
It’s not the job of the complexity penalty to “prove the null hypothesis is correct”. Proving what’s right and what’s wrong is a job for evidence. The penalty was merely a cheap substitute for an informed prior.