In the course of figuring out what the hell the parent comment was talking about and how one was supposed to do the calculation, I found this. p-values are much clearer for me now, thanks for bringing this up.
Don’t get me wrong, this is a good paper: clearly written and understandable rather than deliberately obtuse like far too many math papers these days, and the author’s heart is clearly in the right place. But I still screamed while reading it.
How can anyone read this and not bang their head against the wall at how horribly arbitrary this all is… no wonder more than half of published findings are false.
Unfortunately, walls solid enough to sustain the force of the bang I wanted to produce were not to be found within a radius of five meters when I was reading it. I did want to bang my head on my desk, though.
The arbitrariness of all the decisions (who decides the cutoff point to reject the null, and on what basis? “Meh, whatever” seems to be the ruling methodology) did strike me as unscientific. Or, well, as un-((Some Term For What I Used To Think “Science” Meant Until I Saw That Most Of It Was About Testing Arbitrary Hypotheses Rather Than Deliberate Cornering Of Facts)) as something actually following the scientific method can get.
I don’t mind the arbitrary cutoff point. That’s like a Bayesian reporting likelihood ratios and leaving the prior up to the reader.
It’s more things like, “And now we’ll multiply all the significances together, and calculate the probability that their product would be equal to or lower than the result, given the null hypothesis” that make me want to scream. Why not take the arithmetic mean of the significances and calculate the probability of that instead, so long as we’re pretending the actual result is part of an arbitrary class of results? It just seems horribly obvious that you get further and further away from what the likelihood ratios are actually telling you as you pile arbitrary test on arbitrary test...
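To make the thing I’m complaining about concrete, here’s a minimal Monte Carlo sketch of that procedure (my own illustration, not anything from the paper; the function name, example p-values, and trial count are invented), assuming k independent p-values that are Uniform(0,1) under the null:

```python
import math
import random

def combined_significance(p_values, trials=200_000):
    """Monte Carlo version of the 'multiply the significances' test:
    take the product of the observed p-values, then estimate how often
    k fresh Uniform(0,1) draws have a product at least as small,
    under the null (independent uniform p-values)."""
    k = len(p_values)
    observed = math.prod(p_values)
    hits = sum(
        math.prod(random.random() for _ in range(k)) <= observed
        for _ in range(trials)
    )
    return hits / trials

print(combined_significance([0.04, 0.30, 0.07]))  # ~0.028 for these made-up inputs
```

The arbitrary part is exactly the choice of the product as the test statistic: swap `math.prod` for a mean and you get a different, equally defensible-looking combined significance from the very same data.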
Also, I found that the function R_k in Section 2 has the slightly-more-closed formula ρ⋅P_k(log(1/ρ)), where P_k(x) is the sum of the first k terms of the Taylor series for e^x (and has the formula with factorials and everything). Just in case anyone wants to try this at home.
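Here’s that closed form in Python for the at-home version (R_k and P_k follow the paper’s Section 2 naming; the code itself is just my sketch):

```python
import math

def R_k(rho, k):
    """rho * P_k(log(1/rho)), where P_k(x) = sum of x**j / j! for j < k,
    i.e. the first k terms of the Taylor series for e**x.

    This should equal P(U_1 * ... * U_k <= rho) for independent
    Uniform(0,1) variables -- the combined significance above."""
    x = math.log(1 / rho)
    return rho * sum(x**j / math.factorial(j) for j in range(k))
```

Sanity checks: R_k(rho, 1) returns rho exactly, as it should (a single uniform p-value is its own significance), and R_k(0.04 * 0.30 * 0.07, 3) comes out around 0.0279, matching the Monte Carlo estimate from the sketch above.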
That is a really interesting paper.