basic stats are pretty confusing given that most seem to rely on the assumption of a normal distribution
If a psych researcher finds “basic stats” confusing, he is not qualified to write a paper which looks at statistical interpretations of whatever results he got. He should either acquire some competency or stop pretending he understands what he is writing.
Many estimates do rely on the assumption of a normal distribution in the sense that these estimates have characteristics (e.g. “unbiased” or “most efficient”) which are mathematically proven in the normal distribution case. If this assumption breaks down, these characteristics are no longer guaranteed. This does not mean that the estimates are now “bad” or useless—in many cases they are still the best you could do given the data.
To give a crude example, 100 is guaranteed to be the biggest number in the [1 .. 100] set of integers. If your set of integers is “from one to about a hundred, more or less”, 100 is no longer guaranteed to be the biggest, but it’s still not a bad estimate of the biggest number in that set.
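Here is the same crude example in runnable form (a minimal sketch in Python; the “more or less” fuzziness is modeled, arbitrarily, as a small random shift of the upper endpoint):

```python
import random

# The set is "integers from one to about a hundred, more or less":
# model the fuzziness (arbitrarily) as the true upper end being 100 plus noise.
random.seed(0)
errors = []
for _ in range(10_000):
    true_max = 100 + random.randint(-5, 5)   # "about a hundred, more or less"
    s = range(1, true_max + 1)
    # Guessing 100 as the biggest element is no longer guaranteed to be right...
    errors.append(abs(max(s) - 100))

# ...but on average the guess is off by only a couple of units.
print(sum(errors) / len(errors))
```

So the guarantee is gone, but the estimate is still close most of the time.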
If a psych researcher finds “basic stats” confusing, he is not qualified to write a paper which looks at statistical interpretations of whatever results he got. He should either acquire some competency or stop pretending he understands what he is writing.
The problem is that psychology and statistics are different skills, and someone who is talented at one may not be talented at the other.
To give a crude example, 100 is guaranteed to be the biggest number in the [1 .. 100] set of integers. If your set of integers is “from one to about a hundred, more or less”, 100 is no longer guaranteed to be the biggest, but it’s still not a bad estimate of the biggest number in that set.
I take your point, but you can no longer say that 100 is the biggest number with 95% confidence, and this is the problem.
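Here is a minimal simulation of that effect (assuming, purely for illustration, strongly skewed lognormal data and the standard t-based 95% interval for the mean; NumPy and SciPy are assumed to be available):

```python
import numpy as np
from scipy import stats

# Nominal 95% t-interval for the mean, applied to skewed (lognormal) data.
# The normality assumption behind the interval is violated, so the actual
# coverage need not be 95%.
rng = np.random.default_rng(0)
true_mean = np.exp(0.5)                 # mean of lognormal(mu=0, sigma=1)
n, trials, covered = 10, 20_000, 0

for _ in range(trials):
    x = rng.lognormal(0.0, 1.0, n)
    lo, hi = stats.t.interval(0.95, n - 1, loc=x.mean(), scale=stats.sem(x))
    covered += lo <= true_mean <= hi

print(covered / trials)                 # noticeably below 0.95 here
```

The interval is still a reasonable summary of the data, but the advertised 95% is no longer earned.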
someone who is talented at one may not be talented at the other.
You don’t need to be talented, you only need to be competent. If you can’t pass even that low bar, maybe you shouldn’t publish papers which use statistics.
you can no longer say that 100 is the biggest number with 95% confidence, and this is the problem.
I don’t see any problem here.
First, 95% is an arbitrary number; it’s pure convention that does not correspond to any joint in the underlying reality.
Second, the t-test does NOT mean what most people think it means. See e.g. this or this, and the small simulation sketched below.
Third, and most important, your certainty level should be entirely determined by the data. If your data does not support 95% confidence, then it does not. Trying to pretend otherwise is fraud.
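On the second point, a minimal sketch of one common misreading (that p < 0.05 means the finding is 95% likely to be real); the base rate of true effects and the effect size below are made-up numbers, chosen only for illustration:

```python
import numpy as np
from scipy import stats

# If only a small fraction of tested hypotheses are actually true, then even
# among results with p < 0.05 a large share are false positives -- so
# "p < 0.05" is not the same as "95% likely to be real".
rng = np.random.default_rng(0)
n, effect, base_rate = 20, 0.5, 0.1      # illustrative values, not empirical
sig_true = sig_false = 0

for _ in range(20_000):
    real = rng.random() < base_rate
    x = rng.normal(effect if real else 0.0, 1.0, n)
    if stats.ttest_1samp(x, 0.0).pvalue < 0.05:
        if real:
            sig_true += 1
        else:
            sig_false += 1

print(sig_false / (sig_true + sig_false))   # far above 5% in this setup
```

The fraction of “significant” results that are false depends on the base rate, about which the p-value by itself says nothing.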