Basically, the way I would explain it, you are right: using a bell curve, and using various techniques to make your data fit it, is stupid.
This derives from two reasons. The first is an artifact: distributions were computation-simplifying mechanisms in the past, even though this is no longer true. More on this here: https://www.lesswrong.com/posts/gea4TBueYq7ZqXyAk/named-distributions-as-artifacts
This is, broadly speaking, the same mistake as using something like pearsonr, instead of an arbitrary estimator (or, even better, 20 of them) with k-fold cross-validation, to determine “correlation” as a function of the predictive power of the best models.
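To make that contrast concrete, here is a minimal sketch (the toy dataset and the particular estimators are my own illustrative choices, not anything from the comment):

```python
# A minimal sketch of the contrast (toy data and estimator choices are my own
# illustrative assumptions): a single linear-correlation number versus the
# cross-validated predictive power of a handful of arbitrary estimators.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x ** 2 + 0.5 * rng.normal(size=500)  # clearly related, but not linearly

# The "straight line" summary: near zero here, despite a strong relationship.
r, _ = pearsonr(x, y)
print(f"pearsonr: {r:.3f}")

# The alternative: how well flexible models predict y from x,
# scored by 5-fold cross-validated R^2; take the best of the lot.
X = x.reshape(-1, 1)
models = {
    "gbm": GradientBoostingRegressor(),
    "rf": RandomForestRegressor(),
    "knn": KNeighborsRegressor(),
}
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
print("cross-validated R^2:", scores)
print(f"best model: {max(scores.values()):.3f}")
```

In this toy example the single correlation number says “nothing to see”, while the cross-validated models recover most of the structure; that gap is what gets lost when “correlation” stands in for predictive power.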
Second, and see an SSC post on this that does the subject better justice (while completely missing the point): we love drawing metaphorical straight lines, and we believe and grant social status to people who do this.
If you were to study intelligence with an endpoint/goal in mind, or with the goal of explaining the world, the standard distribution would be useless. Except for one goal: making your “findings” seem appealing, giving them extra generalizability/authoritativeness that they lack. Normalizing tests and results to fit the bell curve does exactly that.
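For concreteness, here is a rough sketch of what that normalization amounts to (the rank-based inverse normal transform and the mean-100/SD-15 convention are common practice, but this particular toy is my own illustration): whatever shape the raw scores have, the reported scores come out bell-shaped by construction.

```python
# My own illustrative sketch of "normalizing results to fit the bell curve":
# a rank-based inverse normal transform that forces any raw score distribution
# onto a Gaussian with mean 100 and SD 15, regardless of its original shape.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
raw_scores = rng.exponential(scale=10, size=1000)  # heavily skewed raw test scores

# Map each raw score to its percentile rank, then to the matching normal quantile.
percentiles = (rankdata(raw_scores) - 0.5) / len(raw_scores)
normed = 100 + 15 * norm.ppf(percentiles)

print(f"raw: mean={raw_scores.mean():.1f}, skew clearly visible in a histogram")
print(f"normed: mean={normed.mean():.1f}, sd={normed.std():.1f}")  # Gaussian by construction
```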