TGGP: Eliezer referenced the book (the Wikipedia URL behind the “real” link; search for the phrase “Is Idang Alibi about to take a position on the real heart of the uproar?”). I thought everybody followed the links before commenting ;). Anyway, I assume that if something is referenced, its discussion is on topic.
Regarding their data, we can’t just remove the data they fudged; we need to redo the analysis with the original data. We can’t discard data simply because it doesn’t fit our conclusions. Using their raw, unfudged data we are left with a low correlation and many data points far off the fitted curve.
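For illustration, here is roughly what redoing the analysis looks like: fit on the raw values, recompute the correlation, and flag the points far off the curve instead of deleting them. The numbers below are invented stand-ins, not the book’s actual data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-country values -- made up for illustration,
# NOT the actual "IQ and the Wealth of Nations" dataset.
iq  = np.array([85, 92, 98, 100, 101, 104, 96, 88, 107, 90])
gdp = np.array([2.1, 35.0, 4.7, 28.3, 3.2, 31.5, 1.8, 25.6, 38.9, 2.9])  # GDP/capita, $1000s

# Redo the analysis on the raw data: correlation plus a residual check.
r, p = stats.pearsonr(iq, gdp)
slope, intercept, *_ = stats.linregress(iq, gdp)
residuals = gdp - (slope * iq + intercept)

print(f"r = {r:.2f}, p = {p:.3f}")
# Points far outside the fitted curve show up as large residuals;
# they have to be explained, not discarded.
print("large residuals at indices:", np.where(np.abs(residuals) > 2 * residuals.std())[0])
```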
Ditto for any other studies. I’m highly skeptical of sociology and psychology papers because (again, IME) they consistently use very bad statistics. Most assume a Gaussian or Poisson distribution without even showing that the process generating the data has the right properties. The measurement process is highly subjective and there’s no analysis to assess the deviation of individual measurements, so they don’t properly find the actual stddev of their data. If one wants to aggregate studies, one must first show that the measurement process in each study is the same (in the studies mentioned in your “predictive power” link this is false: at least two of the Lynn studies use population samples with different properties, and another couple use different IQ tests); otherwise we are mixing unrelated hypotheses.
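As a concrete example of the distribution complaint: before assuming a Gaussian, one can at least run a goodness-of-fit test on the sample. A minimal sketch with made-up, deliberately skewed scores (using scipy’s Shapiro–Wilk test):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical test scores -- deliberately skewed, i.e. not Gaussian.
scores = rng.exponential(scale=15.0, size=200) + 70

# Shapiro-Wilk tests the null hypothesis that the sample came from a
# Gaussian distribution.
stat, p = stats.shapiro(scores)
if p < 0.05:
    print(f"p = {p:.4f}: normality rejected; a Gaussian model is unjustified")
else:
    print(f"p = {p:.4f}: no evidence against normality (which is not proof of it)")
```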
I’m highly skeptical of IQ measurement, because it’s too subjective. Measuring the same individual over and over across a long interval gives different results, but it shouldn’t. A physicist wouldn’t use a mass-measurement process that depended on subjective factors (e.g. whether the measured object is pretty, or whether the time of measurement is jinxed); in the same way, we shouldn’t use a measure of mental capacity that is highly dependent on stress (which has no objective measurement process) or emotional state. In this situation one of the best approaches would be to take many different measurements for each individual and aggregate them with a Monte Carlo analysis to find the probability of each result (a sketch follows below). We can’t just fudge the data, discard samples we don’t like, and use a subjective methodology; otherwise it isn’t science. When a physicist does an experiment he has a theory in mind, so he either already has an equation or ends up discovering one. The equation must account for all the relevant variables, and the theory must explain why the other variables (e.g. the wind speed in Peking) don’t matter. “IQ and the Wealth of Nations” fails to show that the other factors influencing GDP are irrelevant to the IQ correlation; that alone discredits the results.
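To make the repeated-measurement idea concrete, here is a minimal Monte Carlo sketch with invented scores for a single hypothetical individual (nothing here comes from a real dataset): resample the repeated measurements and report the resulting interval instead of a single number.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical repeated IQ measurements of one individual over a long
# interval -- the spread is the point: the same person scores differently.
measurements = np.array([98, 104, 95, 110, 101, 93, 107, 99])

# Monte Carlo aggregation: resample the measurements many times and look
# at the distribution of the mean, instead of reporting one score.
n_trials = 100_000
samples = rng.choice(measurements, size=(n_trials, len(measurements)), replace=True)
means = samples.mean(axis=1)

lo, hi = np.percentile(means, [2.5, 97.5])
print(f"estimated score: {measurements.mean():.1f}, 95% interval [{lo:.1f}, {hi:.1f}]")
```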
Correlation is the most overused statistical tool. It is useful for exposing patterns, but unless you have a theory that explains the results and makes actual predictions, it is irrelevant as far as the scientific method is concerned. If we ignore this, anything can be “proven”.
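A quick demonstration of how easily bare correlation “proves” things: two independent random walks, which by construction have no relationship at all, routinely show large sample correlations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlate many pairs of completely independent random walks.
big_rs = []
for _ in range(1000):
    x = rng.standard_normal(100).cumsum()
    y = rng.standard_normal(100).cumsum()
    big_rs.append(abs(np.corrcoef(x, y)[0, 1]))

# A large fraction of the pairs correlate "strongly" despite there being
# no relationship whatsoever between the generating processes.
frac = np.mean(np.array(big_rs) > 0.5)
print(f"{frac:.0%} of independent walk pairs have |r| > 0.5")
```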