Do you think IQ has to be a causal factor to be a good predictor/be meaningful?
No, I do not. I think IQ can be a useful predictor for some things (as good as one number can be, really). But that isn’t the story with g, is it? It is claimed to be a causal factor.
If we want to do prediction, let’s just get a ton of features and use that, like they do in machine learning. Why fixate on one number?
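Roughly what I have in mind, as a toy sketch on purely synthetic data (assuming scikit-learn; all numbers made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 1000, 50                               # many measured features, not one summary score
X = rng.normal(size=(n, p))
# Outcome driven by several features at once, as complicated outcomes usually are:
y = (X[:, :10].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

many = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
one = cross_val_score(LogisticRegression(max_iter=1000), X[:, [0]], y, cv=5).mean()
print(f"all {p} features: {many:.2f}  vs  single feature: {one:.2f}")
```

The many-feature model predicts better than any one-number summary of the same data, which is the whole point of the machine-learning approach.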
Also, we know IQ is not a causal factor: IQ is the result of a test (so it’s a consequence, not a cause).
Because it makes sense for many different people to study the same number.
In the last month I talked twice about Gottman. He got couples into his lab and observed them for 15 minutes while measuring all sorts of variables. Afterwards he built a mathematical model and found that it had a 91% success rate in predicting whether newlywed couples would divorce within 10 years.
The problem? The model is likely overfitted. Instead of reusing the model he generated in his first study, he built a new model for the next study that’s also overfitted. If he had instead worked on developing a Gottman metric, other researchers could have studied the same metric and seen what factors correlate with it.
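The failure mode in miniature (synthetic data, nothing to do with Gottman’s actual variables): a flexible model fit on a small sample can hit 90%+ on the couples it was built from while being at chance on new ones.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(57, 20))       # small sample, many measured variables
y_train = rng.integers(0, 2, size=57)     # labels are pure noise by construction
X_test = rng.normal(size=(500, 20))
y_test = rng.integers(0, 2, size=500)

model = DecisionTreeClassifier().fit(X_train, y_train)
print("accuracy on the fitted sample:", model.score(X_train, y_train))  # ~1.0
print("accuracy on new couples:      ", model.score(X_test, y_test))    # ~0.5
```

An impressive in-sample number tells you very little until the same model is tested on data it never saw.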
In the case of IQ, IQ is seen as a robust metric. The EPA did studies to estimate how many IQ points are lost due to mercury pollution. They priced IQ points. Then they compared the dollar value of the IQ points lost to mercury pollution with the cost of filters that reduce mercury pollution.
That strong data-driven case allowed the EPA under Obama to take bold steps to reduce mercury pollution. The Koch brothers didn’t make a fuss about it but paid for the installation of better filters. From their perspective the statistics were robust enough that it didn’t make sense to fight the EPA in the public sphere over mercury regulation backed by a data-driven argument.
The EPA could only do that because IQ isn’t a metric they invented themselves, where someone could claim the EPA simply did p-hacking to make its case.
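The arithmetic behind such a comparison is simple. As a sketch, with entirely hypothetical numbers (not the EPA’s real figures):

```python
# All three numbers below are made-up placeholders for illustration only.
iq_points_lost_per_year = 200_000        # hypothetical population-wide loss to mercury
dollar_value_per_iq_point = 10_000       # hypothetical lifetime-earnings value of one point
filter_cost_per_year = 1_200_000_000     # hypothetical annual cost of better filters

benefit = iq_points_lost_per_year * dollar_value_per_iq_point
print(f"benefit of avoided IQ loss: ${benefit:,}")
print(f"cost of filters:            ${filter_cost_per_year:,}")
print("regulation pays for itself" if benefit > filter_cost_per_year
      else "filters cost more than the avoided loss")
```

The calculation is only persuasive because both sides accept the metric being priced; that’s the point of the whole paragraph above.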
Life is complicated; why restrict ourselves to single-parameter models? Nobody in statistics or machine learning does this, with good reason.
If your argument for single-parameter models has the phrase “unwashed masses” in it, I wouldn’t find it very convincing.
If you are worried about p-hacking, just don’t do p-hacking; don’t lobotomize your model.
The main issue is to have consensus statistics. That reduces the possibilities for clever p-hacking and allows researchers to study how the same metric acts in a variety of different contexts.
If every researcher invents his own metrics, you get things like voodoo neuroscience.
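A toy simulation of why consensus metrics help: give a researcher the freedom to invent 50 candidate metrics and test them all against the same noise, and one will usually come out “significant”. A single pre-agreed metric removes that freedom.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
exposure = rng.normal(size=100)             # e.g. some measured pollutant level
p_values = []
for _ in range(50):                         # 50 researcher-invented metrics
    invented_metric = rng.normal(size=100)  # unrelated to exposure by construction
    r, p = pearsonr(exposure, invented_metric)
    p_values.append(p)
print(f"smallest p-value across 50 invented metrics: {min(p_values):.4f}")
# With 50 shots at pure noise, min(p) < 0.05 about 92% of the time (1 - 0.95**50).
```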
Yeah, I don’t buy it. Lying with statistics and shitty models are completely orthogonal issues. You can lie with shitty models or with good models.
Also the argument “we should use IQ because people lie with statistics” is a very different argument from the one usually made by IQ proponents.
I wouldn’t call the problem “shitty models” but rather models that aren’t tried and tested in many different contexts. We know a lot more about how the IQ model behaves than about a new model of intelligence that a researcher creates for his PhD thesis.
Once you think it’s good to have a single metric for intelligence, because it helps you make arguments about issues like the effect of mercury pollution on intelligence, there are additional arguments for why IQ is a good metric for that purpose.
Single-parameter models for anything complicated are shitty models. Intelligence is complicated. A single-parameter model of intelligence is a shitty model.
Do you think it’s shitty in the sense that what the EPA is doing with it is without basis?
I think I am done repeating myself.