I tend to interpret “Is X real?” more or less as “Is X a part of the best predictive theory of the relevant domain?” This doesn’t require an object/property to be ontologically fundamental, since our best (all things considered) theories of macroscopic domains include reference to macroscopic (non-fundamental) properties.
According to this standard, Atran is arguing that IQ is not real, I think. Temperature would be real (as far as we know), but maybe BMI wouldn’t? I don’t know enough about the relevant science to make that judgment.
Anyway, given my preferred pragmatist way of thinking about ontology, there isn’t much difference between the reality, validity and usefulness of a concept.
I tend to interpret “Is X real?” more or less as “Is X a part of the best predictive theory of the relevant domain?”
It seems excessive to me to define real as a superlative. Isn’t it enough to be part of some good predictive theory? Shalizi explicitly takes this position, but it seems insane to me. He very clearly says that he rejects IQ because he thinks that there is a better model. It’s not that he complains that people are failing to adopt a better model, but that they are failing to develop one. To the extent that Atran means anything, he appears to mean the same thing.
I think the difference between usefulness and validity is that usefulness is a cost-benefit analysis: it also weighs the cost of using the model in a given domain.
Lorentz ether theory is a good predictive theory, but I don’t want to say that ether is real. In general, if there’s a better theory currently available that doesn’t include property X, I’d say we’re justified in rejecting the reality of X.
I do agree that if there’s no better theory currently available, it’s a bit weird to say “I reject the reality of X because I’m sure we’re going to come up with a better theory at some point.” Working with what you have now is good epistemic practice in general. But it is possible that your best current theory is so bad at making predictions that you have no reason to place any substantive degree of confidence in its ontology. In that case, I think it’s probably a good idea to withhold ontological commitment until a better theory comes along.
Again, I don’t know enough about IQ research to judge which, if any, of these scenarios holds in that field.