I believe the best example of ‘fake numbers’ may be the measurement of IQ. The problem with this sort of fake number is that we cannot be certain IQ really captures our intellectual ability, yet people still use it to pass judgment, or even to justify their research, instead of treating it as nothing more than a rough reference.
Fake numbers also seem to prevail in our professional lives, now that technology lets companies quantify people’s labor. The figures may be good estimates, but that kind of numerical fixation affects people’s minds tremendously: the moment the numbers are revealed, they start to control the people being measured. The number does not stay a mere measurement reflecting the phenomena it was gathered from.
By the standard of reproducibility using different methods, IQ is assuredly real; there are many varieties of IQ test, and their results mostly agree.
If a proposed test didn’t agree with the existing ones, it wouldn’t be used as an IQ test.
I’m not certain how true this is. It’s not exactly the same thing, but Dalliard discusses something similar here (see section “Shalizi’s first error”). Specifically, a number of IQ tests have been designed with the intention that they would not produce a positive manifold (which, I think, would at least to some extent imply not agreeing with existing tests). Instead they end up producing a positive manifold and mostly agreeing with existing tests.
Again, this isn’t exactly the same thing, because it’s not that they were intended to produce a single number that disagreed with existing tests, so much as to go beyond the single-number-IQ model. Also, it’s possible that even though they were in some sense designed to disagree with existing tests, they only get used because they instead agree (though for the CAS this appears to be false, at least going by the article, and for some of the others it doesn’t apply). Still, it’s similar enough that I thought it was worth noting.
Coincidentally, I just came across this, probably already well-known and discussed on LW, which includes claims that rationality and intelligence “often have very little to do with each other”, and that “it is as malleable as intelligence and possibly much more so”. So there’s a clearly cognitive skill (rationality) claimed to be distinct from and not closely correlated with g.
Has evidence yet emerged to judge these claims? The article is less than a year old and is about the start of a 3-year project, so perhaps none yet.
If it correlated better with the vast battery of non-IQ-tests which nevertheless exhibit a positive manifold, it’d be used.
Hmm, it’s tricky. If new IQ tests are chosen based on correlations with many existing non-IQ tests, doesn’t that also make it more likely that different IQ tests will agree with each other? And do we see more agreement in practice than we’d expect a priori, given that selection process? (I have no idea and hadn’t thought of this question until now, thanks!)
Thinking about the geometry of that, it seems to me that any two things that correlate positively with all of a “vast battery of non-IQ-tests” are likely to correlate positively with each other, and the more diverse (i.e. less correlated with each other) that battery is, the tighter the constraint it places on the set of things that correlate with all of them.
Furthermore, if it correlates better with that battery than existing IQ tests do, it again seems, just from visualising the geometry, that it will correlate worse with the IQ tests than they do with each other.
But my intuition may be faulty. Do you have a concrete example of the phenomenon?
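(A small numerical sketch of the geometric point above, purely illustrative: the battery, the candidate tests, and every parameter below are invented rather than drawn from real test data. It screens random candidate “tests” by requiring a positive correlation with every member of a loosely correlated battery, then checks how the survivors correlate with one another. In the simplest case the constraint can be made exact: if two tests each correlate at r with the same third variable, positive semi-definiteness of the 3×3 correlation matrix forces their mutual correlation to be at least 2r² − 1, e.g. at least 0.28 when r = 0.8.)

```python
import numpy as np

rng = np.random.default_rng(0)

n_people, n_battery, n_candidates = 5000, 20, 500

# A "diverse" battery: each battery test loads weakly on a shared factor g
# plus lots of idiosyncratic noise, so the battery tests are only loosely
# correlated with one another.
g = rng.standard_normal(n_people)
battery = 0.4 * g[:, None] + rng.standard_normal((n_people, n_battery))

# Candidate tests: random loadings on g (some near zero or negative),
# plus unit noise.
loadings = rng.uniform(-0.5, 1.0, n_candidates)
candidates = loadings * g[:, None] + rng.standard_normal((n_people, n_candidates))

# Correlate every candidate with every battery test in one pass.
corr = np.corrcoef(np.hstack([candidates, battery]), rowvar=False)
cand_vs_battery = corr[:n_candidates, n_candidates:]
cand_vs_cand = corr[:n_candidates, :n_candidates]

# Selection rule under discussion: keep only the candidates that correlate
# positively with the *whole* battery.
kept = (cand_vs_battery > 0.05).all(axis=1)

def mean_offdiag(m):
    """Mean of the off-diagonal (pairwise) correlations."""
    return m[np.triu_indices(len(m), k=1)].mean()

print(f"candidates kept: {kept.sum()} / {n_candidates}")
print(f"mean pairwise correlation, all candidates:  {mean_offdiag(cand_vs_cand):.2f}")
print(f"mean pairwise correlation, kept candidates: {mean_offdiag(cand_vs_cand[np.ix_(kept, kept)]):.2f}")
```

With these arbitrary settings, the kept candidates should on average correlate with one another noticeably more strongly than the unselected pool does, which is the tightening effect described above; the numbers themselves say nothing about real IQ batteries.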
Indeed. To the extent that existing IQ tests are not terrible & crappy (and, y’know, they aren’t), any better IQ test is still going to correlate heavily with the old ones.
If a proposed scale didn’t agree with existing ones, it wouldn’t be used to measure mass.
Of course. Your point?
This means that your comment here isn’t actually evidence that IQ is not measurable.
It wasn’t intended to be. Were you pattern-matching against debates with other people?
(Late reply; knocked out by a cold all week.)
ETA: Having just read Sniffnoy’s comment and the useful article linked there, I have a better appreciation of the context around this issue. Some people have indeed put forward the observation I made as just such an argument.