“Intelligence” seems to consist of multiple different systems, but there are many tasks which recruit several of those systems simultaneously. That said, this doesn’t exclude the possibility of a hierarchy—in some people all of those systems could be working well, in some people all of them could be working badly, and most folks would be somewhere in between. (Which would seem to match the genetic load theory of intelligence.) But of course, this is a partially ordered set rather than a pure hierarchy—different people can have the same overall score, but have different capabilities in various subtasks.
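To make the partial-order point concrete, here is a minimal sketch in Python (the names, subtask labels, and numbers are all made up for illustration): two profiles can tie on the overall score while neither is at least as good as the other on every subtask.

```python
# Minimal sketch with made-up profiles: "better than" as component-wise
# (Pareto) dominance gives only a partial order, not a pure hierarchy.

def dominates(a, b):
    """True if profile a scores at least as high as b on every subtask."""
    return all(x >= y for x, y in zip(a, b))

alice = (130, 95, 110)   # hypothetical verbal, spatial, memory scores
bob   = (100, 125, 110)

print(sum(alice) == sum(bob))   # True: identical overall score
print(dominates(alice, bob))    # False
print(dominates(bob, alice))    # False: the two profiles are incomparable
```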
IQ in childhood is predictive of IQ scores in adulthood, but not completely reliably; adult scores are more stable. There have been many interventions which aimed to increase IQ, but so far none of them has worked out.
IQ is one of the strongest general predictors of life outcomes and work performance… but that “general” means that you can still predict performance on some specific task better via some other variable. Also, IQ is one of the best such predictors, together with conscientiousness, which implies that hard work also matters a lot in life. We also know that e.g. personality type and skills matter when it comes to rationality.
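To illustrate that “general” caveat, here is a toy simulation sketch in Python; the latent factors, loadings, and noise levels (g, s, 0.4, 0.8, and so on) are made-up numbers chosen only to show the pattern, not real effect sizes: a measure of a task’s specific skill can out-predict a measure of the general factor for that one task.

```python
# Toy simulation (arbitrary made-up loadings, not real effect sizes):
# a general-ability measure predicts a specific task less well than a
# measure of that task's own specific skill.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

g = rng.standard_normal(n)                 # latent general factor
s = rng.standard_normal(n)                 # latent task-specific skill

# Hypothetical task performance: loads modestly on g, heavily on s.
task = 0.4 * g + 0.8 * s + 0.3 * rng.standard_normal(n)

iq_test    = 0.9 * g + 0.4 * rng.standard_normal(n)   # noisy measure of g
skill_test = 0.9 * s + 0.4 * rng.standard_normal(n)   # noisy measure of s

print(np.corrcoef(task, iq_test)[0, 1])     # roughly 0.39 in this toy setup
print(np.corrcoef(task, skill_test)[0, 1])  # roughly 0.78: higher for this one task
```

The point is only structural: the general measure still predicts something about every task, but for any single task a sufficiently relevant narrow measure can do better.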
I would suppose that the kinds of people referred to as being at “the level above mine” would be some of those rare types who’ve had the luck of getting a high score on all important variables—a high IQ, high conscientiousness, a naturally curious personality type, high reserves of mental energy, and so on. To what extent these various things are trainable is an open question.
In which case, if IQ is a good and stable predictor, then we are placing high confidence in #1 if we know someone’s IQ. Are IQ or test scores what we commonly base intelligence assessments on?
If we can put high confidence in #1 via testing, can we still put high confidence in it based on a general impression or a conversation, or even on the basis of mysterious evidence? E.g. this quote: “(Interesting question: If I’m not judging Brooks by the goodness of his AI theories, what is it that made him seem smart to me? I don’t remember any stunning epiphanies in his presentation at the Summit. I didn’t talk to him very long in person. He just came across as… formidable, somehow.)”
I mean, I would assume aura judgment is less effective than testing, particularly at discriminating between levels above that of the aura judge, but how much worse isn’t clear to me. I’m particularly suspicious of it because evaluating someone else’s intelligence routinely involves a comparison with myself, and I’m very uncertain I can make those comparisons without bias.
I appreciate your response immensely. I have almost no training in any sort of cognitively focused science, and so my impressions about the constancy of intelligence are largely drawn from my personal experience, which is obviously an enormously impoverished data set. Your explanation and data do offer a compelling reason to believe intelligence corresponds with some fixed aspect of an individual, at least with some reasonable probability.
I can certainly think of exceptions, individuals with triple-digit SAT scores who went on to pursue Ph.D.s, but perhaps that does not mean the model is wrong, as unlikely events do occur. Or perhaps adult IQ doesn’t stabilize until sometime after 25, and so they underwent a large IQ fluctuation in college. Perhaps as I age and spend more time with older people, I’ll become more confident in predicting future intelligence from current intelligence.
Intelligence is generally measured using either explicit IQ tests or performance on tasks which are known to correlate reliably with IQ (such as SAT scores).
I think there was a study somewhere—it might have been discussed on this site, but I couldn’t find it on a quick search—where an audience listened to two people have a conversation, and they knew that one of the people had been allowed to pick a topic that he knew a lot about and the other person didn’t. Despite knowing that, the audience consistently thought that the person who’d been allowed to pick the topic was more intelligent, as he had better things to say about it. That would at least weakly suggest that people aren’t very good at controlling for irrelevant factors when estimating someone’s intelligence.
Found it: http://lesswrong.com/lw/4b/dont_revere_the_bearer_of_good_info/

One of the classic demonstrations of the Fundamental Attribution Error is the ‘quiz study’ of Ross, Amabile, and Steinmetz (1977). In the study, subjects were randomly assigned to either ask or answer questions in quiz show style, and were observed by other subjects who were asked to rate them for competence/knowledge. Even knowing that the assignments were random did not prevent the raters from rating the questioners higher than the answerers. Of course, when we rate individuals highly the affect heuristic comes into play, and if we’re not careful that can lead to a super-happy death spiral of reverence. Students can revere teachers or science popularizers (even devotion to Richard Dawkins can get a bit extreme at his busy web forum) simply because the former only interact with the latter in domains where the students know less. This is certainly a problem with blogging, where the blogger chooses to post in domains of expertise.

If anyone knows what this study is, I’d be very interested to learn more about it, since it sounds like it might be a falsification of my hypothesized http://www.gwern.net/backfire-effect

EDIT: found it by accident, see sibling comment
If we can put high confidence in #1 via testing, can we still put high confidence in it based on a general impression or a conversation, or even on the basis of mysterious evidence? E.g. this quote: “(Interesting question: If I’m not judging Brooks by the goodness of his AI theories, what is it that made him seem smart to me? I don’t remember any stunning epiphanies in his presentation at the Summit. I didn’t talk to him very long in person. He just came across as… formidable, somehow.)”
I don’t think you can. A conversation or ‘general impression’ is going to be based on interpersonal skills and, unless it is a highly technical conversation, mostly on verbal sorts of skills. Asking whether an IQ test would be less reliable than a conversation is a little like asking ‘if we drop the SAT Math section and just use Verbal, is that better than using both the Math and Verbal sections?’ No one item loads very heavily on g, which is why IQ tests typically have a bunch of subtests.
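To see why batteries of subtests help, here is a minimal simulation sketch in Python; the uniform 0.5 loading and the ten subtests are made-up numbers for illustration: each individual subtest tracks g only moderately, but the composite of the whole battery tracks it much more closely.

```python
# Minimal sketch (made-up loading of 0.5 per subtest): no single subtest
# measures g very well, but averaging many of them does.
import numpy as np

rng = np.random.default_rng(0)
n, k = 100_000, 10                       # simulated people, subtests

g = rng.standard_normal(n)               # latent general factor
loading = 0.5                            # hypothetical g-loading of each subtest
noise = rng.standard_normal((n, k))
subtests = loading * g[:, None] + np.sqrt(1 - loading**2) * noise
composite = subtests.mean(axis=1)        # full-battery score

print(np.corrcoef(subtests[:, 0], g)[0, 1])  # about 0.5: any one subtest alone
print(np.corrcoef(composite, g)[0, 1])       # about 0.88: the battery as a whole
```

This is just the usual aggregation logic: each subtest’s non-g variance mostly averages out across the battery.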