First, I’d like to make sure you understand that I’m trying to explicate a hypothesis which seems to me like it could be true or false, but which seems to be considered “almost certainly false” in this community. I’m arguing for wider error bars on this subject, not a reversal of position, and also suggesting that a different set of conceptual tools (more focused on the world and less focused on “generic cognitive efficacy”) is relevant.
Second: yes, that is somewhat closer to the point of my objection, and it also includes a wonderfully specific prediction which I suspect is false.
So to me, belief in a general intelligence that could give AIs an advantage is just the antiprediction that the things that kept being true up until about IQ 100 still continue to be true after that bar.
My current leading hypothesis is that this is false in two ways, although one of those ways might be a contingent fact about the nature of the world at the present time.
Keep in mind that the studies showing IQ to be correlated with adaptive life outcomes (income, longevity, and so on) are mostly based on the middle of the curve. It appears to simply be more helpful to have an IQ of 110 instead of 90, and there are lots of such people on whom to run the stats. The upper edge is harder to study for lack of data, but that’s what we’re trying to make inferences about. I suspect that either of us could be shown to be in error here by a good solid empirical investigation in the future.
Given that limitation, my current median expectation, based primarily on summaries of a reanalysis of the Terman Study, is that above about 135 for men (and 125 for women), high IQ tends to contingently lead to social dysfunction due to loneliness and greater potential for the development of misanthropy. Basically it seems to produce difficulties “playing well with others” rather than superior performance from within an integrated social network, simply because there are so many less intelligent people functioning as an isolating buffer, incapable of understanding things that seem obvious to the high IQ person. This is a contingent problem in the sense that if dumb people were all “upgraded” to equivalent levels of functioning then a lot of the problem would go away and you might then see people with an IQ of 160 not having these problems.
(For the record, so far as I can tell I’m not one of the super-brains… I just have sympathy for them, because the people I’ve met who are in this range seem to have hard lives. One of the things that makes their lives hard is that most people can’t tell them apart from people like me who are dancing on the edge of this zone.)
The second reason high IQ may not be very useful is much deeper, and turns on considerations similar to the value of information. Simply put, “IQ” can be glossed as “the speed with which useful mindware and information can be acquired and deployed”, and there may be diminishing returns in mindware just as there are diminishing returns in simpler information. Quoting Grady Towers quoting Hollingworth:
A second adjustment problem faced by all gifted persons is due to their uncommon versatility. Hollingworth says:
Another problem of development with reference to occupation grows out of the versatility of these children. So far from being one-sided in ability and interest, they are typically capable of so many different kinds of success that they may have difficulty in confining themselves to a reasonable number of enterprises. Some of them are lost to usefulness through spreading their available time and energy over such a wide array of projects that nothing can be finished or done perfectly. After all, time and space are limited for the gifted as for others, and the life-span is probably not much longer for them than for others. A choice must be made among the numerous possibilities, since modern life calls for specialization [3, p. 259].
In your comment you wrote:
Just as we expect a human to be able to beat a dog at chess (even if we could get the dog to move pieces with its nose or something), and we would use the word “intelligence” to explain why, so I would expect Omega to be able to beat a human for the same reason.
Chess is a beautiful example, because it is a full-information, deterministic, zero-sum game, which means there “exists” (i.e. there mathematically exists) a way for both sides to play perfectly. The final state of the game that results from perfect play is just a mathematical fact about which we are currently ignorant: it will either be a win for white, a win for black, or a tie. Checkers has been weakly solved, and with perfect play it is a tie. If it’s ever fully solved, then a person with an internet connection, some google-fu, and trivial system-admin and software-usage skills would be able to tie Omega. It’s not a fact about my brain that I would be able to tie Omega that way; it’s a fact about checkers. That’s just how checkers is. Perhaps they could even use Anki and some structured practice to internalize the checkers solution so that they could just tie Omega directly.
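The existence claim here can be made concrete with a few lines of code. For any finite full-information, deterministic, zero-sum game, the game value is computable in principle by exhaustive minimax. The game below is a toy (single-pile Nim, not checkers), chosen only because it is small enough to solve instantly; the same logic applies, at vastly greater cost, to checkers or chess:

```python
# The "value" of a finite full-information, deterministic, zero-sum game is a
# mathematical fact, computable in principle by exhaustive minimax.
# Toy example: single-pile Nim. Players alternately remove 1 or 2 stones;
# whoever takes the last stone wins.

from functools import lru_cache

@lru_cache(maxsize=None)
def value(stones):
    """+1 if the player to move wins with perfect play, -1 otherwise."""
    if stones == 0:
        return -1  # the previous player took the last stone; the mover has lost
    # The mover picks whichever move leaves the opponent worst off.
    return max(-value(stones - take) for take in (1, 2) if take <= stones)

# The outcome under perfect play is a fact about the game, not about the player:
print([value(n) for n in range(1, 8)])  # -> [1, 1, -1, 1, 1, -1, 1]
```

Multiples of three are losses for the player to move; once you know that rule, you can “tie Omega” at this game, which is the sense in which the solution is a fact about the game rather than about anyone’s brain.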
So what if a given occupation, or more broadly “dealing with reality in general”, is similar to chess in this respect? What if reality admits of something like “perfect play”, and perfect play turns out to not be all that complicated? A bit of tit-for-tat, some operations research, a 3D physics simulator for manual dexterity, and so on through various skills: a finite list of basically prosaic knowledge and mindware. It is almost certain that a teachable version of such a strategy has not been developed and delivered to kids in modern public schools, and even a pedagogically optimized version of it might not fit in our heads without some way of augmenting our brains to a greater or lesser extent.
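As an illustration of how prosaic such components can be, here is tit-for-tat itself — the entire strategy is one line — in a minimal iterated prisoner’s dilemma harness. The payoff numbers are the standard textbook values, and the harness is a sketch of the idea, not a claim about real-world payoffs:

```python
# Tit-for-tat in an iterated prisoner's dilemma. Standard payoffs:
# mutual cooperation 3/3, mutual defection 1/1, sucker 0 vs. temptation 5.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's last move.
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # -> (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # -> (9, 14): loses only the first round
```

A strategy this simple is famously robust in its niche, which is the point: the components of “good play” need not be exotic.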
The fact that a bright person can master a profession swiftly enough to get bored and switch to some other profession may indicate that humans were not incredibly far from this state already.
I’m not saying there’s nothing to IQ/intelligence/whatever. I’m just saying that the really interesting thing may be “what optimal play looks like”, and then you only need enough mindware loading and deploying ability to learn it and apply it. If this is the case, and everyone is obsessing over “learning and deployment speed” while we’re not actually talking much about what the optimal strategy looks like (even though we don’t have it nailed down yet), then that seems to me like an important thing to be aware of. Like maybe really important.
And practically speaking, the answer seems like it might not be found by studying brains or algorithms. My tendency (and I might be off track here) is to look for the answer somewhere in the shape of the world itself. Does it admit of optimal play or not? Can we put bounds on a given strategy we actually have at hand, to say that this strategy is at most X away from optimal?
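For what it’s worth, game theory already has one narrow formalization of “X away from optimal”: exploitability, the amount a best-responding opponent can win off a strategy relative to the game’s value. The sketch below measures it for rock-paper-scissors only — a toy zero-sum matrix game, nothing like “reality in general” — but it shows the shape such a bound takes where it is well-defined:

```python
# Exploitability of a mixed strategy in a zero-sum matrix game: how much a
# best-responding opponent gains against it, relative to the game's value.
# Toy game: rock-paper-scissors (value 0). Rows/cols: rock, paper, scissors.

A = [[0, -1,  1],   # payoff to the row player
     [1,  0, -1],
     [-1, 1,  0]]

def exploitability(strategy):
    """Worst-case expected loss of a mixed row strategy against a
    best-responding column player. (RPS has value 0, so this is just
    the negated minimum expected column payoff.)"""
    expected = [sum(strategy[i] * A[i][j] for i in range(3)) for j in range(3)]
    return -min(expected)

print(abs(exploitability([1/3, 1/3, 1/3])))  # -> 0.0: uniform play is optimal
print(exploitability([0.5, 0.25, 0.25]))     # -> 0.25: rock-heavy play is exploitable
```

Whether anything analogous can be defined for open-ended real-world strategies is exactly the open question here.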
And more generally but more personally, my biggest fear for the singularity is that “world bots” (analogous to “chess bots”) won’t actually be that hard to develop, and they’ll win against humans because we don’t execute very well and we keep dying and having to re-learn the boring basics over and over every generation, and that will be that. No glorious mind children. No flowering of art and soulfulness as humans are eventually outcompeted by things of vastly greater spiritual and mental depth. Just unreflective algorithms grinding out a sort of “optimal buildout strategy” in a silent and mindless universe. Forever.
That’s my current default vision for the singularity, and it’s why I’m still hanging out on this website. If we can get something humanly better than that, even if it slows down the buildout, then that would be good. So far, this website seems like the place where I’d meet people who want to do that.
If someone knows of a better place for such work, please PM me. I see XiXiDu as paying attention to the larger game as well… and getting downvoted for it… and I find this a little bit distressing… and so I’m writing about it here in the hopes of either learning (or teaching) something useful :-)
Which re-analysis was that? The material I am aware of shows that income continues to increase with IQ as high as the scale goes, which certainly doesn’t sound like dysfunction; e.g. “‘The Effects of Education, Personality, and IQ on Earnings of High-Ability Men’, Gensowski et al 2011” (similar to SMPY results). And from “Rethinking Giftedness and Gifted Education: A Proposed Direction Forward Based on Psychological Science”, which is very germane to this discussion:
Some subscribe to the ability-threshold/creativity hypothesis, which postulates that the likelihood of producing something creative increases with intelligence up to about an IQ of 120, beyond which further increments in IQ do not significantly augment one’s chances for creative accomplishment (Dai, 2010; Lubart, 2003). There are several research findings that refute the ability-threshold/creativity hypothesis. In a series of studies, Lubinski and colleagues (Park et al., 2007, 2008; Robertson et al., 2010; Wai et al., 2005) showed that creative accomplishments in academic (degrees obtained), vocational (careers), and scientific (patents) arenas are predicted by differences in ability. These researchers argue that previous studies have not found a relationship between cognitive ability and creative accomplishments for several reasons. First, measures of ability and outcome criteria did not have high enough ceilings to capture variation in the upper tail of the distribution; and second, the time frame was not long enough to detect indices of more matured talent, such as the acquisition of a patent (Park et al., 2007).
Dai, D. Y. (2010). The nature and nurture of giftedness: A new framework for understanding gifted education. New York, NY: Teachers College Press.
Lubart, T. I. (2003). In search of creative intelligence. In R. J. Sternberg, J. Lautrey, & T. I. Lubart (Eds.), Models of intelligence: International perspectives (pp. 279–292). Washington, DC: American Psychological Association.
Park, G., Lubinski, D., & Benbow, C. P. (2007). Contrasting intellectual patterns predict creativity in the arts and sciences: Tracking intellectually precocious youth over 25 years. Psychological Science, 18, 948–952. doi:10.1111/j.1467-9280.2007.02007.x
Park, G., Lubinski, D., & Benbow, C. P. (2008). Ability differences among people who have commensurate degrees matter for scientific creativity. Psychological Science, 19, 957–961. doi:10.1111/j.1467-9280.2008.02182.x
Robertson, K. F., Smeets, S., Lubinski, D., & Benbow, C. P. (2010). Beyond the threshold hypothesis: Even among the gifted and top math/science graduate students, cognitive abilities, vocational interests, and lifestyle preferences matter for career choice, performance, and persistence. Current Directions in Psychological Science, 19, 346–351. doi:10.1177/0963721410391442
Wai, J., Lubinski, D., & Benbow, C. P. (2005). Creativity and occupational accomplishments among intellectually precocious youths: An age 13 to age 33 longitudinal study. Journal of Educational Psychology, 97, 484–492. doi:10.1037/0022-0663.97.3.484
The re-analysis was by Grady Towers, with quoting and semi-philosophic speculation, as linked before. I suggested that increasing IQ might not be very useful, with the first human issue being a social contingency that your citations don’t really seem to address, because patents and money don’t necessarily make people happy or socially integrated.
The links are cool and I appreciate them, and they do push against the second (deeper) issue about possible diminishing marginal utility of mindware for optimizing within the actual world. But the point I was directly responding to was a mindset that produced almost-certainly-false predictions about chess outcomes. The reason I even brought up the social contingencies and human mindware angles is that I didn’t want to “win an argument” on the chess point and have it be a cheap shot that doesn’t mean anything in practice. I was trying to show the directions in which it would be reasonable to propagate the update if someone were really surprised by the chess result.
I didn’t say humans are at the optimum, just that we’re close enough to the optimum that we can give Omega a run for its money in toy domains, and we may be somewhat close to Omega in real world domains. Give it 30 to 300 years? Very smart people being better than smart people at patentable invention right now is roughly consistent with my broader claim. What I’m talking about is that very smart people aren’t as dominating over merely smart people as you might expect if you model human intelligence as a generic-halo-of-winning-ness, rather than modeling human intelligence as a slightly larger and more flexible working memory and “cerebral” personal interests that lead to the steady accumulation of more and “better” culture.