Specifically, I think you might be missing the halo effect, the fundamental attribution error, survivorship bias, and strategic signalling to gain access to power, influence, and money.
What is the nature of the property that the general would have a 93% chance of having? Is it a property you’d hypothesize was shared by about 7% of all humans in history? Is it shared by 7% of extant generals? What if the internal details of the property you hypothesize is being revealed are such that no general actually has it, even though some general always wins each battle? How would you distinguish between these outcomes? How many real full scale battles are necessary and how expensive are they to run to push P(at least one general has the trait) and P(a specific general has the trait|at least one general has the trait) close to 1 or 0?
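To make the evidence question concrete, here is a toy Bayesian sketch (mine, with made-up numbers: a 0.93 per-battle win rate if the trait is real, coin-flip battles otherwise, and a skeptical 7% prior) of how many observed battles it takes to move the posterior on “this particular general has the trait” appreciably:

```python
# Illustrative only: the win rates and prior below are assumptions, not claims.
def posterior_trait(wins, losses,
                    p_trait=0.93,   # assumed per-battle win rate if the trait is real
                    p_null=0.5,     # assumed per-battle win rate if battles are coin flips
                    prior=0.07):    # assumed skeptical prior that this general has the trait
    """P(this general has the trait | observed record), by Bayes' rule."""
    like_trait = (p_trait ** wins) * ((1 - p_trait) ** losses)
    like_null = (p_null ** wins) * ((1 - p_null) ** losses)
    evidence = prior * like_trait + (1 - prior) * like_null
    return prior * like_trait / evidence

for record in [(3, 0), (5, 0), (10, 1), (20, 2)]:
    print(record, round(posterior_trait(*record), 3))
```

Even on these generous assumptions it takes a fair number of full-scale battles to separate “has the trait” from “got lucky”, which is exactly the cost question above.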
XiXiDu titled his article “The Futility Of Intelligence”. What I’m proposing is something more like “The Use And Abuse Of Appearances Of General Intelligence, And What Remains Of The Theory Of General Intelligence After Subtracting Out This Noise”. I think that there is something left, but I suspect it isn’t as magically powerful or generic as is sometimes assumed, especially around these parts. You have discussed similar themes in the past in less mechanistic and more personal, friendly, humanized, and generally better written forms :-)
This point is consonant with ryjm’s sibling comment but if my suspicions stand then the implications are not simply “subtle and not incredibly useful” but have concrete personal implications (it suggests studying important domains is more important than studying abstractions about how to study, unless abstraction+domain is faster to acquire than the domain itself, and abstraction+abstraction+domain faces similar constraints (which is again not a particularly original insight)). The same suspicion has application to political discourse and dynamics where it suggests that claims of generic capacity are frequently false, except when precise mechanisms are spelled out, as with market pricing as a reasonably robust method for coordinating complex behaviors to achieve outcomes no individual could achieve on their own.
A roughly analogous issue comes up in the selection of “actively managed” investment funds. All of them charge something for their cognitive labor and some of them actually add value thereby, but a lot of it is just survivorship bias and investor gullibility. “Past performance is no guarantee of future results.” Companies in that industry will regularly create new investment funds, run them for a while, and put the “funds that have survived with the best results so far” on their investment brochures while keeping their other investment funds in the background where stinkers can be quietly culled. It’s a good trick for extracting rent from marks, but it’s not the sort of thing that would be done if there were solid and simple evidence of a “real” alpha that investors could pay attention to as a useful and generic predictor of future success without knowing much about the context.
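A small simulation (purely illustrative; every fund below is pure noise with no skill anywhere) shows how the brochure trick works: cull the losers and the survivors’ track records look like alpha anyway.

```python
# Illustrative only: fund count, horizon, and return parameters are made up.
import random

random.seed(0)
N_FUNDS, N_YEARS = 200, 5
ANNUAL_MEAN, ANNUAL_SD = 0.07, 0.15   # same noisy "market" return for every fund, zero skill

def track_record():
    return [random.gauss(ANNUAL_MEAN, ANNUAL_SD) for _ in range(N_YEARS)]

def mean(returns):
    return sum(returns) / len(returns)

funds = [track_record() for _ in range(N_FUNDS)]
brochure = sorted(funds, key=mean, reverse=True)[:5]   # the survivors that get advertised

print("true expected annual return:", ANNUAL_MEAN)
print("advertised funds' average annual returns:", [round(mean(r), 3) for r in brochure])
```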
I have a strong suspicion, and I’d love this hunch to be proved wrong, that there’s mostly no free lunches when it comes to epistemology. Being smart about one investment regime is not the same as being smart about another investment regime. Being a general and playing chess have relatively little cross-applicable knowledge. Being good at chess has relatively little in common with being good at the abstractions of game theory.
With this claim (which I’m not entirely sure of because it’s very abstract and hard to ground in observables) I’m not saying that AGI that implements something like “general learning ability in silicon and steel” wouldn’t be amazing or socially transformative, and I’m not saying that extreme rationality is worthless; it’s more that I’m claiming it’s not magic, with a sub-claim that sometimes some people seem to speak (and act?) as though they think it might be magic. Like they can hand-wave the details because they’ve posited “being smarter” as an ontologically basic property rather than as a summary for having nailed down many details in a functional whole. If you adopt an implementation perspective, then the summary evaporates because the details are what remain before you to manipulate.
So I’m interpreting your point as being “What if what we think of when we say ‘general intelligence’ isn’t really all that useful in different domains, but we keep treating it as if it were the kind of thing that could constantly win battles or conquer Rome or whatever?” Perhaps then it was a mistake to talk about generals in battle, as your theory is that there may be an especially victorious general, but his fortune may be due more to some specific skill at tactics than his general intelligence?
I guess my belief in the utility of general intelligence (you cited an article of mine arguing against huge gains from technical rationality, which I consider very different; here I’m talking about pure IQ) would come from a comparison with subnormal intelligence. A dog would make a terrible general. To decreasing degrees, so too would a chimp, a five year old child, a person with Down’s Syndrome, and most likely a healthy person with an IQ of 75. These animals and people would also, more likely than not, be terrible chess players, mathematicians, writers, politicians, and chefs.
This is true regardless of domain-specific training: you can read von Clausewitz’s On War to a dog and it will just sit there, wagging its tail. You can read it to a person with IQ 75, and most of the more complicated concepts will be lost. Maybe reading On War would allow a person with a few dozen IQ point handicap to win, but it’s not going to make a difference across a gulf the size of the one between dogs and humans.
Humans certainly didn’t evolve a separate chess playing module, or a separate submarine tactics module, so we attribute our being able to wipe the floor with dogs and apes in chess or submarine warfare to some kind of “high general intelligence” we have and they don’t.
So to me, belief in a general intelligence that could give AIs an advantage is just the antiprediction that the things that kept being true up until about IQ 100 still continue to be true after that bar. Just as we expect a human to be able to beat a dog at chess (even if we could get the dog to move pieces with its nose or something), and we would use the word “intelligence” to explain why, so I would expect Omega to be able to beat a human for the same reason.
Is that a little closer to the point of your objection?
First, I’d like to make sure that you understand I’m trying to explicate a hypothesis that seems to me like it could be true or false, but that seems to be considered “almost certainly false” in this community. I’m arguing for wider error bars on this subject, not a reversal of position, and also suggesting that a different set of conceptual tools (more focused on the world and less focused on “generic cognitive efficacy”) is relevant.
Second: yes that is somewhat closer to the point of my objection and it also includes a wonderfully specific prediction which I suspect is false.
So to me, belief in a general intelligence that could give AIs an advantage is just the antiprediction that the things that kept being true up until about IQ 100 still continue to be true after that bar.
My current leading hypothesis here is that this is false in two ways, although one of those ways might be a contingent fact about the nature of the world at the present time.
Keep in mind that the studies that show IQ to be correlated with adaptive life outcomes (like income and longevity and so on) are mostly based on the middle of the curve. It appears to just be more helpful for people to have an IQ of 110 instead of 90 and there are lots of such people to run the stats to determine this. The upper edge is harder to study for lack of data but that’s what we’re trying to make inferences about. I suspect that either of us could be shown to be in error here by a good solid empirical investigation in the future.
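As a purely illustrative sketch of why the midrange data may not settle this: suppose (an assumption, not a finding) that the true IQ-to-outcome relationship flattened somewhere above the midrange. A fit to the middle of the curve would still look strong while saying almost nothing about the far tail.

```python
# Illustrative only: the flattening-at-120 relationship below is an assumption for the sketch.
import random

random.seed(1)

def toy_outcome(iq):
    # assumed toy relationship: linear gains up to IQ 120, flat above that, plus noise
    return min(iq, 120) + random.gauss(0, 5)

midrange = [(iq, toy_outcome(iq)) for iq in range(85, 116)]
upper_tail = [(iq, toy_outcome(iq)) for iq in range(135, 161)]

def ols_slope(pairs):
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    den = sum((x - mean_x) ** 2 for x, _ in pairs)
    return num / den

print("slope fit on IQ 85-115:", round(ols_slope(midrange), 2))    # close to 1 in this toy model
print("slope fit on IQ 135-160:", round(ols_slope(upper_tail), 2))  # close to 0 in this toy model
```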
Given that limitation, my current median expectation, based primarily on summaries of a reanalysis of the Terman Study, is that above about 135 for men (and 125 for women), high IQ tends to contingently lead to social dysfunction due to loneliness and greater potential for the development of misanthropy. Basically it seems to produce difficulties “playing well with others” rather than superior performance from within an integrated social network, simply because there are so many less intelligent people functioning as an isolating buffer, incapable of understanding things that seem obvious to the high IQ person. This is a contingent problem in the sense that if dumb people were all “upgraded” to equivalent levels of functioning then a lot of the problem would go away and you might then see people with an IQ of 160 not having these problems.
(For the record, so far as I can tell I’m not one of the super-brains… I just have sympathy for them, because the people I’ve met who are in this range seem to have hard lives. One of the things that makes their lives hard is that most people can’t tell them apart from people like me who are dancing on the edge of this zone.)
The second reason high IQ may not be very useful is much deeper and follows on issues similar to the concept of the value of information. Simply put, “IQ” can be glossed as “the speed with which useful mindware and information can be acquired and deployed”, and there may be diminishing returns in mindware just as there are diminishing returns in simpler information. Quoting Grady Towers quoting Hollingworth:
A second adjustment problem faced by all gifted persons is due to their uncommon versatility. Hollingworth says:
Another problem of development with reference to occupation grows out of the versatility of these children. So far from being one-sided in ability and interest, they are typically capable of so many different kinds of success that they may have difficulty in confining themselves to a reasonable number of enterprises. Some of them are lost to usefulness through spreading their available time and energy over such a wide array of projects that nothing can be finished or done perfectly. After all, time and space are limited for the gifted as for others, and the life-span is probably not much longer for them than for others. A choice must be made among the numerous possibilities, since modern life calls for specialization [3, p. 259].
In your comment you wrote:
Just as we expect a human to be able to beat a dog at chess (even if we could get the dog to move pieces with its nose or something), and we would use the word “intelligence” to explain why, so I would expect Omega to be able to beat a human for the same reason.
Chess is a beautiful example, because it is a full-information deterministic zero-sum game, which means there “exists” (i.e. there mathematically exists) a way for both sides to play perfectly. The final state of the game that results from perfect play is just a mathematical fact about which we are currently ignorant: it will either be a win for white, a win for black, or a tie. Checkers has been weakly solved and, with perfect play, it is a tie. If it’s ever fully solved then a person with an internet connection, some google-fu, and trivial system admin and software usage skills would be able to tie Omega. It’s not a fact about my brain that I would be able to tie Omega that way, it’s a fact about checkers. That’s just how checkers is. Perhaps they could even use Anki and some structured practice to internalize the checkers solution so that they could just tie Omega directly.
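As a toy stand-in (tic-tac-toe rather than checkers, since it fits in a few lines), exhaustive minimax shows that the value of such a game under perfect play, a draw in this case, is a fact about the game rather than about whoever is moving the pieces:

```python
# Illustrative sketch: solve tic-tac-toe exactly; the game value is a property of the game.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if X wins, -1 if O wins, 0 for a draw, assuming perfect play from this position."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    moves = [board[:i] + player + board[i + 1:] for i, c in enumerate(board) if c == "."]
    opponent = "O" if player == "X" else "X"
    child_values = [value(move, opponent) for move in moves]
    return max(child_values) if player == "X" else min(child_values)

print(value("." * 9, "X"))   # 0: with perfect play from the empty board, the game is a draw
```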
So what if a given occupation, or more broadly “dealing with reality in general”, is similar to chess in this respect? What if reality admits of something like “perfect play” and perfect play turns out to not be all that complicated? A bit of tit-for-tat, some operations research, a 3D physics simulator for manual dexterity, and so on with various skills, but a finite list of basically prosaic knowledge and mindware. It is almost certain that a teachable version of such a strategy has not been developed and delivered to kids in modern public schools, and even a pedagogically optimized version of it might not fit in our heads without some way of augmenting our brains to a greater or lesser extent.
The fact that a bright person can master a profession swiftly enough to get bored and switch to some other profession may indicate that humans were not incredibly far from this state already.
I’m not saying there’s nothing to IQ/intelligence/whatever. I’m just saying that it may be the case that the really interesting thing is “what optimal play looks like” and then you only need enough mindware loading and deploying ability to learn it and apply it. If this is the case, and everyone is obsessing over “learning and deployment speed”, and we’re not actually talking much about what optimal strategy looks like even though we don’t have it nailed down yet, then that seems to me like it would be an important thing to be aware of. Like maybe really important.
And practically speaking, the answer seems like it might not be found by studying brains or algorithms. My tendency (and I might be off track here) is to look for the answer somewhere in the shape of the world itself. Does it admit of optimal play or not? Can we put bounds on a given strategy we actually have at hand to say that this strategy is X far away from optimal?
And more generally but more personally, my biggest fear for the singularity is that “world bots” (analogous to “chess bots”) won’t actually be that hard to develop, and they’ll win against humans because we don’t execute very well and we keep dying and having to re-learn the boring basics over and over every generation, and that will be that. No glorious mind children. No flowering of art and soulfulness as humans are eventually outcompeted by things of vastly greater spiritual and mental depth. Just unreflective algorithms grinding out a sort of “optimal buildout strategy” in a silent and mindless universe. Forever.
That’s my current default vision for the singularity, and it’s why I’m still hanging out on this website. If we can get something humanly better than that, even if it slows down the buildout, then that would be good. So far, this website seems like the place where I’d meet people who want to do that.
If someone knows of a better place for such work please PM me. I see XiXiDu as paying attention to the larger game as well… and getting downvoted for it… and I find this a little bit distressing… and so I’m writing about it here in the hopes of either learning (or teaching) something useful :-)
Given that limitation, my current median expectation, based primarily on summaries of a reanalysis of the Terman Study, is that above about 135 for men (and 125 for women), high IQ tends to contingently lead to social dysfunction due to loneliness and greater potential for the development of misanthropy. Basically it seems to produce difficulties “playing well with others” rather than superior performance from within an integrated social network, simply because there are so many less intelligent people functioning as an isolating buffer, incapable of understanding things that seem obvious to the high IQ person. This is a contingent problem in the sense that if dumb people were all “upgraded” to equivalent levels of functioning then a lot of the problem would go away and you might then see people with an IQ of 160 not having these problems.
Which re-analysis was that? The material I am aware of shows that income continues to increase with IQ as high as the scale goes, which certainly doesn’t sound like dysfunction; e.g. “The Effects of Education, Personality, and IQ on Earnings of High-Ability Men”, Gensowski et al 2011 (similar to SMPY results). And from “Rethinking Giftedness and Gifted Education: A Proposed Direction Forward Based on Psychological Science”, which is very germane to this discussion:
Some subscribe to the ability-threshold/creativity hypothesis, which postulates that the likelihood of producing something creative increases with intelligence up to about an IQ of 120, beyond which further increments in IQ do not significantly augment one’s chances for creative accomplishment (Dai, 2010; Lubart, 2003). There are several research findings that refute the ability-threshold/creativity hypothesis. In a series of studies, Lubinski and colleagues (Park et al., 2007, 2008; Robertson et al., 2010; Wai et al., 2005) showed that creative accomplishments in academic (degrees obtained), vocational (careers), and scientific (patents) arenas are predicted by differences in ability. These researchers argue that previous studies have not found a relationship between cognitive ability and creative accomplishments for several reasons. First, measures of ability and outcome criteria did not have high enough ceilings to capture variation in the upper tail of the distribution; and second, the time frame was not long enough to detect indices of more matured talent, such as the acquisition of a patent (Park et al., 2007).
Dai, D. Y. (2010). The nature and nurture of giftedness: A new framework for understanding gifted education. New York, NY: Teachers College Press.
Lubart, T. I. (2003). In search of creative intelligence. In R. J. Sternberg, J. Lautrey, & T. I. Lubart (Eds.), Models of intelligence: International perspectives (pp. 279–292). Washington, DC: American Psychological Association.
Park, G., Lubinski, D., & Benbow, C. P. (2007). Contrasting intellectual patterns predict creativity in the arts and sciences: Tracking intellectually precocious youth over 25 years. Psychological Science, 18, 948–952. doi:10.1111/j.1467-9280.2007.02007.x
Park, G., Lubinski, D., & Benbow, C. P. (2008). Ability differences among people who have commensurate degrees matter for scientific creativity. Psychological Science, 19, 957–961. doi:10.1111/j.1467-9280.2008.02182.x
Robertson, K. F., Smeets, S., Lubinski, D., & Benbow, C. P. (2010). Beyond the threshold hypothesis: Even among the gifted and top math/science graduate students, cognitive abilities, vocational interests, and lifestyle preferences matter for career choice, performance, and persistence. Current Directions in Psychological Science, 19, 346–351. doi:10.1177/0963721410391442
Wai, J., Lubinski, D., & Benbow, C. P. (2005). Creativity and occupational accomplishments among intellectually precocious youths: An age 13 to age 33 longitudinal study. Journal of Educational Psychology, 97, 484–492. doi:10.1037/0022-0663.97.3.484
The re-analysis was by Grady Towers, with quoting and semi-philosophic speculation, as linked before. I suggested that increasing IQ might not be very useful, with the first human issue being a social contingency that your citations don’t really seem to address, because patents and money don’t necessarily make people happy or socially integrated.
The links are cool and I appreciate them and they do push against the second (deeper) issue about possible diminishing marginal utility in mindware for optimizing within the actual world, but the point I was directly responding to was a mindset that produced almost-certainly-false predictions about chess outcomes. The reason I even brought up the social contingencies and human mindware angles is because I didn’t want to “win an argument” on the chess point and have it be a cheap shot that doesn’t mean anything in practice. I was trying to show directions that it would be reasonable to propagate the update if someone was really surprised by the chess result.
I didn’t say humans are at the optimum, just that we’re close enough to the optimum that we can give Omega a run for its money in toy domains, and we may be somewhat close to Omega in real world domains. Give it 30 to 300 years? Very smart people being better than smart people at patentable invention right now is roughly consistent with my broader claim. What I’m talking about is that very smart people aren’t as dominating over merely smart people as you might expect if you model human intelligence as a generic-halo-of-winning-ness, rather than modeling human intelligence as a slightly larger and more flexible working memory and “cerebral” personal interests that lead to the steady accumulation of more and “better” culture.
Upvoted and hopefully answered :-)