The re-analysis was by Grady Towers, with quoting and semi-philosophic speculation, as linked before. I suggested that increasing IQ might not be very useful, with the first human issue being a social contingency that your citations don’t really seem to address, because patents and money don’t necessarily make people happy or socially integrated.
The links are cool and I appreciate them and they do push against the second (deeper) issue about possible diminishing marginal utility in mindware for optimizing within the actual world, but the point I was directly responding to was a mindset that produced almost-certainly-false predictions about chess outcomes. The reason I even brought up the social contingencies and human mindware angles is because I didn’t want to “win an argument” on the chess point and have it be a cheap shot that doesn’t mean anything in practice. I was trying to show directions that it would be reasonable to propagate the update if someone was really surprised by the chess result.
I didn’t say humans are at the optimum, just that we’re close enough to the optimum that we can give Omega a run for its money in toy domains, and we may be somewhat close to Omega in real world domains. Give it 30 to 300 years? Very smart people being better than smart people at patentable invention right now is roughly consistent with my broader claim. What I’m talking about is that very smart people aren’t as dominating over merely smart people as you might expect if you model human intelligence as a generic-halo-of-winning-ness, rather than modeling human intelligence as a slightly larger and more flexible working memory and “cerebral” personal interests that lead to the steady accumulation of more and “better” culture.