The analogy is that AGI can be to us as we are to chimps. This is the part that needs the focus.
We could have said in the 1950s that machines beat us at arithmetic by orders of magnitude. Classical AI researchers were clearly deluded by success at easy problems. The problem with winning at easy problems is that it says little about hard ones.
What I see is that in the domain of problems where human-level performance is difficult to replicate, computers are capable of catching us and likely beating us, but pulling far ahead of us in performance is difficult. After all, a human can still beat the best chess programs with a mere pawn handicap. This may never get to two pawns, ever. Certainly the second pawn is massively harder than the first; it’s the nature of the problem space. In terms of runaway AGI control of the planet, we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures).
BTW, is Elo supposed to have that kind of linear interpretation?
The analogy is that AGI can be to us as we are to chimps. This is the part that needs the focus.
Yes, this is the important part. Chimps lag behind humans in two distinct ways: they differ in degree, and in kind. Chimps can do a lot of human things, but very minimally. Painting comes to mind: they do a little, but not a lot. (Degree.) Language is another well-studied subject. IIRC, they can memorize some symbols and use them, but not at all in the recursive way that modern linguistics (pace Chomsky) seems to regard as key. (Kind.)
What can we do with this distinction? How does it apply to my three examples?
After all, a human can still beat the best chess programs with a mere pawn handicap.
Ever is a long time. Would you like to make this a concrete prediction I could put on PredictionBook, perhaps something along the lines of ‘no FIDE grandmaster will lose a two-pawn-odds chess match to a computer by 2050’?
BTW, is Elo supposed to have that kind of linear interpretation?
I’m not an expert on Elo by any means (do we know any LW chess experts?), but reading through http://en.wikipedia.org/wiki/Elo_rating_system#Mathematical_details doesn’t show me any warning signs: Elo point differences are supposed to reflect probabilistic differences in winning (a ratio of expected scores), so the absolute values shouldn’t matter. I think.
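For concreteness, here is a minimal sketch of the expected-score formula from that Wikipedia section. Note that only the rating difference enters the formula, never the absolute ratings:

    # Elo expected score for player A against player B, per the standard
    # logistic model: E_A = 1 / (1 + 10^((R_B - R_A) / 400)).
    def expected_score(r_a, r_b):
        return 1 / (1 + 10 ** ((r_b - r_a) / 400))

    # Only the difference matters: 1400 vs. 1200 yields the same
    # expectation as 2800 vs. 2600 (about 0.76 for the stronger player).
    print(expected_score(1400, 1200))  # ~0.76
    print(expected_score(2800, 2600))  # ~0.76

So a 200-point gap means the same expected score anywhere on the scale, which is the sense in which differences (but not absolute values) are meaningful.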
we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures).
This is a possibility (made more plausible if we’re talking about those reins being used to incentivize early AIs to design more reliable and transparent safety mechanisms for more powerful successive AI generations), but it’s greatly complicated by international competition: to the extent that careful limitation and restriction of AI capabilities and access to potential sources of power reduces economic, scientific, and military productivity, it will be tough to coordinate. Not to mention that existing economic, political, and legal structures are not very reliably stable: electorates and governing incumbents often find themselves unable to retain power.
BTW, is Elo supposed to have that kind of linear interpretation?
It seems that whether or not it’s supposed to, in practice it does. From the just-released “Intrinsic Chess Ratings”, which uses Rybka to perform exhaustive evaluations (deep enough to be ‘relatively omniscient’) of many thousands of modern chess games, on page 9:
We conclude that there is a smooth relationship between the actual players’ Elo ratings and the intrinsic quality of the move choices as measured by the chess program and the agent fitting. Moreover, the final s-fit values obtained are nearly the same for the corresponding entries of all three time periods. Since a lower s indicates higher skill, we conclude that there has been little or no ‘inflation’ in ratings over time—if anything there has been deflation. This runs counter to conventional wisdom, but is predicted by population models on which rating systems have been based [Gli99].
The results also support a no answer to question 2 [“Were the top players of earlier times as strong as the top players of today?”]. In the 1970’s there were only two players with ratings over 2700, namely Bobby Fischer and Anatoly Karpov, and there were years as late as 1981 when no one had a rating over 2700 (see [Wee00]). In the past decade there have usually been thirty or more players with such ratings. Thus lack of inflation implies that those players are better than all but Fischer and Karpov were. Extrapolated backwards, this would be consistent with the findings of [DHMG07], which however (like some recent competitions to improve on the Elo system) are based only on the results of games, not on intrinsic decision-making.
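The paper’s actual method fits a probabilistic model of move choice; as a much cruder illustration of the same idea (rating skill from the quality of individual moves rather than from game results), one could score each move by its centipawn loss against an engine’s preferred move. All data below are hypothetical:

    # Toy illustration of an 'intrinsic' rating: judge a player's moves
    # against an engine's evaluations instead of using game outcomes.
    # This is NOT the paper's fitting procedure; average centipawn loss
    # is just a crude stand-in for its move-choice model.

    # Hypothetical data: (evaluation of the move actually played,
    # evaluation of the engine's best move), in centipawns.
    moves = [
        (12, 12),   # played the engine's top choice
        (-35, 4),   # a 39-centipawn mistake
        (80, 95),   # slightly inferior to the best move
    ]

    avg_cp_loss = sum(best - played for played, best in moves) / len(moves)
    print(f"average centipawn loss: {avg_cp_loss:.1f}")  # 18.0

One would then calibrate such a statistic against the games of rated players to map it onto the Elo scale, which is roughly what lets the paper compare players across eras without rating inflation confounding the comparison.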