I think these comparisons of “yes, the AI is better than the vast majority of humans at X, but it doesn’t really count, because . . .” miss the point that the danger lies not in the superiority of AI in a fair-as-judged-by-humans-ex-post-facto contest, but in its superiority at all.
A point could be made that there is no real-world analog of contests that are biased in favor of an AI the way this kind of Diplomacy is, but how sure can we be about that?
I agree, and I don’t use this argument regarding arbitrary AI achievements.
But it’s very relevant when capabilities completely orthogonal to the AI are being sold as AI. The StarCraft example is more egregious, because AlphaStar had a different kind of access to the game state than a human has, which the DeepMind team claimed was “equivalent.” This resulted in extremely fine-grained control of units that the game was not designed around. StarCraft is partly a sport, i.e., a game of dexterity, concentration, and endurance, and it’s unsurprising that a machine beats a human at that.
If you (generic you) are going to make an argument about how speed of execution, parallel communication, and so on are game changers (especially in an increasingly online, API-accessible world), then make that argument. But don’t dress it up as the supposed intelligence of the agent in question.
(ep stat: it’s hard to model my past beliefs accurately, but this is how I remember it)
I mean, it’s unsurprising now, but before that series of matches where AlphaStar won, it seemed impossible.
Maybe for you. But anyone who has actually played StarCraft knows that it is a game that is (1) heavily dexterity-capped, and (2) intense enough that you barely have time to think strategically. It’s all snap decisions and executing pre-planned builds and responses.
I’m not saying it’s easy to build a system that plays this game well. But neither is it paradigm-changing to learn that such a thing was achieved, when we had just had the news of AlphaGo beating top human players. I do remember being somewhat skeptical of these systems working for RTS games, because the action space is huge, so it’s very hard to even write down a coherent menu of possible actions. I still don’t really understand how this is achieved.
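For scale, a rough back-of-the-envelope sketch of the “huge action space” point (my assumptions, not from the thread): even a factored interface like DeepMind’s PySC2 pairs a few hundred distinct action functions with screen coordinates for their spatial arguments, so the per-step menu of candidate actions runs into the millions, against a few hundred legal moves in Go.

```python
# Rough illustration (assumed figures) of why an RTS action space dwarfs a
# board game's: actions that take a screen-coordinate argument multiply a
# few hundred action functions by every pixel they could target.
ACTION_FUNCTIONS = 500        # approximate count of distinct action functions
SCREEN = 84 * 84              # pixels a spatial argument can target (84x84 grid)

per_step_actions = ACTION_FUNCTIONS * SCREEN
go_moves = 19 * 19            # roughly the legal moves on an empty Go board

print(per_step_actions)               # 3528000 candidate (function, target) pairs
print(per_step_actions // go_moves)   # ~10,000x the branching of a Go position
```

The exact numbers don’t matter; the point is that you can’t enumerate this menu the way you enumerate board-game moves, which is why the action space has to be factored into function plus arguments at all.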
When AlphaStar is capped by human ability and data availability, it’s still better than 99.8% of players, unless I’m missing something, so even if all a posteriori revealed non-intelligence-related advantages are taken away, it looks like there is still some extremely significant Starcraft-specialized kind of intelligence at play.
I haven’t looked into this in detail, so assuming the characterization in the article is accurate, this is indeed significant progress. But the 99.8% number is heavily misleading. The system was tuned to have an effective APM of 268, which probably puts it in the top 5% of human players. Even higher if we assume that the AI never misclicks, and never misses any information that it sees. The latter implies 1-frame reaction times to scouting anything of strategic significance, which is a huge deal.
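To put the “1-frame reaction time” in perspective, here is a quick calculation (my figures, not the article’s): StarCraft II’s game logic runs at roughly 22.4 simulation steps per second on “Faster” speed, and 200 ms is a generous figure for a fast human’s visual reaction time.

```python
# Back-of-the-envelope comparison of a 1-frame reaction vs. a fast human's.
SC2_STEPS_PER_SEC = 22.4                 # game-logic steps/sec on "Faster" speed
frame_ms = 1000 / SC2_STEPS_PER_SEC      # duration of one game step, ~44.6 ms
human_reaction_ms = 200                  # rough figure for a fast human reaction

print(f"one frame is about {frame_ms:.1f} ms")
print(f"a human reaction spans about {human_reaction_ms / frame_ms:.1f} frames")
```

So reacting within one frame means acting on scouted information four to five frames before a very fast human has even registered it, every single time.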
But neither is it paradigm-changing to learn that such a thing was achieved, when we had just had the news of AlphaGo beating top human players.
I remember that now: it wasn’t surprising to me, but I thought nobody else expected it.
The system was tuned to have an effective APM of 268, which probably puts it in the top 5% of human players.
I mean, it has to be at the top level; otherwise, it would artificially handicap itself in games against the best players (and then we wouldn’t know if it lost because of its StarCraft intelligence or because of its lower agility). (Edit: actually, I think it would ideally be matched to the APM of the other player.)
Even higher if we assume that the AI never misclicks, and never misses any information that it sees. The latter implies 1-frame reaction times to scouting anything of strategic significance, which is a huge deal.
This is a good point. On the other hand, this is a general feature of problems in the physical world: humans make mistakes and are slow, while computers don’t make the same kinds of mistakes and are much faster. So the advantage isn’t specific to StarCraft; it generalizes into a threat across domains.
(In this specific case, I think the AI can miss information it sees, since information can be lost somewhere between the input and the output layer; and the reaction time is the time between receiving the input and computing the output, so it’s probably greater than one frame(?))