(ep stat: it’s hard to model my past beliefs accurately, but this is how I remember it)
I mean, it’s unsurprising now, but before that series of matches where AlphaStar won, it seemed impossible.
Maybe for you. But anyone who has actually played StarCraft knows that it is a game that is (1) heavily dexterity-capped, and (2) intense enough that you barely have time to think strategically. It’s all snap decisions and executing pre-planned builds and responses.
I’m not saying it’s easy to build a system that plays this game well. But neither is it paradigm-changing to learn that such a thing was achieved, when we had just had the news of AlphaGo beating top human players. I do remember being somewhat skeptical of these systems working for RTS games, because the action space is huge, so it’s very hard to even write down a coherent menu of possible actions. I still don’t really understand how this is achieved.
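(For what it’s worth, my rough understanding of how huge action spaces like this are usually handled, as a hedged sketch rather than DeepMind’s actual code, is to factorize the action: instead of enumerating every (action, units, target) combination up front, the policy picks one component at a time, with each choice conditioning the next. The action names and grid size below are made up for illustration.)

```python
# Illustrative sketch of a factorized (auto-regressive) action space.
# A random policy stands in for the learned one; the structure is the point.
import random

ACTION_TYPES = ["move", "attack", "build", "noop"]  # tiny stand-in menu
MAP_SIZE = 64  # hypothetical coarse spatial grid for targets

def sample_action(units):
    """Sample a structured action component by component."""
    # 1. What kind of action to take.
    action = {"type": random.choice(ACTION_TYPES)}
    if action["type"] == "noop":
        return action
    # 2. Which units execute it (a subset, chosen given the type).
    action["units"] = random.sample(units, k=random.randint(1, len(units)))
    # 3. Where it is aimed (a grid cell, chosen given type and units).
    action["target"] = (random.randrange(MAP_SIZE), random.randrange(MAP_SIZE))
    return action

print(sample_action(units=["marine_1", "marine_2", "scv_7"]))
```

The menu never has to be written out in full: each component's choices are small, and the exponentially large combined space is only ever sampled, never enumerated.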
When AlphaStar is capped by human ability and data availability, it’s still better than 99.8% of players, unless I’m missing something. So even after discounting all the non-intelligence-related advantages that were revealed after the fact, it looks like there is still some extremely significant StarCraft-specialized kind of intelligence at play.
I haven’t looked into this in detail, so assuming the characterization in the article is accurate, this is indeed significant progress. But the 99.8% number is heavily misleading. The system was tuned to have an effective APM of 268, which probably puts it in the top 5% of human players. Even higher if we assume that the AI never misclicks, and never misses any information that it sees. The latter implies 1-frame reaction times to scouting anything of strategic significance, which is a huge deal.
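To put rough numbers on that (the frame rate and human reaction time below are my own assumptions, not measurements from the matches): 268 APM means roughly one action every 224 ms, which is in the same ballpark as human reaction time, but the APM cap says nothing about reaction *latency*, and a 1-frame reaction would be around 45 ms.

```python
# Back-of-the-envelope numbers for the reaction-time point.
FPS = 22.4              # assumed StarCraft II game-steps/sec at "faster" speed
APM = 268               # the effective APM figure cited above
HUMAN_REACTION_MS = 200  # typical human visual reaction time, ~200 ms

frame_ms = 1000 / FPS        # duration of one game frame (~45 ms)
mean_gap_ms = 60_000 / APM   # average time between actions at 268 APM (~224 ms)

print(f"one frame          ~ {frame_ms:.0f} ms")
print(f"gap between actions ~ {mean_gap_ms:.0f} ms")
print(f"human reaction      ~ {HUMAN_REACTION_MS} ms")
```

So even with APM capped at a human-plausible level, a 1-frame reaction is roughly 4x faster than a human's, which is exactly the kind of non-intelligence advantage the 99.8% figure hides.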
But neither is it paradigm-changing to learn that such a thing was achieved, when we had just had the news of AlphaGo beating top human players.
I remember that now—it wasn’t surprising for me, but I thought nobody else expected it.
The system was tuned to have an effective APM of 268, which probably puts it in the top 5% of human players.
I mean, it has to be at the top level—otherwise, it would artificially handicap itself in games against the best players (and then we wouldn’t know if it lost because of its Starcraft intelligence, or because of its lower agility). (Edit: Actually, I think it would ideally be matched to the APM of the other player.)
Even higher if we assume that the AI never misclicks, and never misses any information that it sees. The latter implies 1-frame reaction times to scouting anything of strategic significance, which is a huge deal.
This is a good point. On the other hand, this is just a general feature of problems in the physical world (humans make mistakes and are slow, while computers don’t make the same kinds of mistakes and are far faster), so the advantage seems to generalize well beyond StarCraft and remains a threat in general.
(In this specific case, I think the AI can still miss some information it sees, if that information gets lost somewhere between the input and the output layer; and its reaction time is the time between receiving the input and computing the output, so it’s probably greater than one frame(?))