But neither is it paradigm-changing to learn that such a thing was achieved, given that we had just had the news of AlphaGo beating top human players.
I remember that now—it wasn’t surprising for me, but I thought nobody else expected it.
The system was tuned to have an effective APM of 268, which is probably in the top 5% of human players.
I mean, it has to be at the top level—otherwise, it would artificially handicap itself in games against the best players (and then we wouldn’t know if it lost because of its Starcraft intelligence, or because of its lower agility). (Edit: Actually, I think it would ideally be matched to the APM of the other player.)
Even higher if we assume that the AI never misclicks, and never misses any information that it sees. The latter implies 1-frame reaction times to scouting anything of strategic significance, which is a huge deal.
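To get a feel for the timescales involved, here is a rough back-of-the-envelope sketch comparing the quoted 268 APM to per-frame reaction times. The 60 fps frame rate is an assumption for illustration only; the game engine's internal tick rate may differ.

```python
# Rough arithmetic: how the quoted APM compares to single-frame timescales.
# ASSUMPTION: 60 frames per second, chosen purely for illustration.
APM = 268
FPS = 60

actions_per_second = APM / 60            # ~4.5 actions per second
frame_duration_ms = 1000 / FPS           # ~16.7 ms per frame
ms_between_actions = 60_000 / APM        # ~224 ms between actions on average

print(f"{actions_per_second:.2f} actions/s")
print(f"{frame_duration_ms:.1f} ms per frame")
print(f"{ms_between_actions:.0f} ms between actions")
```

The point of the comparison: a one-frame reaction (~17 ms under this assumption) is an order of magnitude faster than even the average gap between this agent's actions, and far faster than human reaction times (typically 150–250 ms).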
This is a good point. On the other hand, this is just a general feature of problems in the physical world (humans make mistakes and are slow, while computers don't make the same kinds of mistakes and are extremely fast), so it seems to be a general advantage rather than something specific to Starcraft.
(In this specific case, I think the AI can miss information it sees if that information is lost somewhere between the input and output layers, and its reaction time spans the whole computation from input to output, so it's probably longer than one frame(?))