In June 2012, Zen19 had a rating of 5 dan (on KGS), from Katja’s paper.
Go algorithms had been improving about 1 stone / year for decades.
The difference between 5 dan and the best player in the world is less than 5 stones; professional dan are much closer together than amateur dan, whether you measure by handicap or Elo. (In March 2012 Zen also won a game with a 4-stone handicap against one of the world’s best players, with 30-minute time controls, which basically lines up.)
There are 1-2 stones of error in those estimates, e.g. because different rating systems interpret dan grades differently.
If you extrapolated that trend literally, your prediction would be ~5 years for Zen19 to beat the best humans. I think the truth is more like 6 years for Zen, and 4 years for AlphaGo.
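To make that extrapolation concrete, here is a minimal back-of-the-envelope sketch using the figures quoted above (keeping in mind the gap itself carries 1-2 stones of error):

```python
# Back-of-the-envelope extrapolation of Go engine strength.
# Figures are the ones quoted above; the gap estimate has 1-2 stones of error.
gap_to_best_human = 5   # stones between Zen19 (5 dan KGS, June 2012) and the best humans
improvement_rate = 1    # stones per year, the decades-long trend

years_to_parity = gap_to_best_human / improvement_rate
print(f"Predicted crossover: ~{2012 + years_to_parity:.0f}")  # ~2017
# Actual outcome: AlphaGo beat Lee Sedol in 2016, i.e. ~4 years.
```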
I believe Zen19 is implemented by one person hacking away at the problem.
(All that said, I can easily imagine someone in 2040 making a similar comment about AGI timelines.)
If taken literally as a scale of Go skill, the stone difference interestingly implies that pros are clustered near a “ceiling” of (human) play. Given that Zen19 has in fact kept getting better, maybe that interpretation has more merit than I would have thought.
The pro’s certainly believe that they are clustered that way.
It would be interesting to see how many stones AlphaGo can currently give a pro and still win, but unfortunately Google doesn’t seem to be interested in finding out.