The commentator (on the DeepMind channel) called out several of AlphaGo’s moves as conservative. Essentially, AlphaGo would play an additional stone to settle or augment some group that he wouldn’t necessarily have played around. What I’m curious about is how much this reflects an attempt by AlphaGo to conserve computational resources: “I think move A is a 12-point swing and move B is a 10-point swing, but move B narrows the search tree for future moves in a way that I think will net me at least 2 more points.”
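To make that speculation concrete, here’s a toy back-of-the-envelope model (the numbers and the uniform-tree assumption are mine, not anything from the AlphaGo paper): with a fixed node budget per move, a roughly full-width search reaches depth about log_b(budget), so a “settling” move that shrinks the effective branching factor b buys reading depth on every later turn.

```python
import math

# Toy model, not AlphaGo's actual search: a uniform tree with branching
# factor b and N nodes has depth ~ log_b(N), so shrinking b lets the
# same search budget read deeper.
def reachable_depth(budget: float, branching_factor: float) -> float:
    """Approximate depth of a uniform tree containing `budget` nodes."""
    return math.log(budget, branching_factor)

budget = 1e8  # hypothetical per-move node budget
for b in (35, 20):  # hypothetical branching factors before/after settling
    print(f"b={b}: reachable depth ~ {reachable_depth(budget, b):.1f} plies")
```

Under those made-up numbers, the quieter position buys about one extra ply of lookahead everywhere downstream (roughly 5.2 vs. 6.1 plies), which is the kind of gain that could plausibly be worth a couple of points.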
If the search tree is narrowed, it is narrowed for both players, so why would it be a gain?
There may be an asymmetry between successful modes of attack and successful modes of defense—if there’s a narrow thread that white can win through, and a thick thread that black can threaten through, then white wins computationally by closing off that tree.
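To put hypothetical numbers on that asymmetry: if black has many live threats per turn in an unsettled region but white has only a couple of viable replies, the lines that must be read there multiply fast, and most of the width comes from black’s side, so sealing the region off with one stone is computationally cheap insurance for white. A minimal sketch, with invented counts:

```python
# Hypothetical counts, purely illustrative: black has 10 plausible
# threats per turn in an unsettled region, white only 2 viable replies.
# Reading k move pairs there means ~(10 * 2)**k lines, with the width
# dominated by black's side; one settling stone deletes the whole subtree.
BLACK_THREATS, WHITE_REPLIES = 10, 2
for k in range(1, 4):
    lines = (BLACK_THREATS * WHITE_REPLIES) ** k
    print(f"{k} move pair(s): ~{lines:,} lines avoided")
```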
But thanks for asking: I was somewhat confused because I was thinking about AI vs. human games, but the AI is trained mostly on human vs. human and AI vs. AI games, neither of which would exhibit the AI vs. human dynamic. Well, except for bots playing on KGS.
As it turns out, we learned later that Fan Hui started working with DeepMind on AlphaGo after their match, and played a bunch of games against it as it improved. So it did have a number of AI vs. human training games.