If the search tree is narrowed, it is narrowed for both players, so why would it be a gain?
There may be an asymmetry between successful modes of attack and successful modes of defense—if there’s a narrow thread that white can win through, and a thick thread that black can threaten through, then white wins computationally by closing off that tree.
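A rough way to see the computational side of this (a sketch with made-up numbers; the branching factor, candidate count, and depth below are illustrative assumptions, not AlphaGo's actual parameters): if candidate moves at each position are narrowed from b legal moves to k, a depth-d search shrinks from roughly b^d to k^d positions, so the same compute reaches much deeper. The narrowing applies to both players' moves, but the benefit lands on whichever player's winning line survives the narrowing.

```python
# Illustrative only: how narrowing candidate moves shrinks a game tree.
# All numbers are assumptions for the sake of the arithmetic.

def tree_size(branching: int, depth: int) -> int:
    """Leaf count of a uniform tree with the given branching factor and depth."""
    return branching ** depth

full_branching = 250   # rough order of legal moves in a Go middlegame position
narrowed = 20          # hypothetical: only the top-k candidate moves are searched
depth = 8

full = tree_size(full_branching, depth)
pruned = tree_size(narrowed, depth)

print(f"full tree leaves:     {full:.3e}")
print(f"narrowed tree leaves: {pruned:.3e}")
print(f"reduction factor:     {full / pruned:.3e}")

# The narrowing filters both players' moves, but the gain is asymmetric:
# it goes to whichever player's winning line lies inside the narrowed
# candidates, because that line is now reachable at a depth the full
# tree could never afford.
```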
But thanks for asking: I was somewhat confused because I was thinking about AI vs. human games, but the AI is trained mostly on human vs. human and AI vs. AI games, neither of which has the AI vs. human character. Well, except for bots playing on KGS.
As it turns out, we learned later that Fan Hui started working with DeepMind on AlphaGo after their match, and played a bunch of games against it as it improved. So it did have a number of AI vs. human training games.