I think it may not be correct to shuffle this off into a box labelled “adversarial example” as if it doesn’t say anything central about the nature of current go AIs.
Go involves intuitive aspects (what moves “look right”), and tree search, and also something that might be seen as “theorem proving”. An example theorem is “a group with two eyes is alive”. Another is “a capture race between two groups, one with 23 liberties, the other with 22 liberties, will be won by the group with more liberties”. Human players don’t search the tree down to a depth of 23 to determine this—they apply the theorem. One might have thought that strong go AIs “know” these theorems, but it seems that they may not—they may just be good at faking it, most of the time.
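To make the "theorem vs. search" contrast concrete, here is a minimal sketch (mine, not from the comment or from any go engine) of the simplest kind of capture race: no shared liberties, no eyes, each move filling one opposing liberty. The function names and the toy rules are assumptions for illustration only.

```python
# A toy model of the simplest capture race (semeai): two groups with no
# shared liberties and no eyes, each move filling one opposing liberty.
# Names and simplifications here are mine, purely illustrative.

def winner_by_theorem(my_libs: int, their_libs: int) -> str:
    """The 'theorem': in this kind of race, the group with more
    liberties wins; on a tie, the side to move first wins."""
    return "me" if my_libs >= their_libs else "them"

def winner_by_search(my_libs: int, their_libs: int) -> str:
    """The brute-force alternative: play the race out move by move
    ('me' moving first), i.e. the deep line that humans never
    actually read out."""
    to_move = "me"
    while my_libs > 0 and their_libs > 0:
        if to_move == "me":
            their_libs -= 1   # I fill one of their liberties
            to_move = "them"
        else:
            my_libs -= 1      # they fill one of mine
            to_move = "me"
    return "me" if my_libs > 0 else "them"

# The example from the comment: 23 liberties beats 22, and both
# methods agree without the deep read-out ever being necessary.
assert winner_by_theorem(23, 22) == winner_by_search(23, 22) == "me"
```

The point of the contrast is that an agent could get the answer right in almost every position it encounters without representing anything like the theorem itself, which is one way a network could "fake" knowing it until an adversary constructs a position where the approximation breaks.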