It’s not clear to me that there’s a very principled distinction between look-ahead and search, since there’s not a line of play that’s guaranteed to happen. Search is just the comparison of look-ahead on multiple lines. It’s notable that the paper generally talks about “look-ahead or search” throughout.
That said, I haven’t read this paper very closely, so I recognize I might be misinterpreting.
Or, to clarify that a bit: it seems like the reason to evaluate any lines at all is to do search, even if they didn’t test that directly. Otherwise, what would incentivize the model to do look-ahead at all?
In chess, a “line” is a sequence of moves that is hard to interrupt. There are kind-of-obvious moves you have to play or else you are just losing (such as recapturing a piece, moving your king out of check, delivering checkmate, etc.). Leela uses the neural network mainly to guide search: it scores a given board position and its candidate moves, which the MCTS can then use to decide whether to prune that direction or explore it further. So it makes sense that Leela would have an embedding of powerful lines as part of its heuristic, since it isn’t doing the main work of search. It’s more pattern recognition on the board state, so it can learn to recognize the kinds of lines that are useful and whether or not they are “present” in the current board state. It gets this information from the MCTS as it trains, and compresses the “triggers” into its earlier evaluations, which is what this paper then explores.
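For intuition, here is a minimal sketch (my own toy code with made-up numbers, not Leela’s actual implementation) of the PUCT-style selection rule that AlphaZero-family engines use, showing how the network’s prior over moves decides where MCTS spends its exploration budget:

```python
import math

# Toy sketch of PUCT selection: each child move keeps a visit count N,
# accumulated value W, and prior P from the policy network; selection
# picks the child maximizing Q + U, where U is an exploration bonus
# scaled by the prior.

def puct_score(parent_visits, child, c_puct=1.5):
    n, w, p = child["N"], child["W"], child["P"]
    q = w / n if n > 0 else 0.0                           # average value so far
    u = c_puct * p * math.sqrt(parent_visits) / (1 + n)   # prior-weighted exploration bonus
    return q + u

def select_child(children, parent_visits):
    # Moves the policy network rated highly get explored first and more often;
    # low-prior moves are effectively pruned unless their values come back strong.
    return max(children, key=lambda c: puct_score(parent_visits, c))

if __name__ == "__main__":
    children = [
        {"move": "recapture",  "N": 0, "W": 0.0, "P": 0.70},
        {"move": "quiet move", "N": 0, "W": 0.0, "P": 0.25},
        {"move": "blunder",    "N": 0, "W": 0.0, "P": 0.05},
    ]
    print(select_child(children, parent_visits=1)["move"])  # -> "recapture"
```

The point is just that a move with a strong prior (like the recapture in the middle of a line) gets visited first and keeps getting visited unless its value estimates come back poor.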
It’s very cool work and a very cool result, but I feel it’s too strong to say that the policy network is doing search, as opposed to recognizing lines from its training at earlier board states.
In chess, a “line” is a sequence of moves that is hard to interrupt. There are kind-of-obvious moves you have to play or else you are just losing
Ah, ok, thanks for the clarification; I assumed ‘line’ just meant ‘a sequence of moves’. I’m more of a go player than a chess player myself.
It still seems slightly fuzzy, in that outside of check/checkmate situations no moves are fully mandatory, and e.g. recaptures may occasionally turn out to be the wrong move?
But I retract my claim that this paper is evidence of search, and appreciate you helping me see that.
It still seems slightly fuzzy, in that outside of check/checkmate situations no moves are fully mandatory, and e.g. recaptures may occasionally turn out to be the wrong move?
Indeed, it can be difficult to know when it is actually better not to continue the line and when it is, but that is precisely what MCTS would help figure out. MCTS would do the actual exploration of board states, and the budget for which states it explores would be informed by the policy network. It’s usually better to continue a line than not, so I would expect MCTS to spend most of its budget continuing the line, and the policy would be updated during training based on whether or not its recommendations resulted in more wins. Ultimately, though, the policy network is probably storing a fuzzy pattern matcher for good board states (perhaps encoding common lines, or interpolations of lines encountered by the MCTS) that it can use to guide the search more effectively by giving it an appropriate score.
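To make that last step concrete, here is a rough sketch (my own simplification with hypothetical names and numbers, not Leela’s training code) of the standard AlphaZero-style distillation: the MCTS visit counts become the training target for the policy head, which is one way line-following behavior discovered by search could get compressed into the raw network’s evaluations:

```python
import math

# Rough sketch of how search results get distilled back into the policy:
# after MCTS finishes at a position, the normalized visit counts become the
# target distribution for the policy head, so moves that search kept
# exploring (e.g. continuing a strong line) get baked into the raw network.

def mcts_policy_target(visit_counts, temperature=1.0):
    # visit_counts: {move: number of MCTS visits at the root}
    powered = {m: n ** (1.0 / temperature) for m, n in visit_counts.items()}
    total = sum(powered.values())
    return {m: v / total for m, v in powered.items()}

def policy_loss(predicted, target):
    # Cross-entropy; minimizing it pulls the policy head toward the
    # search-derived distribution.
    return -sum(target[m] * math.log(max(predicted.get(m, 1e-9), 1e-9)) for m in target)

if __name__ == "__main__":
    visits = {"continue line": 620, "deviate": 300, "other": 80}
    target = mcts_policy_target(visits)
    raw_policy = {"continue line": 0.40, "deviate": 0.35, "other": 0.25}
    print(target)                          # search spent most of its budget on the line
    print(policy_loss(raw_policy, target)) # loss pulls the raw policy toward it
```

If search spends most of its visits continuing a line, the target distribution concentrates there, and the policy head learns to recognize that pattern directly from the board state.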
To be clear, I don’t think a transformer is completely incapable of doing any search, just that it is probably not learning to do it in this case and is probably pretty inefficient at doing it when prompted to.