Thanks for running these experiments! My guess is that these puzzles are hard enough that Leela doesn’t really “know what’s going on” in many of them and gets the first move right in significant part by “luck” (i.e., the first move is heuristically natural and can be found without (even heuristically) knowing why it’s actually good). I think your results are mainly reflections of that, rather than Leela generally not having sensibly correlated move and value estimates (but I’m confused about what a case would be where we’d actually make different predictions about this correlation).
In our dataset, we tried to avoid cases like that by discarding puzzles where even a much weaker network (“LD2”) got the first move right, so that Leela getting the first move right was actually evidence it had noticed the non-obvious tactic.
Some predictions based on that (there’s a rough sketch below of how one might check them):
Running our experiments on your dataset would result in smaller effect sizes than in our paper (in my view, that would be because Leela isn’t relying on look-ahead in your puzzles but is in ours, though there could be other explanations)
LD2 would assign non-trivial probability to the correct first move in your dataset (for context, LD2 is pretty weak, and we’re only using puzzles where it puts <5% probability on the correct move; this leaves us with a lot of sacrifices and other cases where the first move is non-obvious)
Leela is much less confident on your dataset than on our puzzles (this is a cheap prediction because we specifically filtered our dataset to have Leela assign >50% probability to the correct move)
Leela gets some subsequent moves wrong a decent fraction of the time even in cases where it gets the first move right. Less confidently, there might not be much correlation between getting the first move right and getting later moves right, but I’d need to think about that part more.
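In case it’s useful, here’s a rough sketch of how one might check the last three predictions on a new puzzle set (the first one would need our intervention setup). All the scaffolding is hypothetical: `policy_distribution` is a placeholder for however you read out a network’s raw policy head in your setup, and I’m assuming the solution moves are given in UCI notation, starting with the puzzle side’s move and alternating sides.

```python
import chess

def policy_distribution(net, fen):
    """Placeholder: return {uci_move: probability} from the network's raw
    policy head for this position, using whatever interface you have for
    reading out raw policy outputs."""
    raise NotImplementedError

def top_policy_move(net, board):
    dist = policy_distribution(net, board.fen())
    return max(dist, key=dist.get)

def check_puzzle(ld2, leela, fen, solution_ucis):
    """Per-puzzle stats for the predictions above; solution_ucis starts with
    the puzzle side's move and alternates sides."""
    first = solution_ucis[0]
    ld2_prob = policy_distribution(ld2, fen).get(first, 0.0)      # prediction: often >= 0.05?
    leela_prob = policy_distribution(leela, fen).get(first, 0.0)  # prediction: often < 0.5?

    # Last prediction: play the solution out and check whether Leela's top
    # policy move also matches the later moves it is supposed to find.
    board = chess.Board(fen)
    first_right = top_policy_move(leela, board) == first
    later_right = True
    for i, uci in enumerate(solution_ucis):
        if i % 2 == 0 and i > 0 and top_policy_move(leela, board) != uci:
            later_right = False
            break
        board.push(chess.Move.from_uci(uci))

    return {"ld2_prob": ld2_prob, "leela_prob": leela_prob,
            "first_right": first_right, "later_right": later_right}
```

On a whole dataset you’d then look at the fraction of puzzles with `ld2_prob` above 0.05, the distribution of `leela_prob`, and how often `later_right` holds among the puzzles where `first_right` does.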
You might agree with all of these predictions; they aren’t meant to be super strong. If you do, then I’m not sure which predictions we actually disagree about. Maybe there’s a way to make a dataset where we expect different amounts of correlation between policy and value output, but I’d need to think about that.
But I think it can be ruled out that a substantial part of Leela network’s prowess in solving chess puzzles or predicting game outcome is due to deliberate calculation.
FWIW, I think it’s quite plausible that only a small part of Leela’s strength is due to look-ahead; we’re only testing on a pretty narrow distribution of puzzles, after all. (Though similarly, I disagree somewhat with “ruling out”, given that you also just look at pretty specific puzzles, which I think might just be too hard to be a good example of Leela’s strength.)
ETA: If you can share your dataset, I’d be happy to test the predictions above if we disagree about any of them; I’m also happy to make them more concrete if it seems like we might disagree. Though again, I’m not claiming you should disagree with any of them just based on what you’ve said so far.
I actually originally thought about filtering with a weaker model, but that would run into the argument: “So you adversarially filtered the puzzles for those that transformers are bad at, and now you’ve shown that bigger transformers are also bad at them.”
I think we don’t disagree too much, because you are too damn careful … ;-)
You only talk about “look-ahead”, and you see this as lying on a spectrum from algorithm to pattern recognition.
I intentionally talked about “search” because it implies more deliberate “going through possible outcomes”. I mostly argue about the things that are implied by mentioning “reasoning”, “system 2”, “algorithm”.
I think if there is a spectrum from pattern recognition to search algorithm there must be a turning point somewhere: Pattern recognition means storing more and more knowledge to get better. A search algo means that you don’t need that much knowledge. So at some point of the training where the NN is pushed along this spectrum much of this stored knowledge should start to be pared away and generalised into an algorithm. This happens for toy tasks during grokking. I think it doesn’t happen in Leela.
I do have an additional dataset with puzzles extracted from Lichess games. Maybe I’ll get around to running the analysis on that dataset as well.
I thought about an additional experiment one could run: finetuning on tasks like helpmates. If there is a learned algo that looks ahead, this should work much better than if the work is done by a ton of pattern recognition that is useless for the new task. Of course, the result of such an experiment would probably be difficult to interpret.
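To make that concrete, here’s a minimal sketch of what the comparison could look like, with the caveat that all the specifics are made up: `HelpmateDataset`, `leela_policy_net`, and `fresh_net` are hypothetical stand-ins for a supervised helpmate dataset of (position, correct move) pairs, for Leela’s policy network loaded as a PyTorch module, and for an identically sized randomly initialised network.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def finetune(model, dataset, steps=2000, lr=1e-4, batch_size=256):
    """Supervised finetuning: predict the correct helpmate move for each
    position (cross-entropy on the policy logits). Returns per-step accuracy
    so the two learning curves can be compared."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    accs, batches = [], iter(loader)
    for _ in range(steps):
        try:
            boards, target_moves = next(batches)
        except StopIteration:
            batches = iter(loader)
            boards, target_moves = next(batches)
        logits = model(boards)  # policy logits over the move vocabulary
        loss = F.cross_entropy(logits, target_moves)
        opt.zero_grad()
        loss.backward()
        opt.step()
        accs.append((logits.argmax(-1) == target_moves).float().mean().item())
    return accs

# Hypothetical usage: if a reusable look-ahead mechanism exists, the pretrained
# curve should climb much faster than the from-scratch one, though transfer of
# ordinary chess patterns would also produce some gap, which is what makes the
# result hard to interpret.
# curve_pretrained = finetune(leela_policy_net, HelpmateDataset())
# curve_scratch = finetune(fresh_net, HelpmateDataset())
```

The interesting quantity would be the gap between the two learning curves, though, as said above, some gap is expected either way.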
Yeah, I feel like we do still disagree about some conceptual points, but they seem less crisp than I initially thought, and I don’t know of experiments we’d clearly make different predictions for. (I expect you could finetune Leela for helpmates faster than training a model from scratch, but I expect most of this would be driven by things closer to pattern recognition than search.)
I think if there is a spectrum from pattern recognition to search algorithm there must be a turning point somewhere: Pattern recognition means storing more and more knowledge to get better. A search algo means that you don’t need that much knowledge. So at some point of the training where the NN is pushed along this spectrum much of this stored knowledge should start to be pared away and generalised into an algorithm. This happens for toy tasks during grokking. I think it doesn’t happen in Leela.
I don’t think I understand your ontology for thinking about this, but I would probably also put Leela below this “turning point” (e.g., I expect most of its parameters are spent on storing knowledge and patterns rather than implementing crisp algorithms).
That said, for me, the natural spectrum is between a literal look-up table and brute-force tree search with no heuristics at all. (Of course, that’s not a spectrum I expect to be traversed during training, just a hypothetical spectrum of algorithms.) On that spectrum, I think Leela is clearly far removed from both sides, but I find it pretty difficult to define its place more clearly. In particular, I don’t see your turning point there (you start storing less knowledge immediately as you move away from the look-up table).
That’s why I’ve tried to avoid absolute claims about how much Leela is doing pattern recognition vs “reasoning/...” but instead focused on arguing for a particular structure in Leela’s cognition: I just don’t know what it would mean to place Leela on either one of those sides. But I can see that if you think there’s a crisp distinction between these two sides with a turning point in the middle, asking which side Leela is on is much more compelling.