Yeah, I feel like we do still disagree about some conceptual points, but they seem less crisp than I initially thought, and I don’t know of experiments we’d clearly make different predictions for. (I expect you could finetune Leela for helpmates faster than training a model from scratch, but I expect most of this would be driven by things closer to pattern recognition than search.)
I think if there is a spectrum from pattern recognition to search algorithm, there must be a turning point somewhere: pattern recognition means getting better by storing more and more knowledge, while a search algorithm means you don’t need that much knowledge. So at some point during training, as the NN is pushed along this spectrum, much of this stored knowledge should start to be pared away and generalised into an algorithm. This happens for toy tasks during grokking. I think it doesn’t happen in Leela.
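(To make the toy-task picture concrete, here is roughly the kind of grokking setup I have in mind: a small network on modular addition, trained with strong weight decay. The architecture and hyperparameters below are illustrative guesses rather than any particular published setup; the point is just the late transition from memorisation to a generalising algorithm.)

```python
# Toy grokking setup: learn (a + b) mod p from half of all pairs.
# With strong weight decay, a small net typically memorises the training
# set first and only much later generalises: test accuracy jumps as the
# stored table is compressed into the modular-addition algorithm.
import torch
import torch.nn as nn

p = 97
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))  # all (a, b)
labels = (pairs[:, 0] + pairs[:, 1]) % p
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2 :]

class AddMLP(nn.Module):
    def __init__(self, p, d=128):
        super().__init__()
        self.emb = nn.Embedding(p, d)
        self.mlp = nn.Sequential(nn.Linear(2 * d, 256), nn.ReLU(), nn.Linear(256, p))

    def forward(self, ab):
        return self.mlp(self.emb(ab).flatten(1))  # embed both operands, concatenate

model = AddMLP(p)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(50_000):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1_000 == 0:
        with torch.no_grad():
            acc = (model(pairs[test_idx]).argmax(1) == labels[test_idx]).float().mean()
        print(f"step {step:6d}  train loss {loss.item():.4f}  test acc {acc:.3f}")
```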
I don’t think I understand your ontology for thinking about this, but I would probably also put Leela below this “turning point” (e.g., I expect most of its parameters are spent on storing knowledge and patterns rather than implementing crisp algorithms).
That said, for me, the natural spectrum is between a literal look-up table and brute-force tree search with no heuristics at all. (Of course, that’s not a spectrum I expect to be traversed during training, just a hypothetical spectrum of algorithms.) On that spectrum, I think Leela is clearly far removed from both sides, but I find it pretty difficult to define its place more clearly. In particular, I don’t see your turning point there (you start storing less knowledge immediately as you move away from the look-up table).
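(To make those endpoints concrete, here is a sketch on a game small enough that both extremes are tractable, tic-tac-toe: a literal look-up table on one end and exhaustive minimax with no heuristics on the other. The game choice and the code are just illustrative.)

```python
# The two endpoints of the spectrum, on tic-tac-toe (chess-sized versions
# of either endpoint are intractable, which is part of the point).
from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

# Endpoint 2: brute-force tree search with no heuristics at all.
# No stored knowledge; everything is recomputed at query time.
@lru_cache(maxsize=None)
def minimax(board, player):
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    scores = [minimax(board[:i] + player + board[i+1:], nxt)
              for i, s in enumerate(board) if s == "."]
    return max(scores) if player == "X" else min(scores)

# Endpoint 1: a literal look-up table. Every reachable position's value is
# precomputed and stored; answering a query is just a dict lookup.
def build_table(board="." * 9, player="X", table=None):
    table = {} if table is None else table
    if board in table:
        return table
    table[board] = minimax(board, player)
    if winner(board) is None and "." in board:
        nxt = "O" if player == "X" else "X"
        for i, s in enumerate(board):
            if s == ".":
                build_table(board[:i] + player + board[i+1:], nxt, table)
    return table

table = build_table()
print(len(table), table["." * 9])  # 5478 stored positions; the start is a draw (0)
```

Both endpoints compute exactly the same function; they just sit at opposite ends of the knowledge-versus-computation trade-off, and everything in between, Leela included, trades some of one for some of the other.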
That’s why I’ve tried to avoid absolute claims about how much Leela is doing pattern recognition vs “reasoning/...” but instead focused on arguing for a particular structure in Leela’s cognition: I just don’t know what it would mean to place Leela on either one of those sides. But I can see that if you think there’s a crisp distinction between these two sides with a turning point in the middle, asking which side Leela is on is much more compelling.