My short timelines have their highest probability path going through:
Current LLMs get scaled enough that they are capable of automating search for new and better algorithms.
Somebody does this search and finds something dramatically better than transformers.
A new model trained on this new architecture repeats the search, but even more competently.
An even better architecture is found.
The new model trained on this architecture becomes AGI.
So it seems odd to me that so many people seem focused on transformer-based LLMs becoming AGI just through scaling. That seems theoretically possible to me, but so much less efficient that I expect it to take longer. Thus, I don’t expect that path to pay off before algorithm search has rendered it irrelevant.
My crux is that LLMs are inherently bad at search tasks over a new domain. Thus, I don’t expect scaling LLMs to improve search.
Anecdotal evidence: I’ve used LLMs extensively, and my experience is that LLMs are great at retrieval but terrible at suggestion when it comes to ideas. You usually get something resembling an amalgamation of Google searches rather than suggestions that come from some kind of insight.
[EDIT: @ChosunOne convincingly argues below that the paper I cite in this comment is not good evidence for search, and I would no longer claim that it is, although I’m not necessarily sold on the broader claim that LLMs are inherently bad at search (which I see largely as an expression of the core disagreement I present in this post).]
LLMs are inherently bad at search tasks over a new domain.
The recently-published ‘Evidence of Learned Look-Ahead in a Chess-Playing Neural Network’ suggests that this may not be a fundamental limitation. It’s looking at a non-LLM transformer, and the degree to which we can treat it as evidence about LLMs is non-obvious (at least to me). But it’s enough to make me hesitant to conclude that this is a fundamental limitation rather than something that’ll improve with scale (especially since we see performance on planning problems, which in my view are essentially search problems, improving with scale).
In Section 5 (Conclusion-Limitations), the cited paper states plainly:
(2) We focus on look-ahead along a single line of play; we do not test whether Leela compares multiple different lines of play (what one might call search). … (4) Chess as a domain might favor look-ahead to an unusually strong extent.
The paper is really just looking at how Leela evaluates a given line, rather than at it doing any kind of search. And this makes sense. Pattern recognition is an extremely important part of playing chess (as a player myself), and here it is embedded in another system doing the actual search, namely Monte Carlo Tree Search. So it isn’t surprising that the network has learned to look ahead in a straight line, since that’s what all of its training experience is going to entail. If transformers were any good at doing the search, I would expect to see a strong chess bot that doesn’t need to employ something like MCTS.
It’s not clear to me that there’s a very principled distinction between look-ahead and search, since there’s not a line of play that’s guaranteed to happen. Search is just the comparison of look-ahead on multiple lines. It’s notable that the paper generally talks about “look-ahead or search” throughout.
That said, I haven’t read this paper very closely, so I recognize I might be misinterpreting.
Or to clarify that a bit, it seems like the reason to evaluate any lines at all is in order to do search, even if they didn’t test that. Otherwise what would incentivize the model to do look-ahead at all?
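To make that distinction concrete, here’s a toy sketch (my own illustration, not anything from the paper; `evaluate_line`, `look_ahead`, and `search` are made-up names, and the “evaluation” is a stand-in): look-ahead evaluates a single line, while search compares look-ahead across several candidate lines.

```python
def evaluate_line(position, line):
    """Score the position reached by playing out one sequence of moves.

    Stand-ins: a "position" is just a number, a "move" adds to it,
    and the evaluation prefers positions closer to zero.
    """
    for move in line:
        position = position + move  # stand-in for applying a move
    return -abs(position)

def look_ahead(position, line):
    # Single-line look-ahead: evaluate one continuation, no comparison.
    return evaluate_line(position, line)

def search(position, candidate_lines):
    # Search: look ahead on several lines and pick the best one.
    return max(candidate_lines, key=lambda line: evaluate_line(position, line))

best = search(0, [[3, -1], [5, -5], [2, 2]])
```

On this framing, the capability the paper demonstrates is `look_ahead`; what it explicitly does not test is the comparison step in `search`.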
In chess, a “line” is a sequence of moves that is hard to interrupt. There are some fairly obvious moves you have to play or else you are just losing (such as recapturing a piece, moving the king out of check, delivering checkmate, etc.). Leela uses the neural network more for policy, which means giving a score to a given board position, which the MCTS can then use to determine whether to prune that direction or explore that section more. So it makes sense that Leela would have an embedding of powerful lines as part of its heuristic, since it isn’t doing the main work of search. It’s more pattern recognition on the board state, so it can learn to recognize the kinds of lines that are useful and whether or not they are “present” in the current board state. It gets this information from the MCTS system as it trains, and compresses the “triggers” into the earlier evaluations, which this paper then explores.
It’s very cool work and a very cool result, but I feel it’s too strong to say that the policy network is doing search, as opposed to recognizing lines from its training at earlier board states.
In chess, a “line” is a sequence of moves that is hard to interrupt. There are some fairly obvious moves you have to play or else you are just losing
Ah, ok, thanks for the clarification; I assumed ‘line’ just meant ‘a sequence of moves’. I’m more of a go player than a chess player myself.
It still seems slightly fuzzy, in that other than check/checkmate situations no moves are fully mandatory, and e.g. recaptures may occasionally turn out to be the wrong move?
But I retract my claim that this paper is evidence of search, and appreciate you helping me see that.
It still seems slightly fuzzy, in that other than check/checkmate situations no moves are fully mandatory, and e.g. recaptures may occasionally turn out to be the wrong move?
Indeed, it can be difficult to know when it is actually better not to continue the line vs. when it is, but that is precisely what MCTS helps figure out. MCTS does the actual exploration of board states, and the budget for which states it explores is informed by the policy network. It’s usually better to continue a line than not, so I would expect MCTS to spend most of its budget continuing the line, and the policy would be updated during training based on whether the recommendation resulted in more wins. Ultimately, though, the policy network is probably storing a fuzzy pattern matcher for good board states (perhaps encoding common lines or interpolations of lines encountered by the MCTS) that it can use to guide the search more effectively by giving it an appropriate score.
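As a toy illustration of that budget dynamic (my own sketch, not Leela’s actual code; the PUCT-style scoring rule, the priors, and the simulated outcomes are all assumptions made up for this example), a policy prior can steer a fixed search budget so that most simulations go to the favored continuation, unless its observed results disappoint:

```python
import math
import random

def allocate_budget(priors, true_values, budget=1000, c=1.5):
    """Spend `budget` simulations across moves using a PUCT-style rule.

    Each simulation goes to the move maximizing
        Q(i) + c * prior(i) * sqrt(t) / (1 + visits(i)),
    so high-prior moves get explored more until their observed
    value (a noisy sample around `true_values[i]`) pulls them down.
    """
    n = [0] * len(priors)        # visit counts per move
    w = [0.0] * len(priors)      # accumulated value per move
    for t in range(1, budget + 1):
        def score(i):
            q = w[i] / n[i] if n[i] else 0.0
            return q + c * priors[i] * math.sqrt(t) / (1 + n[i])
        i = max(range(len(priors)), key=score)
        # Simulated playout outcome: noisy sample around the move's true value.
        w[i] += true_values[i] + random.uniform(-0.05, 0.05)
        n[i] += 1
    return n

random.seed(0)
# Prior strongly favors move 0 ("continue the line"); its true value is also best.
visits = allocate_budget(priors=[0.7, 0.2, 0.1], true_values=[0.6, 0.4, 0.3])
```

The point of the sketch is that the network never enumerates alternatives itself; it just hands out scores, and the surrounding MCTS loop converts those scores into an exploration budget.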
To be clear, I don’t think a transformer is completely incapable of doing any search, just that it is probably not learning to do it in this case and is probably pretty inefficient at doing it when prompted to.
Sorry to be that guy, but maybe this idea shouldn’t be posted publicly (I’d never read it before).
How is this not basically the widespread idea of recursive self-improvement? This idea is simple enough that it has occurred even to me, and there is no way that, e.g., Ilya Sutskever hasn’t thought about it.
I guess the vague idea is in the water. Just never saw it stated so explicitly. Not a big deal.
I agree with most of this. My claim here is mainly that if this is the case, then there’s at least one remaining necessary breakthrough, of unknown difficulty, before AGI, and so we can’t naively extrapolate timelines from LLM progress to date.
I additionally think that if this is the case, then LLMs’ difficulty with planning is evidence that they may not be great at automating search for new and better algorithms, although hardly conclusive evidence.
Yeah, I think my claim needs evidence to support it. That’s why I’m personally very excited to design evals targeted at detecting self-improvement capabilities.
We shouldn’t be stuck guessing about something so important!