Aren’t LLMs already capable of two very different kinds of search?
Firstly, their whole deal is predicting the next token, which is itself a kind of search: at every step they evaluate every token in the vocabulary and ultimately pick the most probable-seeming one.
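To make that first kind concrete, here is a minimal sketch of the "one-step search" view, assuming the Hugging Face transformers library and GPT-2 purely for illustration: a single forward pass scores every token in the vocabulary, and greedy decoding just takes the argmax.

```python
# Minimal sketch: one forward pass yields a score for every token in the
# vocabulary; greedy decoding simply picks the highest-probability one.
# (Assumes Hugging Face transformers and GPT-2, purely for illustration.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# A full distribution over *all* possible next tokens, not just the one emitted.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
best_token_id = torch.argmax(next_token_probs).item()
print(tokenizer.decode(best_token_id), next_token_probs[best_token_id].item())
```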
Secondly, across-token search when prompted accordingly. A prompt like "Please come up with 10 options for X, then rate them all according to Y, and select the best option" is something that current LLMs can perform very reliably, whether or not "within-token" search exists as well.
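A rough sketch of what that prompted, across-token search amounts to, where `llm()` is a hypothetical helper wrapping whatever completion API one uses (the structure is the point, not the particular call):

```python
# Sketch of across-token search done purely through prompting:
# generate candidates, have the model rate them, keep the best one.
import re

def llm(prompt: str) -> str:
    # Hypothetical wrapper around whatever chat/completion API you use.
    raise NotImplementedError("plug in your preferred API here")

def prompted_search(task: str, criterion: str, n: int = 10) -> str:
    # Generate n candidate answers, one per call.
    candidates = [llm(f"Propose one option for: {task}") for _ in range(n)]
    scored = []
    for c in candidates:
        reply = llm(
            f"Rate this option for '{task}' according to {criterion}, 1-10. "
            f"Answer with just the number.\nOption: {c}"
        )
        match = re.search(r"\d+", reply)
        scored.append((int(match.group()) if match else 0, c))
    # Keep the candidate the model itself rated highest.
    return max(scored)[1]
```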
But then again, one might of course argue that search happening within a single forward pass, and perhaps even a kind of search that "emerged" via SGD rather than being hard-baked into the architecture, would be particularly interesting/important/dangerous. We just shouldn't make the mistake of assuming that this would be the only type of search that's relevant.
I think across-token search via prompting already has the potential to lead to the AGI-like problems that we associate with mesa-optimizers. Evidently the technology is not quite there yet, since proofs of concept like AutoGPT don't really work so far. But conditional on AGI being developed in the next few years, it seems very likely to me that this kind of search would be the one enabling it, rather than some hidden "O(1)" search deep within the network itself.
Edit: I should of course add a "thanks for the post" and mention that I enjoyed reading it; it made some very useful points!
I’d take an agnostic view on whether LLMs are doing search internally. Crucially, though, I think the relevant output to be searching over is distributions of tokens, rather than the actual token that gets chosen. Search is not required to generate a single distribution over next tokens.
I agree that external search via scaffolding can also be done, and would be much easier to identify, but without understanding the internals it’s hard to know how powerful the search process will be.