Great post! Two thoughts that came to mind while reading it:
1. The post mostly discussed search happening directly within the network, i.e. within a single forward pass; but in the case of LLMs, search can also happen across token generation rather than within it. For example, you could give ChatGPT a chess position and ask it to list all the legal moves, check which move leads to which state, and whether that state looks better than the current one. That is only depth-1 search, of course, but still a form of search. In practice it may be hampered by ChatGPT's tendency to cap its message length, so it probably stops prematurely when the search space gets too big; but search most definitely takes place in this case.
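To make the depth-1 idea concrete, here is a minimal sketch in a toy domain (reaching a target number) standing in for chess; the function names and the game itself are illustrative, not from the post. The point is just the shape of the procedure: enumerate legal moves, evaluate each successor state once, pick the best.

```python
# Depth-1 search: look exactly one ply ahead, no deeper recursion.
# Toy domain and names are illustrative stand-ins for a chess engine.

TARGET = 21

def legal_moves(state):
    """Moves available from a state (stand-in for a chess move generator)."""
    return [m for m in (1, 2, 3) if state + m <= TARGET]

def evaluate(state):
    """Heuristic value of a state: closer to the target is better."""
    return -(TARGET - state)

def depth1_search(state):
    # Score each successor state and keep the move leading to the best one.
    return max(legal_moves(state), key=lambda m: evaluate(state + m))

print(depth1_search(19))  # from 19, only +1 and +2 are legal; +2 reaches 21
```

An LLM doing this across tokens would play the role of all three functions at once: listing the moves, judging the successor states, and naming the winner, all in generated text rather than in a single forward pass.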
2. Somewhat of a project proposal, setting my previous point aside and returning to "search within a single forward pass of the network": suppose we can "intelligent design" our way to a neural network that genuinely implements some small search procedure to solve a problem, so we know the network sits at a fairly optimal solution for that problem. What does (S)GD look like at or very near this point? Would it stay close to the optimum, or diverge away immediately, e.g. because the optimum's attractor basin is so unimaginably tiny in weight space that it is numerically highly unstable? If the latter (and if this finding generalizes meaningfully), one could conclude that even though search "exists" in parameter space, it is impractical to ever reach via SGD due to the unfriendly shape of the loss landscape.
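A crude sketch of what that experiment could look like, under heavy simplifying assumptions: `w_star` stands in for the hand-crafted search-implementing weights, and a deliberately sharp quadratic loss stands in for the hypothesized tiny attractor basin. When the basin's curvature is large relative to the learning rate, SGD overshoots and diverges even when initialized essentially at the optimum.

```python
# Toy probe of SGD stability near a hand-crafted optimum. Everything
# here (w_star, the loss, the curvature constant) is an illustrative
# assumption, not the actual proposed network.
import numpy as np

rng = np.random.default_rng(0)
w_star = rng.normal(size=8)        # the "intelligently designed" optimum

def loss_grad(w):
    # Gradient of 100 * ||w - w_star||^2: a very sharp, narrow basin,
    # mimicking the hypothesized tiny attractor in weight space.
    return 200.0 * (w - w_star)

w = w_star + 1e-3 * rng.normal(size=8)   # start very near the optimum
lr = 0.02                                # lr * curvature > 2 -> unstable
for _ in range(50):
    w -= lr * loss_grad(w)

drift = np.linalg.norm(w - w_star)
print(f"distance from crafted optimum after 50 SGD steps: {drift:.3g}")
```

Here each step multiplies the offset from `w_star` by (1 - 0.02 * 200) = -3, so the iterates blow up geometrically. The real experiment would of course use the actual network and task loss, and ask whether typical learning rates land in this unstable regime around the crafted solution.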