I believe there is considerable low-hanging algorithmic fruit that can make LLMs better at reasoning tasks. I think these changes will involve modifications to the architecture + training objectives. One major example is highlighted by the work of https://arxiv.org/abs/2210.10749, which shows that Transformers can only heuristically implement algorithms for most of the interesting problems in the computational complexity hierarchy. With recurrence (e.g. through CoT, https://arxiv.org/abs/2310.07923) these limitations can be sidestepped, which might lead to much better generic, domain-independent reasoning capabilities. A small number of people are already working on such algorithmic modifications to Transformers (e.g. https://arxiv.org/abs/2403.09629).
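For concreteness, here's a minimal toy sketch of the recurrence point (purely illustrative, not the construction from either paper; the functions and names are made up): a single bounded-depth call can only do a constant amount of sequential work, but letting the model emit intermediate tokens that feed back into its own context chains those calls into an arbitrarily long sequential computation. Running parity is the standard example of a task where that matters.

```python
# Toy, model-free sketch (not a real transformer): why autoregressive generation
# adds serial computation that a single forward pass lacks.

def next_token(bits: list[int], scratchpad: list[int]) -> int:
    """Stand-in for one bounded-depth forward pass: it may only combine the
    last scratchpad token with one input bit, however long the input is."""
    i = len(scratchpad)
    prev = scratchpad[-1] if scratchpad else 0
    return prev ^ bits[i]

def answer_in_one_pass(bits: list[int]) -> int:
    # One call, no intermediate tokens: the bounded step sees the whole input
    # but gets only a constant amount of sequential work with it.
    return next_token(bits, scratchpad=[])  # effectively just bits[0]

def answer_with_chain_of_thought(bits: list[int]) -> int:
    # Autoregressive decoding: each emitted token is fed back in, so n calls
    # give n sequential applications of the bounded step -- recurrence in effect.
    scratchpad: list[int] = []
    for _ in bits:
        scratchpad.append(next_token(bits, scratchpad))
    return scratchpad[-1]

bits = [1, 0, 0, 1, 1, 1, 0]
print(answer_in_one_pass(bits))            # 1 -- wrong in general
print(answer_with_chain_of_thought(bits))  # 0 -- the true parity of the input
```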
This is to say that we haven’t really explored small variations on the current LLM paradigm, and it’s quite likely that the “bugs” we see in their behavior could be addressed through manageable algorithmic changes + a few OOMs more of compute. For this reason, if such changes do make a big difference, I could see capabilities shifting quite rapidly once people figure out how to implement them. I think scaling + a little creativity is alive and well as a pathway to nearish-term AGI.
‘The Expressive Power of Transformers with Chain of Thought’ is extremely interesting, thank you! I’ve noticed a tendency to conflate the limitations of what transformers can do in a single forward pass with what they can do under autoregressive generation, so it’s great to see research explicitly addressing how the latter extends the former.
the “bugs” we see in their behavior could be addressed through manageable algorithmic changes + a few OOMs more of compute...I think scaling + a little creativity is alive and well as a pathway to nearish-term AGI.
I agree that this is plausible. I mentally lumped this sort of thing into the ‘breakthrough needed’ category in the ‘Why does this matter?’ section. Your point is well-taken that there are relatively small improvements that could make the difference, but to me that has to be balanced against the fact that there have been an enormous number of papers claiming improvements to the transformer architecture that then haven’t been adopted.
From outside the scaling labs, it’s hard to know how much of that is the improvements not panning out vs a lack of willingness & ability to throw resources at pursuing them. On the one hand, I suspect there’s an incentive to focus on the path that they know is working, namely continuing to scale up. On the other hand, scaling the current architecture is an extremely compute-intensive path, so I would think it’s worth putting resources into seeing whether these improvements would work well at scale. If you (or anyone else) have insight into the degree to which the scaling labs are actually trying to incorporate the various claimed improvements, I’d be quite interested to know.