To me, the fact that the human brain basically implements SSL+RL is very very strong evidence that the current DL paradigm (with a bit of “engineering” effort, but nothing like fundamental breakthroughs) will kinda just keep scaling until we reach point-of-no-return. Does this broadly look correct to people here? Would really appreciate other perspectives.
I mostly think “algorithms that involve both SSL and RL” is a much broader space of possible algorithms than you seem to think it is, and thus that there are parts of this broad space that require “fundamental breakthroughs” to access. For example, both AlexNet and differentiable rendering can be used to analyze images via supervised learning with gradient descent. But those two algorithms are very very different from each other! So there’s more to an algorithm than its update rule.
See also the 2nd section of this comment, although I was emphasizing alignment-relevant differences there whereas you’re talking about capabilities. Other differences include the fact that if I ask you to solve a hard math problem, your brain will be different (different weights, not just different activations / context) when you’re halfway through compared to when you started working on it (a.k.a. online learning, see also here), and the fact that brain neural networks are not really “deep” in the DL sense.
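To make the online-learning contrast concrete, here’s a minimal sketch (a toy PyTorch stand-in of my own, nothing brain- or LLM-specific): a deployed model only changes its activations between calls, while an online learner changes its weights partway through the task.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 1)            # toy stand-in for a much larger network
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def frozen_inference(x):
    # "Different activations, same weights": nothing about the model persists
    # between calls except whatever context you feed it.
    with torch.no_grad():
        return model(x)

def online_step(x, y):
    # "Different weights halfway through": each new observation updates the
    # model, so the network itself changes while working on the problem.
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()
```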
Makes sense. I think we’re using the terms with different scopes. By “DL paradigm” I meant to encompass the kind of stuff you mentioned (RL-directing-SS-target (active learning), online learning, different architectures, etc.), because those really seem like “engineering challenges” to me (despite covering a broad space of algorithms), in the sense that capabilities researchers already seem to be working on and scaling them without facing any apparent blockers to further progress, i.e. without needing any “fundamental breakthroughs”. By the latter I was pointing more at paradigm shifts away from DL, like, idk, symbolic learning.
I have a slightly different takeaway. Yes, techniques similar to current ones will most likely lead to AGI, but it’s not literally ‘just scaling LLMs’. The actual architecture of the brain is meaningfully different from what’s being deployed right now, so it is different in one sense. On the other hand, it’s not as though the brain does something completely different, and proposals much closer to the brain’s architecture are in the literature (I won’t name them here...). It’s plausible that some variant of those will lead to true AGI. Pure hardware scaling obviously increases capabilities in a straightforward way, but a transformer is not a generally intelligent agent and won’t be even if scaled many more OOMs.
(I think Steven Byrnes has a similar view but I wouldn’t want to misrepresent his views)
a transformer is not a generally intelligent agent and won’t be even if scaled many more OOMs
So far as I can tell, a transformer has three possible blockers (all of which would need to stand undefeated together): (1) in-context learning plateauing at a level where it can’t do even a little bit of useful work without changing model weights; (2) sample efficiency so poor that it demands more data than is available on new or rare/situational topics; and (3) the absence of a synthetic data generation process that’s both sufficiently prolific and known not to be useless at that scale.
The need for online learning and the terrible sample efficiency are both defeated by OOMs if enough useful synthetic data can be generated, and the anemic in-context learning available without weight changes might turn out to be sufficient for generating it. This is the case of defeating (3), with the others falling as a result.
Another possibility is that much larger multimodal transformers (there is a lot of video) might suffice without synthetic data, if a model learns superintelligent in-context learning. SSL is not just about imitating humans; the problems it potentially becomes adept at solving are arbitrarily intricate. So even if such a model can’t grow further and learn substantially new things within its current architecture, it might already be far enough along at inference time to do the necessary redesign on its own. This is the case of defeating (1), leaving it to the model to defeat the others. And it should help with (3) even at non-superintelligent levels.
Failing that, RL demonstrates human-level sample efficiency in increasingly non-toy settings, promising that saner amounts of useful synthetic data might suffice, defeating (2), though at that point the system is substantially not-a-transformer.
Generating useful synthetic data and solving novel tasks with little correlation to the training data is the exact issue here. Seems straightforwardly true that a transformer architecture doesn’t do that?
I don’t know what superintelligent in-context learning is. I’d be skeptical that scaling a transformer a further 3 OOMs will suddenly make it do tasks that are very far from the text distribution it is trained on, indeed produce solutions to tasks that are not even remotely in the internet text data, like building a recursively self-improving agent (if such a thing is possible...)? Maybe I’m misunderstanding what you’re claiming here.
Not saying it’s impossible, it just seems deeply implausible. Of course, LLMs being so impressive was also a priori implausible, but this seems like another OOM of implausibility bits, if that makes sense?
Generating useful synthetic data and solving novel tasks with little correlation to the training data is the exact issue here. Seems straightforwardly true that a transformer architecture doesn’t do that?
I’m imagining some prompts to generate reasoning, inferred claims about the world. You can’t generate new observations about the world, but you can reason about the observations available so far, and having those inferred claims in the dataset likely helps; that’s how humans build intuition about theory. If on average 1,000 inferred claims are generated for every naturally observed statement (or just for those on rare/new/situational topics), that could close the sample-efficiency gap with humans. This might take the form of exercises or essays or something.
If this is all done with prompts, using a sufficiently smart order-following chatbot, then it’s straightforwardly just a transformer, with some superficial scaffolding. If this can work, it’ll eventually appear in the distillation literature, though I’m not sure a serious effort has actually been made to check with current SOTA LLMs, i.e. to pre-train exclusively on synthetic data that’s not too simplistically prompted. Possibly you get nothing from a GPT-3-level generator and then something from GPT-4+, because the reasoning needs to be good enough to preserve contact with ground truth. From Altman’s comments I get the impression that this is plausibly the exact thing OpenAI is hoping for.
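As a rough sketch of the scaffolding I have in mind (illustrative only; the `complete` callable and the 1,000:1 ratio are hypothetical stand-ins, not a real pipeline):

```python
# Amplify each rare/new observation into many inferred claims and exercises,
# then use the generated text as additional pre-training data. `complete` is a
# stand-in for whatever order-following chatbot does the generating.
from typing import Callable, List

PROMPT = (
    "Here is an observation:\n{obs}\n\n"
    "Write {n} distinct claims that can reasonably be inferred from it, "
    "plus a short exercise testing each claim."
)

def amplify(obs: str, complete: Callable[[str], str],
            n: int = 1000, batch: int = 20) -> List[str]:
    """Generate roughly n inferred-claim passages per observed statement."""
    passages = []
    for _ in range(n // batch):
        passages.append(complete(PROMPT.format(obs=obs, n=batch)))
    return passages

def build_synthetic_corpus(observations: List[str],
                           complete: Callable[[str], str]) -> List[str]:
    # The resulting corpus would be mixed into (or replace) ordinary
    # pre-training data, which is where the distillation question arises.
    corpus = []
    for obs in observations:
        corpus.extend(amplify(obs, complete))
    return corpus
```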
I don’t know what superintelligent in-context learning is
In-context learning is the capability to make use of novel data that’s only seen in a context, not in pre-training, to do tasks in ways that would normally have been expected to require that data being seen in pre-training. In-context learning is a model capability; it’s learned. So its properties are not capped by those of the hardcoded model training algorithm; notably, in principle in-context learning could have higher sample efficiency (which might be crucial for generating a lot of synthetic data out of a few rare observations). Right now it’s worse in most respects, but that could change with scale without substantially modifying the transformer architecture, which is the premise of this thread.
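A toy illustration of the distinction (the `complete` callable is a hypothetical stand-in for a trained model, and the “language” is invented):

```python
# The "novel data" appears only in the prompt, never in pre-training, and the
# model is expected to use it at inference time without any weight update.
NOVEL_FACTS = """\
In the (invented) language Zermic:
  'blon' means river, 'tasi' means cold, adjectives follow nouns.
"""

def in_context_answer(complete, question: str) -> str:
    # No gradient step happens here; whatever "learning" occurs is done by the
    # forward pass over the context, which is why its sample efficiency is not
    # capped by the pre-training algorithm's.
    prompt = NOVEL_FACTS + "\nQ: " + question + "\nA:"
    return complete(prompt)

# e.g. in_context_answer(complete, "How would you say 'cold river' in Zermic?")
```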
By superintelligent in-context learning I mean the capabilities of in-context learning significantly exceeding those of humans: things like fully comprehending a new paper without changing any model weights, and becoming able to immediately write the next one in the same context window. I agree that it’s not very plausible, and probably can’t happen without sufficiently deep circuits, which even deep networks don’t seem to normally develop. But it’s not really ruled out by anything that’s been tried so far. Recent work on essentially pre-training with some frozen weights, without losing the resulting performance, suggests a trend of increasing feasible model size for given compute. So I’m not sure this can’t be done in a few years. Then there are things like memory transformers, handing a learned learning capability a lot more data than fits in a context.
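For concreteness, here’s a minimal PyTorch-flavored sketch of what “pre-training with some frozen weights” amounts to (my own gloss, not the specific papers alluded to above):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(), nn.Linear(1024, 256))

# Freeze the first linear layer: it keeps its random initialization and never
# receives gradients, so only the remaining weights are actually trained.
for p in model[0].parameters():
    p.requires_grad_(False)

trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.SGD(trainable, lr=1e-3)  # optimizer only sees unfrozen weights
```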