I have noted the problem of catastrophic forgetting in the section “why it might not work”. In general I agree continual learning is obviously a thing; otherwise I would not have used the established terminology. What I believe, however, is that the problems we face in continual learning in, e.g., a 100M-parameter BERT model may not be the same as what we observe in models that can now meaningfully self-critique. We have explored this technique publicly, but have we tried it with GPT-4? The “publicly” part was really just a question of whether OpenAI actually did it on this model or not, and it would be an amazing data point if they could say “We couldn’t get it to work.”
Ah, so the point was whether that had been explored publicly on the very largest language models that exist, because of the whole “sometimes approaches that didn’t work at small scale start working when you throw enough compute at them” thing? Makes sense.
Essentially yes, heh. I take this as a learning experience for my writing; I don’t know what I was thinking, but it is obvious in hindsight that saying to just “switch on backprop” sounds very naive.
I also confess I haven’t done the due diligence to find out the largest model this has actually been tried with, or whether someone has tried it with Pythia or LLaMA. I’ll do some more googling tonight.
One intuition for why the largest models might be different is that part of the training/fine-tuning going on will have to do with the model’s own output. The largest models are the ones whose output is not essentially word salad.
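To make the “switch on backprop” idea concrete, here is a minimal sketch of the loop I have in mind: generate, then immediately take a gradient step on the model’s own output. It is purely illustrative; the model, optimizer, and hyperparameters are assumptions of mine, and it says nothing about whether this would work at GPT-4 scale.

```python
# Minimal sketch of "switch on backprop": keep generating, then immediately
# fine-tune the model on its own output. Model choice, learning rate, and the
# single-prompt loop are illustrative assumptions, not a tested recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the open question is what happens at much larger scale
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

prompt = "Explain why the sky is blue."
for step in range(10):  # the "continual" loop: generate, then learn from the generation
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        generated = model.generate(**inputs, max_new_tokens=64, do_sample=True)

    # Treat the model's own continuation as training data. This is exactly where
    # catastrophic forgetting and degenerate feedback loops would be expected to bite.
    outputs = model(input_ids=generated, labels=generated)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In a 100M-parameter model the generations are mostly noise, so this loop would largely teach the model its own noise; the question is whether the picture changes once the generations are good enough to meaningfully self-critique.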