Obviously this doesn’t work “from scratch”: you need enough training for the model to be able to distinguish good outputs from bad ones, and to ever produce good outputs on its own. We’re not going to get a ChatGPT-Zero. But I think this post does gesture in the general direction of something real.
While I do think the process you outlined in your post is more concrete and would probably work better and be easier than learning “from scratch”, I don’t think it’s completely obvious that something like this wouldn’t work from scratch. It was done for humans, albeit through billions of years of genetic evolution and thousands of years of cultural evolution. Something like ChatGPT-Zero would probably require many more orders of magnitude of compute than systems we are training today, and also some algorithmic/architectural improvements, but I don’t think it’s completely impossible.
I feel like your post is implying something similar, given the last sentence, so maybe I’m misinterpreting what exactly you’re saying won’t work.
The specific thing I think wouldn’t work is trying to start the process without a bunch of pretraining data for at least the initial judge (i.e. pure self play from a randomized initialization, with no human-generated data or judgments entering the training run at any point). Not super insightful, I know; just clarifying what I meant by “zero” in my hypothetical ChatGPT-Zero.
Thanks for clarifying! I do agree that that wouldn’t work, at least if we wanted what was produced to be in any way useful or meaningful to humans.