So, if I’m understanding this right, we could fine-tune GPT-N in different ways. For instance, we can already fine-tune GPT-3 to predict whether a movie review is positive or negative. Similarly, we could fine-tune GPT-N on some sort of “plausible science” score and then generate text that maximises that score conditional on the year 2040, which would yield a paper that GPT-N considers maximally plausible as a blah-studies paper from 2040. For a sufficiently powerful GPT-N, this would lead to actual scientific advancement, especially since we wouldn’t need anywhere near a 100% hit rate for it to be effective.
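Concretely, the generate-and-select loop could look something like this. It’s purely a sketch: the “plausible science” scorer and both model names are hypothetical, and I’m using the HuggingFace pipeline API just for concreteness.

```python
# Purely illustrative: a fine-tuned "plausible science" scorer plus a
# generator, used to sample many candidate abstracts and keep the best.
# "my-org/plausible-science-scorer" is a hypothetical stand-in.
from transformers import pipeline

scorer = pipeline("text-classification",
                  model="my-org/plausible-science-scorer")  # hypothetical
generator = pipeline("text-generation", model="gpt2")

prompt = "Abstract (2040):"
candidates = [
    out["generated_text"]
    for out in generator(prompt, do_sample=True,
                         num_return_sequences=20, max_new_tokens=200)
]

# The point about hit rate: we don't need most samples to be good.
# Generate many, keep only the top-scoring one.
best = max(candidates,
           key=lambda text: scorer(text, truncation=True)[0]["score"])
print(best)
```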
In fact, we could do all of this right now; it’s just that GPT-3 isn’t powerful enough to produce actual scientific advancement, and would instead create legible-sounding papers that didn’t actually hold up, and probably wouldn’t even have a truly coherent, detailed idea behind them.
“fine-tuning” isn’t quite the right word for this. Right now GPT-3 is trained by being given a sequence of tokens like <token1><token2><token3>…<tokenN> and learning to predict the next token. What I’m saying is that we can, for each piece of text in the training set, look at its date of publication and provenance, and train a new GPT-3 where, instead of just the tokens, we give it <date of publication><is scientific publication?><author><token1><token2>...<tokenN>. Then at inference time, we can set <date of publication=2040> to make it simulate future progress.
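Building those training strings is straightforward. Here’s a minimal sketch, assuming the metadata is available per document; the field names and serialisation are made up, the point is just that ground-truth metadata gets prepended to every training example:

```python
# Minimal sketch of the proposed data format: prepend each document's
# ground-truth metadata as control tokens, then train with the usual
# next-token objective on the combined string.
def make_training_example(doc: dict) -> str:
    prefix = (
        f"<date={doc['published']}>"
        f"<scientific={doc['is_scientific']}>"
        f"<author={doc['author']}>"
    )
    return prefix + doc["text"]

example = make_training_example({
    "published": "2021-03-14",          # hypothetical document
    "is_scientific": True,
    "author": "A. Researcher",
    "text": "We report a room-temperature ...",
})
# -> "<date=2021-03-14><scientific=True><author=A. Researcher>We report ..."
```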
Basically all human text containing the words “publication 2040” is science fiction, and we want to avoid the model writing fiction by giving it data that lets it disambiguate fiction about the future from text with a genuine publication date. If we give it the correct ground-truth publication date for every one of its training strings, it is forced to actually extrapolate its knowledge into the future. Similarly, most discussion of future tech comes from amateurs, or again from science fiction, but giving it the correct ground truth about the actual journal of publication avoids all of that. GPT only needs to predict that Nature won’t become a crank journal within 20 years, and it will then make an actual effort at producing high-impact scientific publications.
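At inference time we then just pick the metadata ourselves. Again a sketch: “my-org/gpt-metadata” is a stand-in for a model actually trained on the format above, and the <journal=…> token is my illustration of conditioning on venue:

```python
from transformers import pipeline

# Hypothetical model trained on the metadata-prefixed format above.
generator = pipeline("text-generation", model="my-org/gpt-metadata")

# Conditioning on a future date and a real journal tells the model we
# want extrapolated real science, not science fiction about 2040.
prompt = "<date=2040-01-01><scientific=True><journal=Nature>"
out = generator(prompt, max_new_tokens=300, do_sample=True)
print(out[0]["generated_text"])
```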