The base models of GPT-3 already have the ability to “follow instructions”, it’s just veiled behind the more general interface. [...] you can see how it contains this capability somewhere.
This is a good point that I had forgotten. My mental model of this is that since many training samples are Q&A, in these cases learning to complete implies learning how to answer.
InstructGPT also has the value of not needing the wrapper of “Q: [] A: []”, but that’s not really a qualitative difference.
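The wrapper in question is just a text template. A minimal sketch of the idea (the exact format here is illustrative, not any particular model's canonical template):

```python
def wrap_as_qa(instruction: str) -> str:
    """Wrap a bare instruction in a Q/A template so that, for a base
    model, producing an answer is the natural next-token completion."""
    return f"Q: {instruction}\nA:"

# An instruct-tuned model takes the instruction bare;
# a base model gets the wrapped form.
instruct_prompt = "Translate 'bonjour' to English."
base_prompt = wrap_as_qa(instruct_prompt)
```

The transformation between the two inputs is trivial, which is part of the point being debated: the difference is in what input elicits the behavior, not in whether the behavior exists.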
I want to push back a little bit on the claim that this is not a qualitative difference; it does imply a big difference in output for identical input, even if the transformation required to get similar output between the two models is simple.
In other words, instruction following is not a new capability and the fine-tuning doesn’t really make any qualitative changes to the model. In fact, I think that you can get results [close to] this good if you prompt it really well (like, in the realm of soft prompts).
TIL about soft prompts. That’s really cool, and I’m not surprised it works (it also feels a little related to my second proposed experiment). My intuition here (transferred from RNNs, but I think it should mostly apply to unidirectional transformers as well) is that a successful prompt puts the NN into the right “mental state” to generate the desired output: fine-tuning for e.g. instruction following mostly pushes to get the model into this state from the prompts given (as opposed to e.g. for HHH behavior, which also adjusts the outputs from induced states); soft prompts instead search for and learn a “cheat code” that puts the model into a state such that the prompt is interpreted correctly. Would you (broadly) agree with this?
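For readers new to soft prompts: the idea is to prepend a short sequence of free continuous embedding vectors ("virtual tokens") to the frozen token embeddings and optimize only those vectors by gradient descent, leaving the model untouched. A toy sketch of the mechanics, with a stand-in one-layer "model" (all dimensions and names here are illustrative, not a real training setup):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model, n_soft = 100, 16, 5

# Frozen pieces: the token embedding table and a stand-in "model"
# (a single linear map to logits, for illustration only).
token_embeddings = rng.normal(size=(vocab_size, d_model))  # frozen
frozen_weights = rng.normal(size=(d_model, vocab_size))    # frozen

# The only trainable parameters: n_soft continuous "virtual token" vectors.
soft_prompt = rng.normal(size=(n_soft, d_model)) * 0.01

def forward(token_ids, soft_prompt):
    """Prepend the soft prompt to the token embeddings, then run the
    frozen model over the combined sequence."""
    x = np.concatenate([soft_prompt, token_embeddings[token_ids]], axis=0)
    return x @ frozen_weights  # shape: (n_soft + len(token_ids), vocab_size)

logits = forward(np.array([3, 7, 42]), soft_prompt)
# Training would backpropagate through `forward` but update only
# `soft_prompt`; everything else stays fixed.
```

Because the learned vectors live in embedding space rather than vocabulary space, they need not correspond to any real tokens, which is why "cheat code" is an apt description.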
I want to push back a little bit on the claim that this is not a qualitative difference; it does imply a big difference in output for identical input, even if the transformation required to get similar output between the two models is simple.
That’s fair—I meant mainly on the abstract view where you think of the distribution that the model is simulating. It doesn’t take a qualitative shift either in terms of the model being a simulator, nor a large shift in terms of the distribution itself. My point is mainly that instruction following is still well within the realm of a simulator—InstructGPT isn’t following instructions at the model-level, it’s instantiating simulacra that respond to the instructions. Which is why prompt engineering still works with those models.
a successful prompt puts the NN into the right “mental state” to generate the desired output
Yeah. Prompts serve the purpose of telling GPT what world specifically it’s in on its learned distribution over worlds, and what processes it’s meant to be simulating. It’s zeroing in on the right simulacra, or “mental state” as you put it (though it’s at a higher level of abstraction than the model itself, being a simulated process, hence why simulacra evokes a more precise image to me).
fine-tuning for e.g. instruction following mostly pushes to get the model into this state from the prompts given (as opposed to e.g. for HHH behavior, which also adjusts the outputs from induced states); soft prompts instead search for and learn a “cheat code” that puts the model into a state such that the prompt is interpreted correctly. Would you (broadly) agree with this?
The way I think of it is that with fine-tuning, you’re changing the learned distribution (both shifting it and narrowing/collapsing it) to make certain simulacra much more accessible—even without additional information from the prompt to tell the model what kind of setting it’s in, the distribution can be shifted so that instruction-following simulacra are much more heavily represented. As stated above, prompts generally give the model information about what part of the learned prior the setting is in, so soft prompts give maximal information in the prompt about what part of the prior to collapse probability onto. To match the effect of stronger fine-tuning, I would expect to need to pack correspondingly more information into the prompt.
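One way to make the shifting/collapsing picture concrete is as a toy Bayesian update over simulacra: fine-tuning changes the prior itself, while a prompt supplies evidence that concentrates the posterior. A sketch (the categories and numbers are invented purely for illustration):

```python
import numpy as np

simulacra = ["instruction-follower", "fiction-writer", "forum-poster"]

# Base model: a broad learned prior over which process is being simulated.
base_prior = np.array([0.1, 0.3, 0.6])

# Fine-tuning shifts and narrows the prior itself, before any prompt is seen.
finetuned_prior = np.array([0.8, 0.1, 0.1])

# A prompt acts as evidence: how likely this prompt is under each simulacrum.
prompt_likelihood = np.array([0.9, 0.05, 0.05])

def condition(prior, likelihood):
    """Posterior over simulacra after observing the prompt (Bayes' rule)."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

base_posterior = condition(base_prior, prompt_likelihood)
ft_posterior = condition(finetuned_prior, prompt_likelihood)

# The same prompt moves both models toward the instruction-follower, but the
# fine-tuned model starts there, so it needs far less work from the prompt.
```

On this picture, a soft prompt is a likelihood term optimized to be maximally informative, doing in the prompt what fine-tuning does in the prior.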
Take for example the case where you want to fine-tune GPT to write film scripts. You can do this with base GPT models too; it’d just be a lot harder, because the simulacra you want (film writers) aren’t as easily accessible as in a fine-tuned model, where those are the processes the prior is already focused on. But given enough prompt engineering you can get pretty powerful performance anyway, which shows that the capability is present in the base model, which can traverse a wide region of concept space with varying levels of fidelity.