I want to push back a little bit on the claim that this is not a qualitative difference; it does imply a big difference in output for identical input, even if the transformation required to get similar output between the two models is simple.
That’s fair; I was speaking mainly about the abstract view, where you think of the distribution the model is simulating. It doesn’t take a qualitative shift in terms of the model being a simulator, nor a large shift in terms of the distribution itself. My point is mainly that instruction following is still well within the realm of a simulator: InstructGPT isn’t following instructions at the model level, it’s instantiating simulacra that respond to the instructions, which is why prompt engineering still works with those models.
A successful prompt puts the NN into the right “mental state” to generate the desired output.
Yeah. Prompts serve the purpose of telling GPT which world, out of its learned distribution over worlds, it’s in, and what processes it’s meant to be simulating. It’s zeroing in on the right simulacra, or “mental state” as you put it (though that state is at a higher level of abstraction than the model itself, being a simulated process, which is why simulacra evokes a more precise image to me).
Fine-tuning for, e.g., instruction following mostly pushes the model toward getting into this state from the prompts it’s given (as opposed to, e.g., fine-tuning for HHH behavior, which also adjusts the outputs produced from the induced states); soft prompts instead search for and learn a “cheat code” that puts the model into a state in which the prompt is interpreted correctly. Would you (broadly) agree with this?
The way I think of it is that with fine-tuning, you’re changing the learned distribution (both shifting it and narrowing/collapsing it) to make certain simulacra much more accessible: even without additional information from the prompt telling the model what kind of setting it’s in, the distribution can be shifted so that instruction-following simulacra are much more heavily represented. As I said above, prompts generally give the model information about what part of the learned prior the setting is in, so soft prompts are a way of packing maximal information into the prompt about what part of the prior to collapse probability onto. To stand in for stronger fine-tuning, I’d expect to need to pack more information into the prompt.
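To make that contrast concrete, here’s a rough sketch of what soft prompt tuning looks like mechanically (the model choice, hyperparameters, and example input below are purely illustrative, not anything from this discussion): the model’s weights, and hence its learned distribution, stay frozen, and only a handful of prepended embedding vectors get optimized.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative stand-in for a larger base model.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze the model: its learned distribution is left untouched.
for p in model.parameters():
    p.requires_grad = False

# The learned "cheat code": a few free embedding vectors prepended to every input.
n_virtual_tokens = 20
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(n_virtual_tokens, embed_dim) * 0.02)

def forward_with_soft_prompt(input_ids):
    # Ordinary token embeddings, with the learned vectors stitched on in front.
    token_embeds = model.get_input_embeddings()(input_ids)              # (batch, seq, dim)
    prompt_embeds = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    inputs_embeds = torch.cat([prompt_embeds, token_embeds], dim=1)
    return model(inputs_embeds=inputs_embeds)

# Only the soft prompt receives gradient updates; training would minimize the
# usual LM loss on whatever behavior (e.g. instruction following) you want to elicit.
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

batch = tokenizer("Translate to French: cheese", return_tensors="pt")
logits = forward_with_soft_prompt(batch["input_ids"]).logits
```

Nothing about the prior changes here; the optimization is just searching input space for whatever best tells the model which part of the prior to collapse onto.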
Take, for example, the case where you want to fine-tune GPT to write film scripts. You can get film scripts out of base GPT models too; it’s just a lot harder, because the simulacra you want (film writers) aren’t as easily accessible as they are in a fine-tuned model, where those are the processes the prior is already focused on. But given enough prompt engineering you can get pretty powerful performance anyway, which shows that the capability is already present in the base model, which can traverse a wide region of concept space with varying levels of fidelity.
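As a rough illustration of what that prompt engineering might look like (the prompt text, model, and generation settings here are hypothetical sketches, not anything we actually ran): with a base model you don’t issue an instruction, you set the scene so that professional screenplay text is the most probable continuation.

```python
from transformers import pipeline

# Small base (non-instruction-tuned) model as a stand-in; purely illustrative.
generator = pipeline("text-generation", model="gpt2")

# No instruction: just enough context that professional screenplay text is the
# likeliest continuation, i.e. the film-writer simulacrum gets instantiated.
base_prompt = """The following is the opening scene of an award-winning
feature film screenplay, written in standard screenplay format.

FADE IN:

EXT. ABANDONED LIGHTHOUSE - NIGHT
"""

print(generator(base_prompt, max_new_tokens=200)[0]["generated_text"])

# An instruction-tuned model would need far less scaffolding to reach the same
# simulacrum, e.g. "Write the opening scene of a film script set in a lighthouse."
```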