Good point—I think I wasn’t thinking deeply enough about language modelling. I certainly agree that the model has to learn in the colloquial sense, especially if it’s doing something really impressive that isn’t well explained by interpolation between dataset examples—I’m imagining giving GPT-X some new mathematical definitions and asking it to produce novel proofs.
I think my confusion was rooted in the fact that you were replying to a section that dealt specifically with learning an inner RL algorithm, and the above sense of ‘learning’ is a bit different from that one. ‘Learning’ in your sense can be required for a task without requiring an inner RL algorithm; or at least, whether it does isn’t clear to me a priori.