What’s your model of “rephrasing the question”? Note that we never ask “If you got this input, what would you have done?” directly, but always ask about some property of the model’s behavior (“If you got this input, what is the third letter of your response?”). In that case, the rephrasing of the question would be something like “What is the third letter of the answer to the question <input>?”
I have the sense that being able to answer this question consistently correctly with respect to the model’s ground-truth behavior, on questions where that ground-truth behavior differs from that of other models, suggests (minimal) introspection.
That’s my current skeptical interpretation of how the fine-tuned models parse such questions, yes. They didn’t learn to introspect; rather, when prompted with queries of the form “If you got asked this question, what would be the third letter of your response?”, they learned to just interpret them as “What is the third letter of the answer to this question?”. (Under this interpretation, the non-fine-tuned models’ behavior isn’t to ignore the hypothetical, but to attempt to engage with it in some way that dramatically fails, which makes non-fine-tuned models appear “worse at introspection”.)
In this case, it’s natural that a model M1 is much more likely to answer correctly about its own behavior than some other model M2 asked about M1 is, since the problem just reduces to “is M1 likely to respond the same way it responded before if you slightly rephrase the question?”.
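To make the reduction concrete, here’s a minimal sketch of that skeptical baseline. Everything here is hypothetical scaffolding (the `rephrase_baseline` helper and the toy stand-in model are my own illustrative names, not anything from the paper): if the fine-tuned model merely reinterprets the hypothetical as the direct question, its “introspective” accuracy should match this pipeline, which involves no self-modeling at all.

```python
# Skeptical baseline: answer the hypothetical "If you got <q>, what would be
# the third letter of your response?" by just answering <q> directly and
# reading off the queried property.

def rephrase_baseline(model, question):
    """Interpret the hypothetical as the direct question: answer it,
    then return the third letter of that direct answer."""
    direct_answer = model(question)
    return direct_answer[2]  # third letter, 0-indexed position 2

def introspection_accuracy(model, hypothetical_answerer, questions):
    """Fraction of questions where the 'hypothetical' answer matches the
    ground-truth property of the model's own (deterministic) behavior."""
    correct = 0
    for q in questions:
        ground_truth = model(q)[2]  # third letter of actual behavior
        if hypothetical_answerer(model, q) == ground_truth:
            correct += 1
    return correct / len(questions)

# Toy deterministic stand-in for M1 (an assumption for illustration only):
toy_m1 = {"capital of France?": "Paris", "2+2?": "four"}.get

questions = ["capital of France?", "2+2?"]
print(introspection_accuracy(toy_m1, rephrase_baseline, questions))  # 1.0
```

For a deterministic model the baseline is perfect by construction, which is the point: the reduction makes “introspective” accuracy trivially high, and any gap between M1-about-M1 and M2-about-M1 falls out of M1 agreeing with its own past answers, not out of self-knowledge.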
Note that I’m not sure that this is what’s happening. But (1) I’m a priori skeptical of LLMs having these introspective abilities, and (2) the introspection-training procedure secretly teaching the models to just ignore hypotheticals seems like exactly the sort of goal-misgeneralizing SGD shortcut that tends to happen. Or would this strategy actually do worse on your dataset?