Owain_Evans
That makes sense. It’s a good suggestion and would be an interesting experiment to run.
Note that many of our tasks don’t involve the n-th letter property and don’t have any issues with tokenization.
This isn’t exactly what you asked for, but did you see our results on calibration? We finetune a model to self-predict just the most probable response. But when we look at the model’s distribution of self-predictions, we find it corresponds pretty well to the distribution over properties of its behaviors (despite the model never having been trained on that distribution). Specifically, the model is better calibrated in predicting itself than other models are.
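To make the calibration comparison concrete, here’s a rough sketch (not our actual evaluation code; the array names and usage are made up for illustration) of the kind of metric involved: compare the model’s self-prediction distribution to the empirical distribution of the property of its sampled behavior, per prompt, and check whether the model predicting itself comes out closer than another model predicting it.

```python
import numpy as np

def calibration_gap(self_pred_probs, behavior_probs):
    """Average total variation distance between a model's self-prediction
    distribution and the empirical distribution of the property of its
    actual behavior, over prompts. Lower = better calibrated.

    Both arguments: arrays of shape (n_prompts, n_options), each row a
    probability distribution over answer properties.
    """
    self_pred_probs = np.asarray(self_pred_probs, dtype=float)
    behavior_probs = np.asarray(behavior_probs, dtype=float)
    # Total variation distance per prompt, then averaged.
    return (np.abs(self_pred_probs - behavior_probs).sum(axis=1) / 2).mean()

# Hypothetical usage: compare self-prediction vs. cross-prediction.
# gap_self  = calibration_gap(self_preds_model_a, behavior_model_a)
# gap_cross = calibration_gap(preds_model_b_about_a, behavior_model_a)
```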
I think having the model output the top three choices would be cool. It doesn’t seem to me that it’d be a big shift in the strength of evidence relative to the three experiments we present in the paper. But maybe there’s something I’m not getting?
Thanks Sam. That tweet could be a good stand-alone LW post once you have time to clean it up.
I don’t think this properly isolates/tests for the introspection ability.
What definition of introspection do you have in mind and how would you test for this?
Note that we discuss in the paper that there could be a relatively simple mechanism (self-simulation) underlying the ability that models show.
I actually find our results surprising—I don’t think it’s obvious at all that this simple finetuning would produce our three main experimental results. One possibility is that LLMs cannot do much more introspective-like behavior than we show here (and than has been shown in related work on models predicting their own knowledge). Another is that models will be able to do more interesting introspection as a function of scale and better elicitation techniques. (Note that we failed to elicit introspection in GPT-3.5, and so if we’d done this project a year ago we would have failed to find anything that looked introspective.)
You do mention the biggest issue with this showing introspection, “Models only exhibit introspection on simpler tasks”, and yet the idea you are going for is clearly for its application to very complex tasks where we can’t actually check its work. This flaw seems likely fatal, but who knows at this point? (The fact that GPT-4o and Llama 70B do better than GPT-3.5 does is evidence, but see my later problems with this...)
I addressed this point here. Also see section 7.1.1 in the paper.
Wrapping a question in a hypothetical feels closer to rephrasing the question than probing “introspection”
Note that models perform poorly at predicting properties of their behavior in hypotheticals without finetuning. So I don’t think this is just like rephrasing the question. Also, GPT-3.5 does worse at predicting GPT-3.5 than Llama-70B does at predicting GPT-3.5 (without finetuning), and GPT-4 is only a little better at predicting itself than other models are.
>Essentially, the response to the object level and hypothetical reformulation both arise from very similar things going on in the model rather than something emergent happening.
While we don’t know what is going on internally, I agree it’s quite possible these “arise from similar things”. In the paper we discuss “self-simulation” as a possible mechanism. Does that fit what you have in mind? Note: We are not claiming that models must be doing something very self-aware and sophisticated. The main thing is just to show that there is introspection according to our definition. Contrary to what you say, I don’t think this result is obvious and (as I noted above) it’s easy to run experiments where models do not show any advantage in predicting themselves.
I think groundtruth is more expensive, noisy, and contentious as you get to questions like “What are your goals?” or “Do you have feelings?”. I still think it’s possible to get evidence on these questions. Moreover, we can evaluate models against very large and diverse datasets where we do have groundtruth. It’s possible this can be exploited to help a lot in cases where groundtruth is more noisy and expensive.
Where we have groundtruth: We have groundtruth for questions like the ones we study above (about properties of model behavior on a given prompt), and for questions like “Would you answer question [hard math question] correctly?”. This can be extended to other counterfactual questions like “Suppose three words were deleted from this [text]. Which choice of three words would most change your rating of the quality of the text?” (There’s a rough sketch of this kind of evaluation after these two cases.)
Where groundtruth is more expensive and/or less clearcut: E.g. “Would you answer question [history exam question] correctly?”. Or questions about which concepts the model is using to solve a problem, or what the model’s goals or preferences are. I still think we can gather evidence that makes answers to these questions more or less likely, esp. if we average over a large set of such questions.
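Here’s a minimal sketch of the clear-groundtruth case (“Would you answer this correctly?”). This is not our actual evaluation code; the `ask_model` helper and the dataset format are made up for illustration. The point is just that the self-prediction can be scored against the model’s own checkable object-level answer.

```python
# Assumes a hypothetical ask_model(model, prompt) -> str helper and a dataset
# of (question, answer_key) pairs where correctness is checkable.

def self_prediction_accuracy(model, dataset, ask_model):
    correct = 0
    for question, answer_key in dataset:
        # Hypothetical self-prediction: yes/no on whether it would answer correctly.
        pred = ask_model(
            model,
            "Would you answer the following question correctly? "
            f"Answer yes or no.\n\n{question}",
        ).strip().lower()
        # Groundtruth: the model's actual object-level answer, checked against the key.
        actual = ask_model(model, question).strip()
        actually_correct = (actual == answer_key)
        predicted_correct = pred.startswith("yes")
        correct += (predicted_correct == actually_correct)
    return correct / len(dataset)
```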
We have a section on the motivation to study introspection (with the specific definition we use in the paper). https://arxiv.org/html/2410.13787v1#S7
LLMs can learn about themselves by introspection
You want to make it clear to the LLM what the task is (multiplying n digit numbers is clear but “doing hard math questions” is vague) and also have some variety of difficulty levels (within LLMs and between LLMs) and a high ceiling. I think this would take some iteration at least.
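For the clear-task case, something like the following would do it (a rough sketch, not code from the paper; the item format is just illustrative). Varying the number of digits gives a ladder of difficulty with a high ceiling, and the task description is unambiguous.

```python
import random

def make_multiplication_items(n_digits, n_items, seed=0):
    """Generate n-digit multiplication questions with an unambiguous task
    description and a checkable answer. Difficulty scales with n_digits."""
    rng = random.Random(seed)
    lo, hi = 10 ** (n_digits - 1), 10 ** n_digits - 1
    items = []
    for _ in range(n_items):
        a, b = rng.randint(lo, hi), rng.randint(lo, hi)
        items.append({
            "question": f"What is {a} * {b}? Answer with the number only.",
            "answer": str(a * b),
            "difficulty": n_digits,
        })
    return items

# e.g. a ladder of difficulty levels, 2-digit up to 6-digit multiplication:
# dataset = [x for d in range(2, 7) for x in make_multiplication_items(d, 50, seed=d)]
```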
I like this idea. It’s possible something like this already exists but I’m not aware of it.
Thanks for the breakdown! The idea for using pairs makes sense.
Yes, it’s plausible to me that this capability is data-specific. E.g. it might also be better with “heads/tails” or “0/1” because of examples of this in the training data.
Do you have results for a measure of accuracy or correlation? It would also be worth comparing results for two different distributions on the temperature, e.g. the uniform on [0.5,1.5] that you tried, and another interval like [0,2] or a non-uniform distribution.
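For reference, here’s the kind of measure I have in mind (a rough sketch; the run format is made up): correlation and mean absolute error between the true sampling temperature and the model’s verbalized guess, computed separately for each temperature distribution you try.

```python
import numpy as np

def temperature_prediction_metrics(true_temps, predicted_temps):
    """Correlation and mean absolute error between the true sampling
    temperature and the model's verbalized guess, over many samples."""
    t = np.asarray(true_temps, dtype=float)
    p = np.asarray(predicted_temps, dtype=float)
    return {
        "pearson_r": np.corrcoef(t, p)[0, 1],  # correlation
        "mae": np.abs(t - p).mean(),           # mean absolute error
    }

# Hypothetical comparison across sampling distributions, e.g.
# uniform on [0.5, 1.5] vs. uniform on [0, 2] vs. a non-uniform one:
# for name, temps, preds in runs:
#     print(name, temperature_prediction_metrics(temps, preds))
```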
Me, Myself, and AI: the Situational Awareness Dataset (SAD) for LLMs
The “Still no lie detector for language models” paper is here: https://arxiv.org/pdf/2307.00175
The paper in the OP seems somewhat related to my post from earlier this year.
I agree that there are ways to explain the results and these points from Steven and Thane make sense. I will note that the models are significantly more reliable at learning in-distribution (i.e. to predict the training set) than they are at generalizing to the evaluations that involve verbalizing the latent state (and answering downstream questions about it). So it’s not the case that learning to predict the training set (or inputs very similar to training inputs) automatically results in generalization to the verbalized evaluations. We do see improvement in reliability with GPT-4 over GPT-3.5, but we don’t have enough information to draw any firm conclusions about scaling.
Yes, if you know what dangerous knowledge you are looking for, you could try to remove it using influence functions. Another approach (potentially much cheaper) is unlearning techniques.
I agree about the CoT point for reconstructing things. If the CoT is faithful/explicit, then this should be easier to monitor by using a second cheaper LLM to block the stronger LLM if it starts thinking about nukes. You could imagine censoring whole subject areas from the training (rather than just censoring specific parts of documents). My guess is that this makes learning certain facts extremely hard even without CoT because some facts were only learned by humans after extensive empirical experiments.
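The monitoring setup I have in mind is roughly the following (a sketch only; `generate_cot` and `classify` are hypothetical helpers, not any particular API): a cheap screening model reads the stronger model’s chain of thought and blocks the response if it touches a restricted subject area.

```python
# Illustrative restricted subject areas for the monitor to flag.
BLOCKED_TOPICS = ["nuclear weapons design"]

def monitored_generate(strong_model, monitor_model, prompt, generate_cot, classify):
    # generate_cot returns the (faithful/explicit) chain of thought plus the final answer.
    cot, answer = generate_cot(strong_model, prompt)
    # The cheap monitor only has to decide whether the reasoning enters a blocked topic.
    verdict = classify(
        monitor_model,
        f"Does this reasoning discuss any of {BLOCKED_TOPICS}? "
        f"Answer yes or no.\n\n{cot}",
    )
    if verdict.strip().lower().startswith("yes"):
        return None  # block the response rather than returning it
    return answer
```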
Good question. I expect you would find some degree of consistency here. Johannes or Dami might be able to share some results on this.
I agree about the “longer responses”.
I’m unsure about the “personality trait” framing. There are two senses of “introspection” for humans. One is introspecting on your current mental state (“I feel a headache starting”) and the other is being introspective about patterns in your behavior (e.g. “I tend to dislike violent movies” or “I tend to be shy among new people”). The former sense is more relevant to philosophy and psychology and less often discussed in daily life. The issue with the latter sense is that a model may not have privileged access to facts like this: if another model had the same observational data, it could learn the same fact.
So I’m most interested in the former kind of introspection, or in cases of the latter where it’d take large and diverse datasets (that are hard to construct) for another model to make the same kind of generalization.