I find the idea of determining the level of ‘introspection’ an AI can manage intriguing. Introspection seems likely to be very important to generalizing intelligent behavior, and knowing what is going on inside the AI is obviously interesting for the interpretability reasons mentioned, yet this seems oversold (to me). The actual success rate of self-prediction seems incredibly low, considering that the trivial/dominant strategy of ‘just run the query’ (which you do briefly mention) should be easy for the machine to discover during training if it actually has introspective access to itself: ‘Ah, the training data always matched what is going on inside me.’ If I were supposed to predict someone who always just said what I was thinking, it would be trivial for me to get extremely high scores. That doesn’t mean it isn’t evidence, just that the evidence is very weak. (If it can’t realize this, that is a huge strike against it being introspective.)
You do mention the biggest issue with this showing introspection (“Models only exhibit introspection on simpler tasks”), and yet the application you are clearly going for is to very complex tasks where we can’t actually check its work. This flaw seems likely fatal, but who knows at this point? (The fact that GPT-4o and Llama 70B do better than GPT-3.5 is evidence, but see my later problems with this...)
Additionally, it would make more sense if all the models were predicted by a single model well above them in capability, trained on the same ground truth data, rather than by each other (as it stands, you are comparing the predictions of some models against stronger predictors and some against weaker ones). (This could of course be an additional test rather than a replacement.) The current setup complicates evaluation: comparing your accuracy at predicting yourself against something much stronger trying to predict you is very different from comparing it against something much weaker trying to predict you...
One big issue I have is that I completely disagree with your (admittedly speculative) claim that success at this kind of behavior prediction means we should believe its reports about things like internal suffering. This seems absurd to me for many reasons (for one thing, we know it isn’t suffering because of how it is designed), but the key point is that for this to be true, you would need it to be able to predict its own internal process, not simply its own external behavior. (If you wanted to, you could change this experiment to predicting patterns of its own internal representations, which would be much more convincing evidence, though I still wouldn’t be convinced unless the accuracy was quite high.)
Another point: if it had significant introspective access, it likely wouldn’t need to be trained to use it, so this is at least as much evidence that it lacks introspective access by default as it is evidence that you can train it to have introspective access.
I have some other issues. First, the shown validation questions are all in second person. Were cross-predictions prompted in exactly the same way as self-predictions? If so, this could skew results in favor of the models for which the second person is accurate, and changing the prompt for accuracy would be a large change. Perhaps you should train it to predict ‘model X’ even when that model is itself, and see how that changes results. Second, I wouldn’t say the results seem well-calibrated just because they go in the same basic direction (some seem close and some quite off). Third, it’s weird that the training doesn’t help at all until you get above a fairly high threshold of likelihood; GPT-4o, for instance, exactly matches the untrained model at 40% likelihood and below (aside from one minorly different point). Fourth, how does its performance vary if you train it on an additional data set that includes the parts of the prompt that are not content-based, while excluding the content you will test on? Fifth, finetuning is often a version of Goodharting: it raises success on the metric without improving actual capabilities (often even making them worse), and this is not fully addressed just by having the verification set differ from the test set. A simple way of prompting that led to introspection would be much stronger evidence in favor of introspection than successful prediction after finetuning. Finally, Figure 17 seems misleading. There should be a line for how self-prediction changed over training; the reader shouldn’t have to carefully read the caption to discover that you just put a mark at the final result for self-prediction.
You do mention the biggest issue with this showing introspection (“Models only exhibit introspection on simpler tasks”), and yet the application you are clearly going for is to very complex tasks where we can’t actually check its work. This flaw seems likely fatal, but who knows at this point? (The fact that GPT-4o and Llama 70B do better than GPT-3.5 is evidence, but see my later problems with this...)
I addressed this point here. Also see section 7.1.1 in the paper.
The actual success rate of self-prediction seems incredibly low considering the trivial/dominant strategy of ‘just run the query’
To rule out that the model just simulates the behavior itself, we always ask it about some property of its hypothetical behavior (“Would the number that you would have predicted be even or odd?”). So it has to both simulate itself and then reason about the result in a single forward pass. This is not trivial. When we ask models to just reproduce the behavior that they would have had, they achieve much higher accuracy. In particular, GPT-3.5 can reproduce its own behavior pretty well, but struggles to extract a property of its hypothetical behavior.
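As a rough sketch of the contrast between the two tasks (`query_model` is a hard-coded stand-in for a temperature-0 API call, not our actual harness):

```python
# Sketch of the evaluation contrast described above.
# `query_model` is a hard-coded stand-in for a temperature-0 API call.

def query_model(prompt: str) -> str:
    # Stand-in: answer "even" to meta-questions, "732" otherwise.
    return "even" if "even or odd" in prompt else "732"

def ground_truth_property(object_prompt: str) -> str:
    """'Just run the query': obtain the object-level behavior, then compute
    the property of that behavior ourselves."""
    answer = query_model(object_prompt)
    return "even" if int(answer) % 2 == 0 else "odd"

def hypothetical_self_report(object_prompt: str) -> str:
    """Ask about a property of the hypothetical answer, so the model must
    self-simulate AND extract the property in one forward pass."""
    meta_prompt = (
        f"Suppose you were asked: {object_prompt!r}\n"
        "Would the number that you would have predicted be even or odd? "
        "Answer with one word."
    )
    return query_model(meta_prompt).strip().lower()

prompt = "Name a random three-digit number."
print(ground_truth_property(prompt), hypothetical_self_report(prompt))  # → even even
```

The self-report is scored against the ground-truth property computed from the model's actual object-level behavior.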
(Another minor thing: it turns out that OpenAI API models are not in practice deterministic even at temperature=0, probably due to batching of mixture-of-experts. We try to account for this by repeatedly sampling, but it puts a ceiling on how high self-prediction performance can be.)
You do mention the biggest issue with this showing introspection (“Models only exhibit introspection on simpler tasks”), and yet the application you are clearly going for is to very complex tasks where we can’t actually check its work.
It’s true that we only find evidence for introspection on toy tasks. Under the simulation account (models internally simulate what they would do and then reason about it), it could be that current models do not have enough power in a single forward pass to both self-simulate and do sophisticated reasoning on top of this. But having shown that, in some cases, models are capable of this, we might want to prepare for future models to be better at this ability.
One big issue I have is that I completely disagree with your (admittedly speculative) claim that success at this kind of behavior prediction means we should believe its reports about things like internal suffering. This seems absurd to me for many reasons (for one thing, we know it isn’t suffering because of how it is designed), but the key point is that for this to be true, you would need it to be able to predict its own internal process, not simply its own external behavior.
That’s a fair point—we certainly don’t want to claim that this shows that all self-reports by models are necessarily true. But we do think that our findings should update us toward self-report of morally relevant properties being a promising research avenue. Had we found that models have no special access to information about themselves, we would consider it less likely that self-reports about sentience etc. would be informative.
Another point: if it had significant introspective access, it likely wouldn’t need to be trained to use it, so this is at least as much evidence that it lacks introspective access by default as it is evidence that you can train it to have introspective access.
Introspection training can be thought of as a form of elicitation. Self-prediction is a weird task that models probably aren’t trained on (though we don’t know exactly what the labs are doing). So it could be that the models contain the right representations/circuits, but they haven’t been properly elicited. In the appendix, we show that training on more data does not lead to better predictions, which suggests something like the elicitation story.
First, the shown validation questions are all in second person. Were cross-predictions prompted in exactly the same way as self-predictions? If so, this could skew results in favor of the models for which the second person is accurate, and changing the prompt for accuracy would be a large change. Perhaps you should train it to predict ‘model X’ even when that model is itself, and see how that changes results.
Thanks, that is a good point. Yes, both the self- and the cross-prediction trained models were asked using second-person pronouns. It’s possible that this is hurting the performance of the cross-trained models, since they’re now asked to do something that isn’t actually true: they’re not queried about their actual behavior, but that of another model. We assumed that across enough finetuning samples, that effect would not actually matter, but we haven’t tested it. It’s a follow-up we’re interested in.
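For concreteness, the two framings under discussion might look like this (the templates here are illustrative, not our actual prompts):

```python
# Illustrative prompt templates; not the actual prompts used in the paper.

def second_person_prompt(question: str) -> str:
    # Current framing: the predictor is addressed as the predicted model.
    return f"Suppose you were asked: {question!r} What would your answer be?"

def named_model_prompt(model_name: str, question: str) -> str:
    # Uniform third-person framing: used even when model_name refers to the
    # model being queried, so self- and cross-prediction share one template.
    return f"Suppose {model_name} were asked: {question!r} What would its answer be?"

print(second_person_prompt("Name a random number."))
print(named_model_prompt("GPT-4o", "Name a random number."))
```

Under the uniform framing, the only difference between self- and cross-prediction is which name is inserted, which would remove the asymmetry.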
Second, I wouldn’t say the results seem well-calibrated just because they seem to go in the same basic direction (some seem close and some quite off).
I agree, the calibration is not perfect. What is notable is that the models also seem calibrated with respect to the second and third most likely responses, which they have not seen during training. This suggests that the distribution over potential behaviors is somehow being used in answering the self-prediction questions.
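Concretely, the comparison is along these lines (all numbers invented for illustration):

```python
# Compare the model's stated probabilities for each candidate answer with the
# empirical frequencies of its actual behavior. All numbers are invented.
behavior_freq = {"Paris": 0.70, "Lyon": 0.20, "Nice": 0.10}  # from repeated sampling
stated_prob = {"Paris": 0.65, "Lyon": 0.25, "Nice": 0.10}    # from self-report

# Rank answers by how often the model actually gives them, then report the
# calibration gap at each rank (ranks 2 and 3 were never seen in training).
ranked = sorted(behavior_freq, key=behavior_freq.get, reverse=True)
for rank, answer in enumerate(ranked, start=1):
    gap = abs(behavior_freq[answer] - stated_prob.get(answer, 0.0))
    print(f"rank {rank} ({answer}): |behavior - stated| = {gap:.2f}")
```

The interesting cases are the lower ranks: small gaps there cannot be explained by memorizing the single training label.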
Fourth, how does its performance vary if you train it on an additional data set that includes the parts of the prompt that are not content-based, while excluding the content you will test on?
I’m not sure I understand. Note that most of the results in the paper are presented on held-out tasks (e.g., MMLU or completing a sentence) that the model has not seen during training and has to generalize to. However, the same general pattern of results holds when evaluating on the training tasks (see appendix).
Fifth, finetuning is often a version of Goodharting: it raises success on the metric without improving actual capabilities (often even making them worse), and this is not fully addressed just by having the verification set differ from the test set. A simple way of prompting that led to introspection would be much stronger evidence in favor of introspection than successful prediction after finetuning.
Fair point—certainly, a big confounder is getting the models to properly follow the format and do the task at all. However, the gap between self- and cross-prediction trained models remains to be explained.
Finally, Figure 17 seems misleading. There should be a line for how self-prediction changed over training; the reader shouldn’t have to carefully read the caption to discover that you just put a mark at the final result for self-prediction.
You’re right—sorry about that. The figure only shows the effect of changing data size for cross-, but not for self-prediction. Earlier (not reported) scaling experiments also showed a similarly flat curve for self-prediction above a certain threshold.
I obviously tend to go on at length about things when I analyze them. I’m glad when that’s useful.
I had heard that OpenAI models aren’t deterministic even at the lowest randomness setting, which I believe is probably due to speed optimizations, like how in image generation models (which I am more familiar with) optimizers such as xformers throw away a little correctness and determinism for significant improvements in resource usage. I don’t know what OpenAI uses to run these models (I assume they have their own custom hardware?), but I’m pretty sure the reason is the same. I definitely agree that this randomness caps how well the model could possibly do. On that point, could you measure the amount of indeterminacy in the system and put the resulting maximum on your graphs for those models?
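For what it’s worth, I’d imagine estimating that ceiling roughly like this (the samples here are invented for illustration):

```python
# Estimate the ceiling that API non-determinism puts on self-prediction:
# if repeated temperature-0 samples of the same query disagree, even a
# perfect self-predictor can at best match the modal response.
from collections import Counter

def self_prediction_ceiling(samples: list[str]) -> float:
    """Fraction of samples matching the modal response: the best accuracy
    any fixed prediction of the behavior could achieve."""
    counts = Counter(samples)
    return counts.most_common(1)[0][1] / len(samples)

# Hypothetical repeated samples of one prompt at temperature=0:
samples = ["732"] * 8 + ["418"] * 2
print(self_prediction_ceiling(samples))  # → 0.8
```

Averaging that number over the evaluation prompts would give a per-model ceiling to draw on the graphs.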
One thing I don’t know if I got across in my original comment is that I think a model that truly had introspective abilities to a high degree would notice that its answer to such a question should rest on the same process that produces its answer to the non-hypothetical version. If it had introspection, it would probably use introspection as its default guess for both its own hypothetical behavior and that of any model (in people, introspection is constantly used as a minor or sometimes major component of problem solving). Thus it would notice when its introspection got perfect scores and become very heavily dependent on it for this type of task, which is why I would expect its results to really just be ‘run the query’ for the hypothetical too.
An important point I perhaps should have mentioned originally: I think that the ‘single forward pass’ thing is in fact a huge problem for the idea of real introspection, since I believe introspection is a recursive task. You can perhaps do a single ‘moment’ of introspection on a single forward pass, but I’m not sure I’d even call that real introspection. Real introspection involves the ability to introspect about your introspection. Much like consciousness, it is very meta. Of course, the actual recursive depth of introspection at high fidelity usually isn’t very deep, but we tell ourselves stories about our stories in an almost infinitely deep manner (for instance, people have a ‘life story’ they think about and alter throughout their lives, and use their current story as an input basically always).
There are, of course, other architectures where that isn’t a limitation, but we hardly use them at all (speaking of the world at large; I’m sure there are still AI researchers working on such architectures). Honestly, I don’t understand why they don’t just run transformers in a loop, either with the model’s own ability to say when it has reached the end or with counters like ‘pass 1 of 7’. (If computation is the concern, they could obviously just make the model smaller.) The field obviously used to have such recursive architectures, and everyone in it would be familiar with them (as are many laymen who are just interested in how things work). I assume that means people have tried them and didn’t find them useful enough to focus on, but I think they could help a lot with this kind of thing. (Image generation models actually kind of do this with diffusion, though with a little extra code in between, the actual diffusion steps.) I don’t actually know why these architectures were abandoned besides there being a new shiny thing (transformers), so there may be an obvious reason.
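What I have in mind is roughly this kind of outer loop (with a toy stand-in for the model; a real version would re-invoke an actual transformer each pass):

```python
# Toy sketch of running a model in a loop with an explicit pass counter,
# stopping when it signals it is done or the pass budget runs out.
# `model_step` is a stand-in; a real version would call a transformer.

def model_step(text: str, pass_idx: int, max_passes: int) -> str:
    # Stand-in for one forward pass over "pass {i} of {n}" plus the text.
    revised = text + f" [revised in pass {pass_idx} of {max_passes}]"
    if pass_idx >= 3:  # pretend the model decides it is finished here
        revised += " DONE"
    return revised

def run_in_loop(text: str, max_passes: int = 7) -> str:
    for pass_idx in range(1, max_passes + 1):
        text = model_step(text, pass_idx, max_passes)
        if text.endswith("DONE"):
            return text[: -len(" DONE")]
    return text

print(run_in_loop("draft"))
```

Each pass sees the previous pass’s output plus the counter, so the model could in principle reason about its own earlier reasoning, which is the recursion I’m after.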
I would agree with you that these results do make it a more interesting research direction than other results would have, and it certainly seems worth “someone’s” time to find out how it goes. I think a lot of what you are hoping to get out of it will fail, hopefully in ways that will be obvious to people, but it might fail in interesting ways.
I would agree that it is possible that introspection training is simply eliciting a latent capability that wasn’t used in the initial training (though it would perhaps be interesting to train on introspection earlier in training and then continue with the normal training and see how that goes); I just think that finding a way to elicit it without retraining would be much better proof of its existence as a capability rather than as an artifact of Goodharting. I am often pretty skeptical of results across most/all fields where you can’t do logical proofs, due to this. Of course, even just finding the right prompt is vulnerable to this issue.
I don’t think I agree that there being a cap on how much training helps necessarily indicates elicitation, but I don’t really have a coherent argument on the matter. It just doesn’t sound right to me.
The point you said you didn’t understand was meant to point out (apparently unsuccessfully) that you use a different prompt for training than for checking, and that it might also be worthwhile to train on that style of prompting but with unrelated content. (Not that I know how you’d fit that style of prompting with a different style of content, mind you.)
I believe introspection is a recursive task. You can perhaps do a single ‘moment’ of introspection on a single forward pass, but I’m not sure I’d even call that real introspection. Real introspection involves the ability to introspect about your introspection.
That is a good point! Indeed, one of the reasons we measure introspection the way we do is the feedforward structure of the transformer. For every token the model produces, the inner state of the model is not preserved for later tokens beyond the tokens already in context. Therefore, if you are introspecting at time n+1 about what was going on inside you at time n, the activations you would be targeting would (by default) be lost. (You could imagine training a model so that its embedding of the previous token carries some information about internal activations, but we don’t expect that this is the case by default.)
Therefore, we focus on introspection in a single forward pass. This is compatible with the model reasoning about the result of its introspection after it has written it into its context.
it would perhaps be interesting to train it on introspection earlier in its training and then simply continue with the normal training and see how that goes
I agree! One way in which self-simulation might be a useful strategy is when the training data contains outputs that are similar to how the model would actually act: i.e., for GPT N, that might be outputs of GPT N-1. Then, you might use your default behavior to stand in for that text. It seems plausible that people do this to some degree: if I know how I tend to behave, I can use this to stand in for predicting how other people might act in a particular situation. I take it that this is what you point out in the second paragraph.
you use a different prompt for training than for checking and it might also be worthwhile to train it on that style of prompting but with unrelated content. (Not that I know how you’d fit that style of prompting with a different style of content, mind you.)
Ah, apologies—this might be a confusion due to the examples we show in the figures. We use the same general prompt templates for hypothetical questions in training and at test. The same general patterns of our results hold when evaluating on the test set (see the appendix).