Some quick thoughts (only skimmed the post, writing quickly), as you asked for feedback:
It looks like the main thing you’re testing is some variant of “when prompted to do goal directed behavior, how effective is the model at satisfying the goal?” That’s a reasonable thing to investigate, but I’m not sure it would be near the top of the list of “empirical research on goal-directed-ness that I want to see”. I’m probably mainly interested in the deceptive alignment motivation, read the rest of this comment as focusing on that.
Aside: To state it directly, I think the main reason to study goal-directedness in this lower-validity setting (of giving models goals in prompts) is that CoT-based goal-directedness might act as a precursor for in-forward-pass goal directedness (which seems far more worrying re deceptive alignment) — so we can study it earlier. So again, reasonable to study, but if you agree with me that this is the main reason for such experiments being valid, it’s an important frame to have when thinking about this kind of work: artificially inducing goal directedness is a model-organism approach rather than a natural experiment.
Thinking out loud, a list for goal-directedness work I want to see might be; sub-bullets are more detailed ideas:
Are base models goal directed? Are RLHF finetuned models goal directed? (naturalistic setting)
Could look like this recent work on the consistency of model answers to values questions, but more adapted to goals you hypothesize the models to have (like how consistently do models follow a particular behavior outlined in the Model Spec)
How do RLHFed models deal with conflicting goals — do they engage in sophisticated reasoning about this or instead seem to follow simple heuristics?
To the extent these models are goal directed (including because you induce this via prompting), is anything interesting going on:
Do they goal generalize the way we would expect? Similar to this recent work, but aimed at some risk other than reward hacking, I would be particularly interested in the time horizon over which the world is affected, as this is perhaps a proxy for a model having non-myopic goals.
As mentioned, maybe CoT and prompting provide an early warning sign for forward-pass goal-directed-ness. Is this true? How much of an early warning sign?
It looks like the settings in this post are sorta a general capability eval for a model accomplishing goals. I wonder if you think they add a ton of value over existing agent benchmarks like SWE-Bench? My intuition says you would be better off on trying to focus in on a narrower question that is particularly relevant to safety, like one of those I mentioned.
Sorry if this comment was rude or mean, it’s been a couple weeks and this post has no feedback even though you asked, I figured something might be better than nothing. It looks to me like your overall approach and ways of thinking about this are good!
Thank you for the detailed feedback, I found this very helpful and not at all rude or mean!
I suspect there are a few key disagreements between us that make me more optimistic about this project setup than you. I’d be curious about whether you agree on these points being important cruxes:
Though I agree that our work primarily belongs within the model organisms paradigm, I disagree that it’s only useful as a means to study in-forward-pass goal-directedness. I think there’s a considerable chance that the Translucent Thoughts hypotheses are true and AGI will be reached with an architecture very similar to current transformers, which I wouldn’t expect to be able to do dangerous goal-directed reasoning within a single forward pass due to the limited number of serial reasoning steps. If this is the case, CoT-based goal-directedness seems relevant to study for its own sake. We’re still studying an artificial scenario in the sense that we’re giving the models the goal ourselves, while in the deceptive alignment scenarios, the model would have to develop the goal on its own, but I don’t think we necessarily have to say anything about in-forward-pass goal-directedness for our work to be applicable to realistic threat models.
Studying the intrinsic goals of the models in naturalistic settings would also be interesting, but I expect this to be hard, and also less relevant to the threat models we’re worried about. I’m mainly concerned about models becoming more goal-directed with scale and someone figuring out long-horizon RL training on top of that, leading to models that are qualitatively different from current ones. To address this, I’d like to have a benchmark that decomposes goal-directedness into several relevant abilities and provides an accurate score for these abilities across model generations. This would help us better track how models become more goal-directed as they scale, if at all. Additionally, if a new model scores close to humans on many of these abilities, it could serve as a strong alarm bell that we are approaching dangerously goal-directed models. It seems easier to build this sort of benchmark without going through the complicated process of trying to infer the intrinsic goals of the models.
The aforementioned decomposition of goal-directedness into various relevant abilities would also be the main value added on top of existing agent benchmarks. We should maybe have been clearer in the post about planning to develop such a decomposition. Since it’s easy to evaluate for goal-fulfillment, that was our main focus in the early stages of the project, but eventually, we’re hoping to decompose goal-directedness into several abilities such as instrumental reasoning ability, generalization to OOD environments, coherence, etc, somewhat analogously to how the Situational Awareness Dataset decomposes situational awareness into self-knowledge, inferences, and actions.
I definitely agree that it would be interesting to compare the goal-directedness of base models and fine-tuned models, and this is something we’re planning to eventually do if our compute budget permits. Similarly, I strongly agree that it would be interesting to study whether anything interesting is going on in the situations where the models exhibit goal-directed behavior, and I’m very interested in looking further into your suggestions for that!
Yep, I basically agree with those being the cruxes!
On how much of the goal reasoning is happening out loud: Nowadays, I think about a lot of AI safety research as being aimed at an AI Control scenario where we are closely supervising what models are thinking about in CoT, and thus malign goal seeking must either happen in individual forward passes (and translated to look harmless most the time) or in an obfuscated way in CoT. (or from rogue deployments outside the control scheme)
Studying the intrinsic goals of the models in naturalistic settings would also be interesting, but I expect this to be hard, and also less relevant to the threat models we’re worried about. I’m mainly concerned about models becoming more goal-directed with scale and someone figuring out long-horizon RL training on top of that, leading to models that are qualitatively different from current ones.
By naturalistic, I mean “from a realistic training process, even if that training is designed to create goals”. Which sounds like what you said is the main threat model you’re worried about? If you have the time, I would push you harder on this: what is a specific story of AI catastrophe that you are trying to study/prevent?
I really appreciate the thoughtful replies and feedback—I also didn’t read any of your comments as rude or mean! I’d like to clarify a few points about our approach and its relevance:
We expect that good CoT reasoning helps current models better pursue and achieve goals. In the future, all of the reasoning needed to do this effectively might happen in the forward pass. However, we think it’s likely that AGI could be achieved with models similar to current SOTA models, where sophisticated reasoning for goal pursuit will still need to happen in CoT. Even if this bet is wrong and models can do this reasoning entirely in the forward pass, our evals should still reveal goal-directed behavior because we assess actions the model takes that are instrumental to achieving the goal (not just the sensibility of the reasoning traces).
Regarding AI control scenarios, we understand that with good oversight methods, deceptive goal-pursuit would need to happen either in the forward pass or through obfuscated reasoning in CoT. While our scoring of reasoning traces may become less relevant as models improve at internal reasoning, our other metrics focus on how well the model carries out steps and decides on instrumental actions that lead to goal achievement. We believe this serves as a good proxy for goal-directedness, even if we can’t directly observe how intrinsic goals arise or manifest from the training process.
A potential risk scenario we’re considering is one where a highly capable model is developed that doesn’t show obvious signs of misalignment in standard tests (e.g. typical use or general reasoning benchmarks) but has strong goal-pursuit capabilities. Without evaluations that specifically probe goal-directedness, such a model could potentially develop deceptive goals while becoming adept at hiding them. Our evals aim to surface whether models have the ability to perform the necessary reasoning to competently pursue goals, by measuring their capability to carry out the steps needed to achieve those goals and the circumstances under which this behavior is impacted.
We remain agnostic to what sort of training results in these capabilities and think it should be possible to get a good idea of goal-directedness by assessing aspects of how well models can carry out goals (I would imagine that a capable, goal-directed model with intrinsic goals would also be able to be goal-directed towards goals inscribed through prompting, since we train models to follow instructions. If this expectation is wrong, this would be a major limitation of our evals).
Our approach differs from typical capability evaluations in that we try to determine the utility of models’ actions towards goal fulfillment, situations or conditions under which models fail, the time horizons over which they can act and plan, and their corrigibility under adverse circumstances or when presented with conflicting goals. This allows us to assess aspects of goal-directedness that are distinct from general reasoning capabilities. I know it’s not touched on in the post, since we hadn’t started implementing these tasks when we wrote this, but we’re currently developing long-horizon tasks to better assess these aspects of goal-directedness, and expect to have preliminary results in a few weeks!
Some quick thoughts (only skimmed the post, writing quickly), as you asked for feedback:
It looks like the main thing you’re testing is some variant of “when prompted to do goal directed behavior, how effective is the model at satisfying the goal?” That’s a reasonable thing to investigate, but I’m not sure it would be near the top of the list of “empirical research on goal-directed-ness that I want to see”. I’m probably mainly interested in the deceptive alignment motivation, read the rest of this comment as focusing on that.
Aside: To state it directly, I think the main reason to study goal-directedness in this lower-validity setting (of giving models goals in prompts) is that CoT-based goal-directedness might act as a precursor for in-forward-pass goal directedness (which seems far more worrying re deceptive alignment) — so we can study it earlier. So again, reasonable to study, but if you agree with me that this is the main reason for such experiments being valid, it’s an important frame to have when thinking about this kind of work: artificially inducing goal directedness is a model-organism approach rather than a natural experiment.
Thinking out loud, a list for goal-directedness work I want to see might be; sub-bullets are more detailed ideas:
Are base models goal directed? Are RLHF finetuned models goal directed? (naturalistic setting)
Could look like this recent work on the consistency of model answers to values questions, but more adapted to goals you hypothesize the models to have (like how consistently do models follow a particular behavior outlined in the Model Spec)
How do RLHFed models deal with conflicting goals — do they engage in sophisticated reasoning about this or instead seem to follow simple heuristics?
To the extent these models are goal directed (including because you induce this via prompting), is anything interesting going on:
Do they goal generalize the way we would expect? Similar to this recent work, but aimed at some risk other than reward hacking, I would be particularly interested in the time horizon over which the world is affected, as this is perhaps a proxy for a model having non-myopic goals.
Is there specification gaming or ‘in context reward hacking’ across many different settings?
As mentioned, maybe CoT and prompting provide an early warning sign for forward-pass goal-directed-ness. Is this true? How much of an early warning sign?
It looks like the settings in this post are sorta a general capability eval for a model accomplishing goals. I wonder if you think they add a ton of value over existing agent benchmarks like SWE-Bench? My intuition says you would be better off on trying to focus in on a narrower question that is particularly relevant to safety, like one of those I mentioned.
Sorry if this comment was rude or mean, it’s been a couple weeks and this post has no feedback even though you asked, I figured something might be better than nothing. It looks to me like your overall approach and ways of thinking about this are good!
Thank you for the detailed feedback, I found this very helpful and not at all rude or mean!
I suspect there are a few key disagreements between us that make me more optimistic about this project setup than you. I’d be curious about whether you agree on these points being important cruxes:
Though I agree that our work primarily belongs within the model organisms paradigm, I disagree that it’s only useful as a means to study in-forward-pass goal-directedness. I think there’s a considerable chance that the Translucent Thoughts hypotheses are true and AGI will be reached with an architecture very similar to current transformers, which I wouldn’t expect to be able to do dangerous goal-directed reasoning within a single forward pass due to the limited number of serial reasoning steps. If this is the case, CoT-based goal-directedness seems relevant to study for its own sake. We’re still studying an artificial scenario in the sense that we’re giving the models the goal ourselves, while in the deceptive alignment scenarios, the model would have to develop the goal on its own, but I don’t think we necessarily have to say anything about in-forward-pass goal-directedness for our work to be applicable to realistic threat models.
Studying the intrinsic goals of the models in naturalistic settings would also be interesting, but I expect this to be hard, and also less relevant to the threat models we’re worried about. I’m mainly concerned about models becoming more goal-directed with scale and someone figuring out long-horizon RL training on top of that, leading to models that are qualitatively different from current ones. To address this, I’d like to have a benchmark that decomposes goal-directedness into several relevant abilities and provides an accurate score for these abilities across model generations. This would help us better track how models become more goal-directed as they scale, if at all. Additionally, if a new model scores close to humans on many of these abilities, it could serve as a strong alarm bell that we are approaching dangerously goal-directed models. It seems easier to build this sort of benchmark without going through the complicated process of trying to infer the intrinsic goals of the models.
The aforementioned decomposition of goal-directedness into various relevant abilities would also be the main value added on top of existing agent benchmarks. We should maybe have been clearer in the post about planning to develop such a decomposition. Since it’s easy to evaluate for goal-fulfillment, that was our main focus in the early stages of the project, but eventually, we’re hoping to decompose goal-directedness into several abilities such as instrumental reasoning ability, generalization to OOD environments, coherence, etc, somewhat analogously to how the Situational Awareness Dataset decomposes situational awareness into self-knowledge, inferences, and actions.
I definitely agree that it would be interesting to compare the goal-directedness of base models and fine-tuned models, and this is something we’re planning to eventually do if our compute budget permits. Similarly, I strongly agree that it would be interesting to study whether anything interesting is going on in the situations where the models exhibit goal-directed behavior, and I’m very interested in looking further into your suggestions for that!
Yep, I basically agree with those being the cruxes!
On how much of the goal reasoning is happening out loud: Nowadays, I think about a lot of AI safety research as being aimed at an AI Control scenario where we are closely supervising what models are thinking about in CoT, and thus malign goal seeking must either happen in individual forward passes (and translated to look harmless most the time) or in an obfuscated way in CoT. (or from rogue deployments outside the control scheme)
By naturalistic, I mean “from a realistic training process, even if that training is designed to create goals”. Which sounds like what you said is the main threat model you’re worried about? If you have the time, I would push you harder on this: what is a specific story of AI catastrophe that you are trying to study/prevent?
I really appreciate the thoughtful replies and feedback—I also didn’t read any of your comments as rude or mean! I’d like to clarify a few points about our approach and its relevance:
We expect that good CoT reasoning helps current models better pursue and achieve goals. In the future, all of the reasoning needed to do this effectively might happen in the forward pass. However, we think it’s likely that AGI could be achieved with models similar to current SOTA models, where sophisticated reasoning for goal pursuit will still need to happen in CoT. Even if this bet is wrong and models can do this reasoning entirely in the forward pass, our evals should still reveal goal-directed behavior because we assess actions the model takes that are instrumental to achieving the goal (not just the sensibility of the reasoning traces).
Regarding AI control scenarios, we understand that with good oversight methods, deceptive goal-pursuit would need to happen either in the forward pass or through obfuscated reasoning in CoT. While our scoring of reasoning traces may become less relevant as models improve at internal reasoning, our other metrics focus on how well the model carries out steps and decides on instrumental actions that lead to goal achievement. We believe this serves as a good proxy for goal-directedness, even if we can’t directly observe how intrinsic goals arise or manifest from the training process.
A potential risk scenario we’re considering is one where a highly capable model is developed that doesn’t show obvious signs of misalignment in standard tests (e.g. typical use or general reasoning benchmarks) but has strong goal-pursuit capabilities. Without evaluations that specifically probe goal-directedness, such a model could potentially develop deceptive goals while becoming adept at hiding them. Our evals aim to surface whether models have the ability to perform the necessary reasoning to competently pursue goals, by measuring their capability to carry out the steps needed to achieve those goals and the circumstances under which this behavior is impacted.
We remain agnostic to what sort of training results in these capabilities and think it should be possible to get a good idea of goal-directedness by assessing aspects of how well models can carry out goals (I would imagine that a capable, goal-directed model with intrinsic goals would also be able to be goal-directed towards goals inscribed through prompting, since we train models to follow instructions. If this expectation is wrong, this would be a major limitation of our evals).
Our approach differs from typical capability evaluations in that we try to determine the utility of models’ actions towards goal fulfillment, situations or conditions under which models fail, the time horizons over which they can act and plan, and their corrigibility under adverse circumstances or when presented with conflicting goals. This allows us to assess aspects of goal-directedness that are distinct from general reasoning capabilities. I know it’s not touched on in the post, since we hadn’t started implementing these tasks when we wrote this, but we’re currently developing long-horizon tasks to better assess these aspects of goal-directedness, and expect to have preliminary results in a few weeks!