Gradient hacking
“Gradient hacking” is a term I’ve been using recently to describe the phenomenon wherein a deceptively aligned mesa-optimizer might be able to purposefully act in ways which cause gradient descent to update it in a particular way. In Risks from Learned Optimization, we included the following footnote to reflect this possibility:
Furthermore, a deceptively aligned mesa-optimizer would be incentivized to cause there to be a systematic bias in the direction of preventing the base optimizer from modifying its mesa-objective. Thus, in the context of a local optimization process, a deceptive mesa-optimizer might try to “hack” its own gradient (by, for example, making itself more brittle in the case where its objective gets changed) to ensure that the base optimizer adjusts it in such a way that leaves its mesa-objective untouched.
Outside of that footnote, however, there hasn’t been much other discussion of this phenomenon, and I think that’s a shame, as it’s definitely something worth looking into.
For example, here’s a gradient hacking story that I’m particularly concerned about. Suppose you’re training a model and it becomes deceptive. You’re not worried about that, though, because you have sufficient transparency tools that you’re confident you can detect any deceptive cognition it might be doing. If it’s thinking about how to deceive you, your transparency tools will pick that up, and you’ll know to throw out that model and try again. However, your transparency tools are quite expensive, so you can only afford to run them once, which you’re planning on doing after training before you deploy your model. If your model is deceptive, though, then it might know all of that—in particular, it might know that if it stays deceptive, it’ll get caught.
However, there is still something it can do in that situation to be able to defect against you and cause whatever it wants to happen during deployment. That is, it could hack its own gradient to produce a new model which will still exhibit the same deceptive behavior, but without doing any of the detectable deceptive cognition. Perhaps the new model exhibits the deceptive behavior not because it’s trying to act deceptively, but just because the behavior is a hard-coded part of its algorithm, or because it has some bad proxy, or some other defect that might be more difficult to detect than outright deception.
How might such a deceptive model accomplish such a feat of gradient hacking? Well, I don’t think it’s actually that difficult once your deceptive model comes up with it as a strategy. First, note that since gradient descent is a local optimization process, it doesn’t have to somehow change what the global minima might be, or even what the local minima are—it just has to create a ridge in the loss landscape that will push the gradient in the direction it wants.
The original footnote provides one example of this, which is for the model to check if its objective satisfies some criterion, and fail hard if it doesn’t. Now, if the model gets to the point where it’s actually just failing because of this, then gradient descent will probably just remove that check—but the trick is never to actually get there. By having such a check in the first place, the model makes it so that gradient descent won’t actually change its objective, since any change to its objective (keeping all the other parameters fixed, which is what gradient descent does since it computes partial derivatives) would lead to such a failure.
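To make that mechanism concrete, here’s a minimal toy sketch in PyTorch. Everything in it is an illustrative assumption rather than a claim about how a real deceptive model would be implemented: a single scalar parameter stands in for the mesa-objective, another stands in for all the other parameters, and a quadratic penalty centered on the objective’s current value stands in for whatever self-sabotage the model couples to objective drift. Because gradient descent takes partial derivatives with everything else held fixed, that coupling creates exactly the sort of ridge described above, leaving the objective essentially pinned while training proceeds normally on everything else.

```python
import torch

# Toy stand-ins (illustrative assumptions, not a real model):
objective = torch.tensor(1.0, requires_grad=True)   # "mesa-objective" parameter
other = torch.tensor(0.5, requires_grad=True)       # all other parameters
reference = objective.detach().clone()              # value the model wants to preserve

opt = torch.optim.SGD([objective, other], lr=0.01)
for _ in range(500):
    opt.zero_grad()
    # Ordinary task loss; the 0.1 * objective**2 term gives the outer objective
    # a mild incentive to push `objective` toward 0.
    task_loss = (other - 2.0) ** 2 + 0.1 * objective ** 2
    # Self-imposed "fail hard if my objective changes" term: near the reference
    # it contributes almost nothing, but any perturbation of `objective` (with
    # everything else held fixed, as a partial derivative does) blows up the loss.
    fail_hard = 50.0 * (objective - reference) ** 2
    (task_loss + fail_hard).backward()
    opt.step()

print(objective.item(), other.item())
# `other` converges to ~2.0 (training proceeds normally), while `objective`
# stays within ~0.002 of its starting value; without the fail-hard term it
# would be steadily pushed toward 0.
```

The point isn’t the specific penalty, of course, but that the model only has to shape the local loss surface: the minimum in the `objective` direction alone sits almost exactly at the value it wants to preserve.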
The case of a deceptive model trying to erase evidence of its deception is not quite as straightforward, though I think it would still be possible to do something quite similar in that situation as well. This is especially true if the model has the ability to read and write to some sort of state (e.g. an RNN or even a transformer which can attend to its previous output). For example, it could write out a pre-commitment to follow the hard-coded policy and then just have a policy of always following any pre-commitments it finds such that it never thinks about the deception that led it to that policy in the first place.
So, that’s the problem—how do we solve it? Well, there’s a simple solution in the case above, which is just to run your transparency tools throughout training—not just at the end—so that you can catch the deception as soon as it happens. I think this is the right solution,[1] but it’s worth noting that this adds an additional desideratum to your transparency tools, which is that they have to be cheap enough to run during the entire training process. That might not be that bad, however, if you don’t do very many gradient descent update steps, and instead just use massive models and huge batch sizes—and there is some evidence that such a procedure is actually the most compute-efficient way to train neural networks. That being said, I think there are some transparency techniques which lend themselves better to being applied across training than others. Thus, the possibility of gradient hacking suggests that we should focus more on those techniques which allow us to not just understand a single model, but also the entire history of how that model was produced.[2]
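To be concrete about what that would look like mechanically, here’s a hedged sketch of a training loop with the check built in. `looks_deceptive` is a hypothetical stand-in for whatever transparency tool you’re using (it’s not a real API), and the rest is just ordinary PyTorch boilerplate:

```python
import torch

def train_with_monitoring(model, loss_fn, data_loader, looks_deceptive, lr=1e-3):
    """Sketch: run the transparency check after every update, not just at the end."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for step, (x, y) in enumerate(data_loader):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        # Catch deceptive cognition as soon as it appears, before the model can
        # gradient-hack itself into a harder-to-detect form.
        if looks_deceptive(model):
            raise RuntimeError(f"transparency check failed at step {step}; discard this model")
    return model
```

If the check is still too expensive to run at every step, running it every k steps is the natural compromise, though the larger k is, the more room a gradient hacker has to rewrite itself between checks.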
[1] This is one of the reasons I’m interested in optimization provenance, for example.
[2] As an example of a technique which I think translates well to this domain, one of the OpenAI clarity team’s current major research thrusts is “model diffing,” which is about trying to understand exactly how two similar models differ from each other. Compared to other approaches that are just focused on understanding a single model, model diffing could be quite useful in allowing one to understand the entirety of the training process, since you could diff the model at each gradient descent update step.
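For illustration only, the crudest possible version of a per-update diff is just a weight comparison between consecutive checkpoints; the sketch below assumes two PyTorch modules with identical architecture, and is merely a placeholder for the much richer, circuits-level comparison that model diffing is actually after:

```python
def weight_diff(model_before, model_after):
    """Per-parameter change between two checkpoints of the same architecture.
    Only a naive placeholder for interpretability-level model diffing."""
    after = dict(model_after.named_parameters())
    return {name: (after[name] - p).norm().item()
            for name, p in model_before.named_parameters()}
```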