The Three Levels of Goodhart’s Curse
Note: I now consider this post deprecated and instead recommend this updated version.
Goodhart’s curse is a neologism by Eliezer Yudkowsky stating that “neutrally optimizing a proxy measure U of V seeks out upward divergence of U from V.” It is related to many nearby concepts (e.g. the tails come apart, winner’s curse, optimizer’s curse, regression to the mean, overfitting, edge instantiation, Goodhart’s law). I claim that there are three main mechanisms through which Goodhart’s curse operates.
Goodhart’s Curse Level 1 (regressing to the mean): We are trying to optimize the value of V, but since we cannot observe V, we instead optimize a proxy U, which is an unbiased estimate of V. When we select for points with a high U value, we will be biased towards points for which U is an overestimate of V.
As a simple example, imagine V and E (for error) are independently normally distributed with mean 0 and variance 1, and U = V + E. If we sample many points and take the one with the largest U value, we can predict that E will likely be positive for this point, and thus the U value will predictably be an overestimate of the V value.
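Here is a minimal simulation sketch of this effect (assuming numpy; the batch size and trial count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Level 1 sketch: V and E are independent standard normals, U = V + E.
# Repeatedly draw a batch of points, pick the one with the largest U,
# and record how much U overestimates V at that point (i.e. E there).
n_trials, batch_size = 10_000, 100
selected_errors = []
for _ in range(n_trials):
    V = rng.normal(size=batch_size)
    E = rng.normal(size=batch_size)
    U = V + E
    best = np.argmax(U)                        # select on the proxy U
    selected_errors.append(U[best] - V[best])  # equals E at the selected point

print("mean error at the selected point:", np.mean(selected_errors))
# The mean is clearly positive: selecting on U systematically picks points
# where the error term is large, so U predictably overestimates V.
```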
In many cases (like the one above), the best you can do without observing V is still to take the largest U value you can find, but you should still expect that this U value overestimates V.
Similarly, if U is not necessarily an unbiased estimator of V, but U and V are correlated, and you sample a million points and take the one with the highest U value, you will end up with a V value on average strictly less than if you could just take a point with a one in a million V value directly.
Goodhart’s Curse Level 2 (optimizing away the correlation): Here, we assume U and V are correlated on average, but there may be different regions in which this correlation is stronger or weaker. When we optimize U to be very high, we zoom in on the region of very large U values. This region could in principle have very small V values.
As a very simple example, imagine U is an integer drawn uniformly between 0 and 1000 inclusive, and V is equal to U mod 1000. Overall, U and V are correlated. The point where U is 1000 and V is 0 is an outlier, but it is only one point and does not sway the correlation that much. However, when we apply a lot of optimization pressure, we throw away all the points with low U values and are left with a small number of extreme points. Since this is a small number of points, the correlation between U and V says little about what value V will take.
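A quick sketch of this example in code (assuming numpy; the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Level 2 sketch: U is integer uniform on {0, ..., 1000}, V = U mod 1000.
# Overall the two are almost perfectly correlated, yet the single outlier
# at U = 1000 (where V = 0) is exactly what strong selection on U finds.
U = rng.integers(0, 1001, size=100_000)
V = U % 1000

print("overall correlation:", np.corrcoef(U, V)[0, 1])  # close to 1

best = np.argmax(U)                         # apply heavy optimization pressure on U
print("selected point:", U[best], V[best])  # almost certainly (1000, 0)
```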
Another more realistic example is that U and V are two correlated dimensions in a multivariate normal distribution, but we cut off the normal distribution to only include the disk of points in which U² + V² ≤ n for some large n. This example represents a correlation between U and V in naturally occurring points, but also a boundary around what types of points are feasible that need not respect this correlation.
Imagine you were to sample k points in the above example and take the one with the largest U value. As you increase k, at first this optimization pressure lets you find better and better points for both U and V, but as you increase k to infinity, eventually you sample so many points that you will find a point near (U, V) = (√n, 0). When enough optimization pressure was applied, the correlation between U and V stopped mattering, and instead the boundary of what kinds of points were possible at all decided what kind of point was selected.
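A simulation sketch of this transition (assuming numpy; the disk radius √n = 3, the correlation of 0.8, and the sample sizes k are all arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Level 2 sketch, continued: U and V are correlated Gaussian dimensions,
# truncated to the disk U^2 + V^2 <= n.
n = 9.0                                  # so the boundary is at radius 3
cov = [[1.0, 0.8], [0.8, 1.0]]           # correlation between U and V

for k in [10, 1_000, 100_000, 10_000_000]:
    pts = rng.multivariate_normal([0.0, 0.0], cov, size=k)
    pts = pts[(pts ** 2).sum(axis=1) <= n]   # keep only feasible points
    u, v = pts[np.argmax(pts[:, 0])]         # select the largest-U point
    print(f"k = {k:>10,}: selected U = {u:4.2f}, V = {v:5.2f}")

# Typical pattern: for small k, more optimization pressure improves both U
# and V together; for very large k the selected U creeps toward sqrt(n) = 3
# while V falls back toward 0, so the boundary, not the correlation,
# determines what gets selected.
```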
Goodhart’s Curse Level 3 (adversarial correlations): Here, we are selecting a world with a high U value because we want a world with a high V value, and we believe U to be a good proxy for V. However, there is another agent who wants to optimize some other value W. Assume that W and V are contradictory: points with high W value necessarily have low V value, since they demand the use of similar resources.
Since you are using U as a proxy, this other agent is incentivized to make U and W correlated as much as it can. It wants to cause your process which selects a large U value to also select a large W value (and thus a small V value).
Making U and W correlated may be difficult, but thanks to Level 2 of Goodhart’s Curse, the adversary need only make them correlated at the extreme values of U.
For example, suppose you run a company and have a programmer employee whom you want to create a working product (V). You incentivize the employee by selecting for or rewarding employees that produce a large number of lines of code (U). The employee wants you to pay him to slack off all day (W). W and V are contradictory. The employee is incentivized to make worlds with high W also have high U, and thus have low V. Thus, the employee may adversarially write a script to generate a bunch of random lines of code that do nothing, giving himself more time to slack off.
Level 3 is the main mechanism behind the original Goodhart’s Law (although Level 2 contributes as well).
Level 3 is also the mechanism behind a superintelligent AI making a Treacherous Turn. Here, V is doing what the humans want forever. U is doing what the humans want in the training cases, where the AI does not have enough power to take over, and W is whatever the AI wants to do with the universe.
Finally, Level 3 is also behind the malignancy of the universal prior, where you want to predict well forever (V), so hypotheses might predict well for a while (U), so that they can manipulate the world with their future predictions (W).
(x-posted from Arbital ==> Goodhart’s curse)
On “Conditions for Goodhart’s curse”:
It seems like with AI alignment the curse happens mostly when V is defined in terms of some high-level features of the state, which are normally not easily maximized. I.e., V is something like a neural network V:s↦V(s) where s is the state.
Now suppose U’ is a neural network which outputs the AI’s estimate of these features. The AI can then manipulate the state/input to maximize these features. That’s just the standard problem of adversarial examples.
So it seems like the conditions we’re looking for are generally met in the common setting where adversarial examples do work to maximize some loss function. One requirement there is that the input space is high-dimensional.
So why doesn’t the 2D Gaussian example go wrong? [This is about the example from Arbital ==> Goodhart’s Curse where there is no bound √n on V and U]. There are no high-level features to optimize by using the flexibility of the input space.
On the other hand, you don’t need a flexible input space to fall prey to the winner’s curse. Instead of using the high flexibility of the input space you use the ‘high flexibility’ of the noise if you have many data points. The noise will take any possible value with enough data, causing the winner’s curse. If you care about a feature that is bounded under the real-world distribution but noise is unbounded, you will find that the most promising-looking data points are actually maximizing the noise.
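A quick simulation sketch of this point (assuming numpy; the bounded-feature and noise distributions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Winner's-curse sketch: the true feature is bounded (uniform on [0, 1])
# but the measurement noise is unbounded Gaussian.
n_points = 1_000_000
true = rng.uniform(0.0, 1.0, size=n_points)    # bounded real-world feature
noise = rng.normal(0.0, 1.0, size=n_points)    # unbounded measurement noise
observed = true + noise

best = np.argmax(observed)      # the most promising-looking data point
print("observed:", observed[best], "true:", true[best], "noise:", noise[best])
# With this many points the top observation is dominated by its noise term
# (several sigma), not by an unusually good true value.
```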
There’s a noise-free (i.e. no measurement errors) variant of the winner’s curse which suggests another connection to adversarial examples. If you simply have n data points and pick the one that maximizes some outcome measure, you can conceptualize this as evolutionary optimization in the input space. Usually, adversarial examples are generated by following the gradient in the input space. Instead, the winner’s curse uses evolutionary optimization.
(also x-posted from https://arbital.com/p/goodharts_curse/#subpage-8s5)
Another, speculative point: If V and U were my utility function and my friend’s, my intuition is that an agent that optimizes the wrong function would act more robustly. If true, this may support the theory that Goodhart’s curse for AI alignment would be to a large extent a problem of defending against adversarial examples by learning robust features similar to human ones. Namely, the robust response may be because me and my friend have learned similar robust, high-level features; we just give them different importance.