The Importance of Goodhart’s Law
This article introduces Goodhart’s law, provides a few examples, suggests an origin for the law and lists a few general mitigations.
Goodhart’s law states that once a social or economic measure is turned into a target for policy, it loses the information content that had qualified it to play such a role in the first place. The law is named after its originator, Charles Goodhart, a chief economic adviser to the Bank of England.
The much more famous Lucas critique is a more specific formulation of the same idea.
The most famous examples of Goodhart’s law are probably the Soviet factories which, when given targets based on the number of nails, produced many tiny useless nails, and when given targets based on weight, produced a few giant nails. Number and weight both correlated well with useful output before central planning. Once they were made targets (at different times), they lost that value.
We laugh at such ridiculous stories because our societies are generally much better run than Soviet Russia was. But the key point about Goodhart’s law is that it applies at every level. The Japanese countryside is apparently full of construction projects that continue because projects started in the recession era have proved almost impossible to stop. Our society centres around money, which is supposed to be a relatively good measure of reified human effort. But many unscrupulous institutions have grown rich by pursuing money in ways that people would find extremely difficult to describe as value-adding.
GDP Fetishism by David Henderson is another good recent article on how Goodhart’s law is affecting societies.
The way I look at Goodhart’s law is as Guess the teacher’s password writ large. People and institutions try to achieve their explicitly stated targets in the easiest way possible, often obeying only the letter of the law.
A speculative origin of Goodhart’s law
The way I see Goodhart’s law working, or a target’s utility breaking down, is the following.
Superiors want an undefined goal G.
They formulate G*, which is not G, but which until now, in usual practice, has correlated with G.
Subordinates are given the target G*.
The well-intentioned subordinate may recognise G and suggest G** as a better substitute, but such people are few and far between. Most people simply try to achieve G*.
As time goes on, every means of achieving G* is sought.
Remember that G* was formulated precisely because it is simpler and more explicit than G. Hence the people, processes and organizations that aim at maximising G* gain a competitive advantage over those trying to juggle both G* and G.
P(G|G*) decreases with time, and after a point the correlation breaks down completely.
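The breakdown in the steps above can be sketched numerically. The following toy simulation is purely illustrative (the names, distributions, and the way gaming enters the proxy are all my own assumptions): each agent produces true value G from talent, while the proxy G* can also be inflated by metric-gaming; as pressure to game the proxy grows, the correlation between G and G* decays.

```python
import random

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation, computed by hand to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def proxy_correlation(pressure, n_agents=2000):
    """How well G* tracks G when agents put `pressure` units of effort
    into gaming the metric instead of doing real work."""
    g, g_star = [], []
    for _ in range(n_agents):
        talent = random.random()   # produces real value (G)
        cunning = random.random()  # produces only metric inflation
        g.append(talent)
        g_star.append(talent + pressure * cunning)
    return correlation(g, g_star)
```

With zero pressure the proxy just is the goal and the correlation is 1; as pressure rises the correlation falls toward zero, which is the quantitative shape of the story above.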
The mitigations to Goodhart’s law
If you consider the law to be true, solutions to Goodhart’s law are impossible in a non-singleton scenario. So let’s consider mitigations.
Hansonian Cynicism
Better Measures
Solutions centred around Human Discretion
Hansonian Cynicism
Pointing out what most people would have in mind as G, and showing that institutions all around are pursuing not G but their own convoluted G*s. Hansonian cynicism is the second step to mitigation in many, many cases (knowing about Goodhart’s law is the first). Most people expect universities to be about education and hospitals to be about health. Pointing out that they aren’t doing what they are supposed to be doing creates a huge cognitive dissonance in the thinking person.
Better measures
Balanced scorecards
Taking multiple factors into consideration, trying to make G* as strong and spoof-proof as possible. The scorecard approach is the simplest solution that comes to mind when confronted with Goodhart’s law.
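A minimal sketch of how a scorecard can resist spoofing (the metric names, targets, and min() aggregation rule are illustrative assumptions, not a standard formula): normalise each measure against its target and take the weakest ratio, so inflating one measure cannot compensate for neglecting another.

```python
def scorecard(measures, targets):
    """Return the weakest target-normalised ratio, capped at 1.0, so a
    team scores only as well as its most-neglected measure."""
    return min(min(measures[k] / targets[k], 1.0) for k in targets)

# Hypothetical team: over-delivers on output, lags on satisfaction.
team = {"output": 120, "quality": 0.95, "satisfaction": 0.72}
goals = {"output": 100, "quality": 0.99, "satisfaction": 0.90}

score = scorecard(team, goals)  # limited by satisfaction: 0.72/0.90 = 0.8
```

Under this rule, maximising the composite G* forces attention to every component at once, which is the point of the balanced scorecard.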
Optimization around the constraint
There are no generic solutions to bridging the gap between G and G*, but the body of knowledge around the theory of constraints is a very good starting point for formulating better measures for corporations.
Extrapolated Volition
CEV tries to mitigate Goodhart’s law better than mechanical measures can, by trying to create a complete map of human morality. If G is defined fully, there is no need for a G*. CEV tries to do this for all humanity, but as an example, individual extrapolated volition would be enough. The attempt is incomplete as of now, but it is promising.
Solutions centred around Human discretion
Human discretion is the one thing that can presently beat Goodhart’s law, because of the constant checking and rechecking that G and G* match. Nobody would attempt to pull off anything as weird as the giant nails in such a scenario. However, this is not scalable in a strict sense, because of the added testing and quality-control requirements.
Left Anarchist ideas
Left-anarchist ideas about small firms and workgroups are based on the view that hierarchy will inevitably introduce Goodhart’s-law problems, and thus the best groups are small ones doing simple things.
Hierarchical rule
On the other end of the political spectrum, Moldbuggian hierarchical rule completely eliminates the mechanical aspects of the law. There is no letter of the law; it’s all spirit. I am supposed to take total care of my slaves and show total obedience to my master. Scalability is ensured through hierarchy.
Of all the proposed solutions to Goodhart’s law, I like CEV the most, but that is probably a reflection on me more than anything: I want a relatively scalable and automated solution. I’m not sure whether the people who favour human discretion are really correct in this matter.
Your comments are invited, as are other mitigations and solutions to Goodhart’s law.
A good example from my own history of doing this is when I worked for an ISP and persuaded them to eliminate “cases closed” as a performance measurement for customer service and tech support people, because it was causing email-based cases to be closed without any actual investigation. People would email back and create a new case, and then a rep would get credit for closing that one without investigation either.
The replacement metric was one I derived via the Theory of Constraints, inspired by Goldratt’s “throughput-dollar-days” measurement. The replacement metric was “customer-satisfaction-waiting-hours”—a measurement of collective work-in-progress inventory at the team level, and a measurement of priority at the ticket level.
I also made it impossible to truly “close” a case—you could say, “I think this is done”, but the customer could still email into it and it would jump right back to its old place in the queue, due to the accumulated “satisfaction waiting hours” on the ticket.
Of course, the toughest part in some ways was educating new service managers that, no, you can’t have a measurement of cases closed on a per-rep basis. Instead, you’re going to have to actually pay attention to a rep’s work in order to know if they’re doing the job. (Of course, the system I developed also had ways to make it easy to see what people are working on, not only at the managerial but the team level—peer pressure is a useful co-ordination tool, if done right.)
I have no idea how well the system fared since I left the company, since it’s entirely possible they found programmers since then to give them new metrics that would f**k it up, although I did design the database in such a way as to make it as close to impossible as I could manage. ;-)
Anyway, the theory of constraints positively rocks for business performance optimization, and its Thinking Processes are generally useful tools for any rationalist. They were also a big inspiration for me developing other thinking processes and ultimately mindhacking techniques, in that they showed that it’s possible to think systematically even about some of the vaguest and most ill-defined problems imaginable, rigorously hone in on key leverage points, resolve conflicts between goals, and generally overcome our brains’ processing limitations for analysis and planning.
[Edit to add: the Wikipedia page on thinking processes doesn’t really show why a rationalist would be interested in the processes; it’s useful to know that a key element of the processes are something called the “categories of legitimate reservation”, which have to do with logical proof and well-formedness of argument. They are a key part of constructing and critiquing the semantic maps that are created by the thinking processes.
For example, ToC’s conflict resolution method effectively maps out certain implicit assumptions in a conflict, and then invites you to logically disprove these assumptions in order to break the conflict. (That is, if you can find a circumstance where one of those assumptions is false, then the conflict will no longer exist under that circumstance—and you have a potential way out of your dilemma.)
So, in short, ToC thinking processes are mostly about constructing past, present, or future semantic maps of a situation, and applying systematic logic to validating (or invalidating) the maps’ well-formedness, as a way of solving problems, creating plans, etc. Very core rationalist stuff, from an instrumental-rationality POV.]
That’s really impressive. I always wonder how customer service works in big business. Companies like Moo (their chat support on the web) and Optus (phone calls) are blissful, whereas most places are terrible. I’m curious about the determinants. I’m also curious how companies that are receptive to insight from within their ranks, like yours, fare objectively and in terms of what subjective experience I’d get.
I am reminded of one of Dijkstra’s sayings:
So, in short: incentives can have unintended consequences, because the incentives influence whatever you want to influence with them.
There are a lot of examples of this in e.g. Dan Ariely’s book and Freakonomics.
But the best example must be the bizarre 1994 football (soccer) match between Barbados and Grenada. Barbados needed to win with a two-goal difference.
The special incentive here was that any goal scored in the extra time would count double. Now, shortly before the end of the regular time, it was 2-1 for Barbados. Imagine what happened...
(edit: added the note about the two-goal difference, thanks Hook)
It’s an important note for the soccer game that Barbados needed to win by two goals in order to advance to the finals. Otherwise, Grenada would go to the finals. Now people have a chance of imagining what happened.
Goodhart’s Law starts some other way. It’s not quite right to say:
Mathematically speaking, the problem can’t be that G is undefined. If G were really undefined in any absolute sense, then superiors would be indifferent to all possible outcomes, or would choose their utility function literally at random. That rarely happens.
Instead, the problem could be that G is difficult to articulate. It is “undefined” only in the sense that people have had trouble coming up with an explicit verbal definition for it. I know what I want and how to get it, but I don’t know how to communicate that want to you ex ante. For example, maybe I want you (the night shift manager) to page me (the owner) whenever there’s a decision to make that could affect whether our business keeps a client, but I’ve never taken any business classes and don’t quite have the vocabulary to say that, so instead I say to only page me if it’s “important.” “Important” is vague, but “important” is just a map, and the map is not the territory.
Alternatively, the problem could be that G is difficult to commit to. I can define my goal in words just fine today, but I know (or you suspect) that later I will be tempted to evaluate you by some other criterion. For example, I would like to give a raise to whichever police officer does the most to keep his beat safe, and, as a thoughtful and experienced police chief, I know exactly what the difference is between a safe neighborhood and an unsafe neighborhood, and I’m happy to explain it to anyone who’s interested. As one of my employees, though, you can’t verify that I’m actually rewarding people for making neighborhoods safe, and not, say, giving raises to people who bring in the most money for drug busts, or who artificially lower their crime statistics, or who give me a kickback. It might make more sense for me to just announce that I’ll pay people based on hours worked and complaints lodged, because that announcement is more verifiable, and thus more credible, so at least I’ll be viewed as evenhanded.
Finally, as you’ve already pointed out, the problem could be that G is difficult or expensive to measure. Alternative measures of GDP that take into account factors like health, leisure, and environmental quality have gotten pretty good about specifying what health is, and it’s easy enough to pass laws that commit agencies to valuing health in a particular way, but it’s expensive to measure health, especially in any broad sense. A physical is $60; an exercise fitness exam is another $45; an STD test runs about $20; a battery of prophylactic tests for cancer and heart disease and so on is another $100 or so; a mental health exam is another $80, and then you multiply all that by the size of a valid random sample and we’re talking real money. In my opinion, it would be money very, very well spent, but one can understand why GDP—which can be measured just by asking the IRS for a copy of its tax receipts—is such a popular metric. It’s cheap to use.
I partly disagree. Simple metrics are used in place of complex goals, for good reason; https://www.ribbonfarm.com/2016/06/09/goodharts-law-and-why-measurement-is-hard/
Then the fact that the goal is too simply defined allows flexibility to be abused; https://www.ribbonfarm.com/2016/09/29/soft-bias-of-underspecified-goals/
G is a variable. It must be undefined by definition, or it is not a variable. A variable’s definition changes by context, therefore outside of context it is always undefined.
That’s why we use X instead of the number 2 in algebraic formulas. You wouldn’t say 2 − 3 = 8, solve for 2, that’s clearly stupid. You must use the undefined variable X (or any other mathematically irrelevant symbol), and then define it in context of the rest of the formula. Move X to a different formula, and it has a different definition. Isolate X without the context of a formula, and it is always undefined (X = ?).
In this instance, G is a variable without context. We aren’t making nails of a certain size, we are just talking about G and the ways G can be used to create a metric once G is known.
CEV, until designed and defined properly, is just a black box that everyone universally agrees is ‘good’, but has little else in terms of defining features.
See this paper for a relatively decent account of what CEV is getting at.
If the question is “what should we want?” then CEV is much better than a black box, because it fleshes out some of the intuitions behind the magical category “want.”
If the question is “how should we measure what we want?” then CEV is just a black box, because it doesn’t solve or suggest a method for solving any of our measurement problems. We know we want coherent, extrapolated, volitional thingies, but we have no idea, for example, how to rigorously define “volitional.” We likewise have no idea how far into the future we should be extrapolating things, nor how many facets of a personality or society can reasonably be expected to converge/cohere.
It’s not so much a box, but a method of filling the box. We just haven’t filled the box yet.
The fact that students who are motivated to get good scores in exams very often get better scores than students who are genuinely interested in the subject is probably also an application of Goodhart’s Law?
Partially; but a lot of what is being tested is actually skills correlated with being good in exams (working hard, memorisation, bending yourself to the rules, ability to learn skill sets even if you don’t love them, gaming the system) rather than interest in the subject.
But those skills don’t correlate with doing good science, or with good use of the subject of the exams in general, nearly so well, and they are easy to test in other ways.
Goodhart’s law seems very applicable to natural selection: the Blind Idiot God wants creatures to have higher fitness (G), and so creates targets that are correlated with fitness in the ancestral habitat, e.g. pleasure-seeking and pain-avoidance (G*). Once you get creatures that are self-aware (us), they figure out G*, and start optimizing for that instead of G.
Relevant.
In software development, this is (or ought to be) known as the Mini-Van Law.
It made me think of the Tree Swing, insofar as it represents how difficult it can be to create and follow a good G* through the process.
The Minivan Law would have made a good name for Goodhart’s Law for multiple reasons.
Example:
http://www.statschat.org.nz/2011/11/16/goodharts-law-and-brazilian-hamburgers/
http://marginalrevolution.com/marginalrevolution/2011/11/sentences-to-ponder-32.html
Getting back to trying to propose practical mitigation strategies for Goodhart’s law, I propose a fairly simple solution: choose a G*, evaluate performance based on it, but KEEP IT SECRET. This of course wouldn’t really work for national-scale, GDP-esque situations, but for corporate management it seems like it could work well enough. If only upper management knows what G* is, it becomes impossible to optimize for it, and everyone has to keep working under the assumption they’re being evaluated on G.
Taking it a step further, to hedge against employees eventually figuring out G* and surreptitiously optimizing for it, you could put a bounty on guessing G*: the first employee who figures out what the mystery metric G* really is gets a prize, and as soon as it’s claimed, you switch to using G**.
The hedge is absolutely necessary; otherwise, a manager will just tell subordinates what G* is in order to look impressive for managing a high-performing group.
Andrew Grove (of Intel fame) wrote a book, High Output Management, suggesting that management needs two opposing metrics to avoid this problem. For example, measure productivity and number of defects, and score people on the combined results.
BTW the large nail/little nail joke has a third part. Soviet management eventually got a clue and started measuring by the value of the nails produced… and the result was the world’s first solid-gold-nail factory.
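Grove’s paired-metric idea might be sketched like this; the linear weighting below is my own illustrative assumption, since his point is only that the two metrics must pull in opposite directions so that gains on one at the other’s expense don’t pay.

```python
def paired_score(units_produced, defects, defect_cost=5):
    """Combine an output metric with an opposing quality metric.
    The defect_cost weight is an illustrative assumption."""
    return units_produced - defect_cost * defects

# Rushing out more units while letting defects balloon scores worse:
careful = paired_score(units_produced=100, defects=2)   # 100 - 10 = 90
rushed = paired_score(units_produced=120, defects=10)   # 120 - 50 = 70
```

Any single combined score is of course itself a G* that can be gamed; the opposing pair just makes the cheapest gaming strategies unprofitable.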
Presumably finding arbitrarily many basic G*’s will be hard. Two ideas for dealing with this: 1. Even if you only have finitely many and they’re all known, you could select one at random each time there’s a switch. 2. Each time there’s a switch, select a somehow-random linear combination (or some other sort, if you like) of your basic G*’s. (That would make guessing it in the first place quite hard, actually...)
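Idea 2 above could be sketched as follows; the metric names and the convex-combination scheme are illustrative assumptions.

```python
import random

def draw_secret_weights(metric_names, rng):
    """Draw a secret random convex combination of the basic G*'s,
    so the effective target is hard to guess in advance."""
    raw = [rng.random() for _ in metric_names]
    total = sum(raw)
    return {name: w / total for name, w in zip(metric_names, raw)}

def secret_score(performance, weights):
    """Evaluate a performance dict against the hidden weights."""
    return sum(weights[k] * performance[k] for k in weights)

rng = random.Random(2024)  # management keeps the seed/draw private
weights = draw_secret_weights(["nail_count", "nail_weight", "nail_value"], rng)
```

Each rotation redraws the weights, so an employee who reverse-engineers one period’s effective metric gains nothing in the next.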
If management are doing that then they are neglecting a powerful tool in their tool-kit, because announcing a G* will surely cause G* to fall, and experience says that to begin with, a well-chosen G* and G remain correlated (because many of the things you do to reduce G* also reduce G). It is only over time that G* and G detach.
Pretty much every trick in organization design or management can be thought of as a partial solution to this problem. Listing “anarchy” or “absolute authority” explicitly on a short list of solutions is therefore a bit misleading.
At work a large part of my job involves choosing G*, and I can report that Goodhart’s Law is very powerful and readily observable.
Further: rational players in the workplace know full well that management desires G, and that G* is not well-correlated with G, but nonetheless if they are rewarded on G*, then that’s what they will focus on.
The best solution, in my experience, is mentioned in the post: the balanced scorecard. Define several measures G1, G2, G3 and G4 that are normally correlated with G. The correlation is then more persistent: if all four measures improve, it is likely that G will improve.
G1, G2, G3, G4 may be presented as simultaneous measures, or, if setting four measures in one go is too confusing for people trying to prioritise (the fewer the measures, the more powerful each is), they can be sequential. I.e., if you hope to improve G over two years, measure G1 for two quarters, then switch the measurement to G2 for the next two, and so on. (Obviously you don’t tell people in advance.) NB: this approach can be effective, but will make you very unpopular.
Why obviously? Are you so afraid that people would do the right thing without immediate incentives?
I think I’d measure G1 first, but would tell in advance that next quarter we will measure that one of G1,G2,G3,… which will be most critical at the beginning of that quarter.
Goodhart’s Law is a very nice corollary to the Snafu Principle: Communication is impossible in a hierarchy.
Temple Grandin has written about the importance of finding relevant, measurable standards—the example she gives is the number of cattle falling down on the way to slaughter. Not falling down means the genes, food, lighting, walking surface etc. are all good enough.
Thing to check: Do measures used as targets for policy always become completely useless, or do they sometimes become increasingly less useful, but not totally useless? Does culture matter? I suspect that the amount of judgement which people are allowed to mix into the system varies a lot.
She seems to believe that thinking visually will be more likely to produce such standards than (as most people do) verbally.
I’m not convinced of that—dog and cat show standards are an example of well-defined visual standards not producing reliably good results.
I have no idea if it’s even possible to have that good a standard for financial markets.
I’ve heard “you can’t manage what you can’t measure”, but I think “you can’t manage what you can’t perceive” is better. Is it possible to generalize the idea of the king traveling incognito to see how the kingdom is doing?
Once management recognizes that there is something to measure, I think they do an OK job measuring it—secret shoppers come to mind. But there’s something more subtle about when you take for granted that G = G* and don’t even think to verbalize your true values, so can’t measure them.
The secret shoppers are a variant of “the king going incognito”—but not as good in some ways because they may be tasked with evaluating according to a checklist, and thus could still be trapped by G vs G*.
I believe that the problem isn’t that true values aren’t verbalized, it’s that they can’t be fully verbalized. Language is too low-bandwidth to capture all the aspects of a situation.
The point of a king going incognito isn’t just to enforce existing, verbalized rules, it’s to see how things are in the kingdom. It’s a bit easier for a king than an AI because a king is more like a subject than an AI is like people.
Does culture matter?
Yes. Culture is, partly, a process of making people behave in a predictable fashion. If you make your subordinate as similar to you as possible, then there is a good chance that he/she will perceive G instead of G*. But you have to be committed to juggling G and G*, and take the risk that someone actively pursuing only G* gets ahead of you.
That’s not even true: when you measure results, the Soviet Union was run about as well as any other country. It was just run differently.
Anyway, it doesn’t even seem mathematically obvious to me that optimizing for G* will reduce correlation between G and G*.
Let G = market value of nails, G* = number of nails. Once the G* target is introduced, everyone switches from making medium nails to making tiny nails, but the correlation between market value of nails and number of nails across different factories is still very high.
There is some loss of value from optimizing for G* as opposed to optimizing for G, but no incentive system is perfect, and incentive systems have costs, so a simple incentive scheme might still be better than paying the cost of a more accurate but far more complicated scheme.
blogospheroid: When you pick a metric of success, countries will game it by doing well on that metric, yet not achieving what is really meant by success. A good example would be the Soviet Union, whose leaders constantly made sure they did well by the metrics, yet were actually far from successful.
taw: Not true—the Soviet Union did about average by those metrics, so they had about average success.
me: *falls out of chair*
Key question: are the metrics optimized by the Soviet Union identical to the metrics suggested to evaluate the success of the Soviet Union?
You misread me. GDP is one of those really-hard-to-game, high-correlation-with-everything-meaningful metrics, and the Soviet Union did OK by other metrics like access to clean water, electricity, life expectancy, child mortality, and pretty much everything else you can think of.
People claim that the Soviet Union was a disaster as if it were a well-established fact, while it was not. South America was a disaster. India was a disaster. Indonesia was a disaster. Africa was a disaster. The Soviet Union and other Communist countries were fairly average.
What you’re saying is basically “the Soviet Union was unsuccessful, and I base that on my feelings about it and no metrics of any kind”.
The Soviet Union’s GDP was approximately half military spending. In other words, at least half of Soviet GDP was an almost complete dead-weight loss to its citizens. GDP is just an aggregation of total spending; if money is spent on pure dross, it still shows up as an improvement in GDP.
The Soviet Union killed tens of millions of its people through work camps, starvation and purges. How is that not a disaster?
And GDP is not hard to game if you’re centrally planning the economy. And even that excludes simply lying about your figures.
This is a really good point. It kind of goes to the heart of the emptiness of GDP as a statistic. What was the death toll of a 5% increase in GDP? The Soviet economy also was infamous for overproducing big capital goods but failing to produce consumer goods people actually wanted to buy.
Similarly, Ceauşescu supposedly managed to clear Romania’s national debt, but, in doing so, impoverished the people and crushed their spirit. The socio-cultural damage he did to the country cannot be overstated. (btw I’m half-Romanian and lived there during his reign).
But did they game GDP?
Your insinuation seems to predict that there was a dramatic drop after the fall of communism. It did fall by half, but that just brought it back to levels reported in 1985. Some communist countries, such as Poland and Hungary, barely had any dip.
Russia continued to tumble, which is consistent with what was going on. If you trust western GDP figures, the most skeptical position I can imagine is taking the 1997 figures as a proxy for the 1985 figures, and concluding a 1.5x fudging. Which is not much for taw’s purposes.
That accounts for deception, but not for the difference between GDP and true economic value added, which is the whole reason GDP was raised as an example. Communist countries game GDP by getting people to produce large quantities of worthless goods (like gigantic nails). Those giant nails added to GDP, but did they make anybody better off?
GDP is an approximate measure of material well being. If your economic success metric classifies a society with mass starvation and routine shortages of basic goods as being as successful as a society that doesn’t, then your metric is busted.
At least, that is the cover story that Naily uses to hide his tracks. Clippy, start taking notes!
There are several points here. What I endorse is what I took to be TAW’s original point: people laugh at these stories and reinforce basically false beliefs about Soviet efficiency. The stories about tiny nails are true, but they are not representative. For these purposes, it is irrelevant if the goal of the efficiency was military production. The work camps are relevant if that is how they achieved efficiency, but I don’t think that’s a popular belief.
Also, people compiling GDP, like the CIA, try not to count worthless goods. They also compiled civilian consumption, if you’d like to try to exclude military spending, but I don’t know where the data is.
I’m not sure I endorse the use of GDP for general success of society. It is very convenient to talk about relative changes in GDP, though. No one is claiming that the USSR was a rich society, only that its GDP was multiplied by a reasonable number over the course of the century. But I am claiming that it didn’t suffer mass starvation after Stalin.
Do I have a source for this? Everything I can find seems to point towards it being a joke.
The story of the giant nail is a joke, appearing in Krokodil, c. 1960. I switched back to the tiny nails because it was pretty close to anecdotes I’ve heard that I’m pretty sure were not jokes. But those were oral, so I can’t cite them. Do you accept anecdotes from Alec Nove? I see quoted from p94 of his 1977 Soviet Economic System “It is notorious that Soviet sheet steel has been heavy and thick, for this sort of reason. Sheet glass was too heavy when it was planned in tons, and paper too thick.” On p355 of his 1969 Economic History of the USSR (or p365 of the 1993 edition):
Ok, that makes sense.
Here’s the original nail joke:
– Кому нужен такой гвоздь? (“Who needs a nail like that?”)
– Это пустяки! Главное – мы сразу выполнили план по гвоздям... (“That’s nothing! The main thing is, we fulfilled the nail plan right away...”)
Efficiency isn’t just the stuff you produce; in economics it’s allocative efficiency (roughly, the value of the stuff you produce), not mere technical efficiency, that matters. GDP data is collected at a pretty high level, and I’d be surprised if the CIA could adjust effectively for low-value production. Even just looking at civilian production won’t do, because it doesn’t account for mismatches of supply and demand, e.g. twice as many shirts and half as many shoes as people demand.
It’s true that the USSR grew a lot in the 1950s and 1960s, and it would be implausible to suggest it was all wastage. But that can be explained by convergence, specifically the increase in capital stock over that period. Lots of countries managed to industrialise without communism, so I can’t really attribute this growth to communism per se. I’d be willing to accept this as evidence that communism wasn’t a total failure (since it did produce positive side effects), but not that it was a success.
Whether mass starvation happened after Stalin is beside the point. Stalin was part of the system. There’s no reason why the USSR should have had famine when western countries had no difficulty, so I think any starvation is attributable to communism.
Is that really true?
http://en.wikipedia.org/wiki/Suppressed_research_in_the_Soviet_Union#Statistics
By the numbers, Russia fell off a cliff when the USSR dissolved; I’ve always wondered how much of that was genuinely due to transition troubles, kleptocracy etc. and how much was just poor USSR performance finally showing up in the statistics.
It was genuinely due to transition troubles. Many former Communist countries did reasonably well in transition—usually those that were close enough to the west that they could switch their trade patterns effectively and not be caught up in the mess; and you really have no way to fake life expectancy and such—which suffered a lot in the most transition-affected countries like Russia.
citation needed
The USSR was a dump. Saying things like
“That’s not even true, when you measure results Soviet Union was ran about as well as any other country. They were just ran differently.”
is revisionism of the worst kind. I think a simple but informative hypothesis is that Putin’s Russia is mostly the same place with the same institutions in charge, but sans the explicit communist ideology.
If you just consider the endpoint, maybe. But why would you do that? What would you be trying to show?
IMHO, if we consider the time period 1918-1990, South America, India, Indonesia, and Africa—not to mention China, Japan, Mexico, and, gee, that’s pretty much the whole world, isn’t it? - all made more progress than the Soviet Union did. East Germany and large parts of eastern Europe probably made negative economic progress. It doesn’t impress me that they were still better-off than parts of Africa after 50 years of decline.
When discussing the Soviet Union, and more specifically Russia, you have to also consider the beginning point. It should be noted how far behind Russia was compared to the rest of Europe in 1918. Coming out of abject serfdom bordering on generalized slavery, they actually made tremendous progress, in both abstract metrics and tangible results in quality of life, up until the late ’50s or early ’60s. Over time that then declined.
In any case, to an extent their G* was “production”, measurable production, in the sample case: nails. Their G was not the market value of nails; their G was “progress through central planning”, but they didn’t know how to measure “progress” except through the early-capitalist metrics of “production”. Thus: produce more = progress, in their practice. Our G* is GDP. People seem so happy with our GDP, without reflecting on things like income disparity, stratification of wealth, etc. If we allow it, we can G* ourselves into a mutant 3rd-world nation, with great GDP performance but generally declining quality of life. G is quality of life. Economists, and lay people, generally equate the two, and they correlate generally, but they are not irrevocably entangled.
I’m far from an expert on economic history, but I don’t think you can reasonably say that South America in general did better economically. You say that the endpoint for the Soviet Union was better; let’s say they were about equal. But South America at the beginning of the 20th century was reasonably well developed economically. Argentina in particular was pretty much on the same level as western Europe, far ahead of Russia, which was still rather backwards even though it was rapidly developing (largely based on mostly French loans/investments). I’m not so sure about South America as a whole, probably slightly ahead or about even. And then Russia got thoroughly wrecked by two world wars while South America was untouched. I think the Soviet Union wins, even though by how much isn’t clear.
EDIT: Looked up some figures, seems like the Russian Empire had about 2.5 times the population and 2.5 times the GDP of Latin America, so about even was right.
Most people on this supposedly rationalist site don’t even bother looking at the data when it comes to the Soviet Union; they have an instant emotional reaction. In case you’re one of those who actually care about the data, here, I’ve made it easier for you.
How exactly have you determined the instant emotional reaction of most of the people on this site in response to the Soviet Union? I haven’t seen most people even comment on the subject, much less display obvious evidence of emotional involvement.
Did you actually think through your estimates of soviet-emotionalism in the population, or is this a case of “the pot calling a non-representative sample of kettles black”?
I’ll agree that “most people [...] don’t even bother looking at the data [...]”—I, in particular, am not sufficiently invested in this argument to go to the inconvenience of reading a PDF. The effect of modifiers “this site” and “Soviet Russia” I have no interesting opinion on.
(By the way: horrible format for Internet content. If you can read this, please don’t upload your information to the Internet in PDF format. Make an HTML file.)
Here’s an HTMLized version, albeit one that still looks like a PDF (though one you don’t have to download, doesn’t use any browser plugins, and can’t give you a virus).
We have to learn to live with PDFs, as virtually all research is formatted as PDFs. Sane (single-column, portrait-only) PDFs like the linked paper are not particularly worse than constant-width websites. You are exaggerating the inconvenience.
The problem is PDFs which do things that make sense only on paper, like double columns or alternating portrait/landscape; these are really, really bad for reading on screen. But what stops PDF readers from having some hacks to make them bearable? I cannot think of any reason. And it would definitely be easier to hack PDF readers than to make all researchers and all research journals in the world switch to HTML.
The related problem of tables being in an appendix, as opposed to floating, seems harder to solve, but it’s nowhere near as bad as double-column PDFs.
The three biggest problems with PDFs as a format for Internet content are:
1. The text display does not adapt to your window.
2. Viewing the content requires running additional processes, adding CPU and memory usage.
3. PDF viruses.
You pointed out (1), but (2) is no less annoying to me personally. That said: yeah, I got no control over this.
(1) is genuine (but then many websites assume constant width, so it’s not a PDF-exclusive issue), but (2) and (3) sound totally made-up to me. Browsers have had far more security vulnerabilities, and use far more CPU/memory, than PDF readers.
PDF viruses exist.
Indeed, I’ve been infected by PDF-based viruses more than once. Updating Acrobat and turning off JavaScript in PDFs isn’t enough to keep you safe, either; I finally added NoScript to Firefox in order to prevent any PDFs from being displayed without an extra enabling click, so that only PDFs I trust are ever downloaded.
Of course, this has little relevance to scientific papers: the PDFs that you need to worry about are the ones that you never intended to download in the first place, that are downloaded in the background via JavaScript or an iframe embedded in an ad on a random webpage. (I once caught one from Kaj Sotala’s LiveJournal page, for example… just visiting the page was enough to infect my machine.)
But a browser alone will have fewer vulnerabilities (and probably use less resources) than a browser + a PDF reader.
Nearly echoing FAWS, a browser alone will have less CPU/memory usage than a browser + a PDF reader. More importantly, there is no delay to load a viewer when visiting an HTML page, whereas there is for PDFs.
I’m not particularly invested in the issue, but it seems like you’re underestimating the importance of the East/West Germany division. That is about as close as we’re ever going to get to actually experimental conditions for testing this hypothesis. We took one nation, divided it in half, structured the two economies in accordance with the leading theories of the time, let them develop, and found a clear winner. Of course there are possible sources of error, and maybe there are reasons to think the lessons learned in Germany don’t apply to the rest of the Soviet bloc, but this is about as compelling as evidence gets in economics.
It’s very compelling evidence that common knowledge is wrong, as GDP per capita ratios of West and East Germany were virtually identical in 1950 and 1990.
All the difference happened during the war (East Germany suffered incomparably more fighting and destruction than West Germany) and the earliest years of occupation (the Soviets plundered everything they could and destroyed the rest, while the Western Allies gave massive levels of economic aid in the form of the Marshall Plan).
The German experience is great proof that the difference in economic performance between Communism and Capitalism is minor.
I was going to raise two of the same points (the Marshall Plan and Soviet looting), but I would consider only managing not to fall even further behind a relative failure. And that’s with both the FRG (e.g. giving the GDR access to the EC market) and the Soviets (I can’t find a source right now, but IIRC they tried to prove that the socialist system could allow a high standard of living) trying to prop up their economy towards the end of that period.
The paper I’ve linked to so many times deals with this convergence question. There seems to be no evidence for any kind of global economic convergence, or any worldwide correlation between economic levels and economic growth; you seem to converge only to the levels of your geographically close trading partners. East Germany mostly traded with countries even poorer than itself. West Germany traded mostly with very rich countries.
Of course you could ask questions like “so why didn’t they trade more with Western Europe and the USA etc.”, but you could be asking the same question about Mexico, Argentina, Indonesia, New Zealand, and countless other countries which did worse than the Communist average.
Overall, evidence for Communism being an economic failure is shockingly underwhelming relative to how widely and strongly it is believed.
I think recovery after war and plundering is a bit different than normal convergence. Wrecked developed nations don’t behave like developing nations of the same GDP. Since a main difference was East Germany being even more wrecked, more of their GDP growth should have been of the easier rebuilding/recovery sort.
Heh, the third paragraph sounds rather familiar :).
The Soviet Union’s GDP was approximately half military spending. In other words, at least half of Soviet GDP was an almost complete dead-weight loss to the citizens.
See Greg Lewis’s post here: https://www.lesswrong.com/posts/dC7mP5nSwvpL65Qu5/why-the-tails-come-apart and Scott Alexander’s discussion here: http://slatestarcodex.com/2018/09/25/the-tails-coming-apart-as-metaphor-for-life/
Also see our paper formalizing the other Goodhart’s Law failure modes: https://arxiv.org/abs/1803.04585
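To make the tails-come-apart failure mode from those links concrete, here is a minimal simulation sketch (the candidate pool, the noise model, and all the numbers are my own invention, not taken from either post or the paper): we draw a true value G for each candidate, a proxy G* that correlates with it, and compare the true value of the top candidates selected by the proxy versus by G itself.

```python
import random
import statistics

random.seed(0)

# Each candidate has a true value G; the proxy G* = G + noise,
# so G and G* correlate strongly but not perfectly.
N = 100_000
G = [random.gauss(0, 1) for _ in range(N)]
G_star = [g + random.gauss(0, 1) for g in G]

def mean_true_value_of_top(scores, k=100):
    """Mean true G of the k candidates ranked highest by `scores`."""
    top = sorted(range(N), key=lambda i: scores[i], reverse=True)[:k]
    return statistics.mean(G[i] for i in top)

by_truth = mean_true_value_of_top(G)       # select on the real target
by_proxy = mean_true_value_of_top(G_star)  # select on the proxy

# The harder we select on G*, the more the top of the list is
# dominated by candidates with lucky noise rather than high G.
print(f"selected by G : mean true value {by_truth:.2f}")
print(f"selected by G*: mean true value {by_proxy:.2f}")
```

The gap between the two numbers is the regressional-Goodhart tax: even with no agent gaming anything, picking extremes of the proxy systematically overstates the target.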
1) Examples of G* should be given a cost-benefit analysis. Yeah, scammers and parasites exist, but societies that use money still seem to be better off than societies that try to get rid of it.
2) It’s unclear to me why you list CEV as one of the solutions. We use money to allocate limited resources. If magic nano-AI appears and resources become unlimited, why keep score at all? If it doesn’t and resources stay limited, how does CEV help you distribute bread, and would you really like it to replace money? (I wouldn’t. No caring daddies for me, please.)
In the case of an FAI, G would be friendliness and G* the friendliness definition. Avoiding a Goodhart’s Law effect on G is pretty much the core of the friendliness problem in a nutshell. An example of such a Goodhart’s Law effect would be the molecular smiley faces scenario.
Ah, sorry. I’ve read the post as saying something different from what it actually says.
Good discussion.
The point I wanted to make was about Extrapolated Volition as a strategy to avoid Goodhart’s law issues. If you extrapolate the volition of a person towards the “person he/she wants to be” and put the resulting goal as G*, it will be pretty much as close to G as can be. I presented CEV as an example, since the audience is more familiar with it.
And FAWS, your definition of G and G* in the friendliness scenario is perfect. I’ve nothing more to add there.
I noticed this tendency in the British running of hospitals, schools and police forces. The government got hooked on the idea of targets and not on medicine, education and public order.
And yet, the correlation between government targets and results is probably a lot higher than the correlation between teachers’/doctors’/policemen’s fuzzy ideas of how their job should be done and results.
Maybe, but how probably and how much? Why do you think that? For which governments? By what measure?
I wish there were some examples (other than the Soviet nails) … if I had some better idea of what G and G* might actually represent, I’d be able to more easily get my head around the rest of the post.
I’m surprised no one has yet brought up (G*) the LW karma system as a proxy for (G) contributing to “refining the art of human rationality”.
LW karma is an interesting example because no one has direct access to the karma giving algorithm.
It’s a bit like telling the nail factory that you’re going to evaluate them on something, but not telling them whether it’s nail mass or number or something else until the end of the evaluation period.
If the one being evaluated knows nothing about how he’s going to be evaluated except that it’s going to be a proxy for goodness, then he can’t really cheat. However, he might know that the criteria are going to be very simple, so he makes one very massive nail and many miniature ones.
This reminds me of the way I hear they do state censorship in China. The censoring agencies don’t actually give out any specific guidelines on what is allowed and what isn’t, instead just clamping down on cases they do consider to be over the line. As a consequence, everyone self-censors more than they might with specific guidelines: with the guidelines, you could always try to twist their letter to violate their spirit. Instead, people are constantly unsure of just exactly what will get you in trouble, so they err on the side of caution.
While I strongly oppose state censorship, I can’t help but admire the genius in the system.
Also, unlike Saudi Arabia, they don’t make many efforts to block pornography. As a result, the average Chinese teen is less likely to know how to access blocked sites than the average Saudi teen is (or so I read; I’m not aware of any study on that).
Or Section 28, which didn’t forbid the discussion of homosexuality in the classroom, only its promotion... but since promotion wasn’t defined, schools erred on the side of not mentioning it.
Depressing. This would mean that most informal norms of censorship are much more resilient and effective than most formal laws censoring material.
Arguably this makes them much harder to dislodge than even the intentionally vague Chinese law, since, I guess, you can’t really be prosecuted under them; there is no censorship law to point to, right?
I never thought of the LW karma system a proxy for that.
What is your interpretation of it? It seems a pretty plausible hypothesis to me that it’s a proxy for something, and has come to be relied upon as such. If we think Goodhart’s Law applies in the case of karma, the final prediction in the “speculative origin” section might be something to be concerned about.
I think of it as a proxy for “valued member of the community”—if someone has karma, then people like their posts and comments. I’m mostly here to have fun and pass the time, and I happen to find discussing rationality to be fun. I don’t really expect refining the art of human rationality to be well-correlated with a popularity contest.
And do you think Goodhart’s Law, as presented in the post, applies here? That is, we should expect that eventually people (through gaming the system) end up with high karma without that in fact reliably correlating with being valued members of the community?
As a data point, one thing I’ve noticed that seems to give a disproportionate amount of karma is arguing with someone who’s wrong and unwilling to listen. It’s easy to think they might come around eventually, and each point you make against them is worth a few points of karma from the amused onlookers or fellow arguers—which might tell you that you’re making a valuable contribution, and so encourage you to keep arguing with trolls. This is my impression, at least.
Edit: (The problem being—determining the point of diminishing returns.)
Except we’re like the self-employed in this regard. You can’t do anything with karma. It won’t impress your boss. It is just a way of quantifying how valued you are by the community. An employee doesn’t really care about G at all. She cares about G* because that’s what impresses the boss, which furthers her own goals. But if you are your own boss you do care about G; G* is just an easy way to measure it. For me at least, this is the case with karma. I can’t do anything with the number, but it suggests that people like me.
So perhaps revenue sharing is a way to help address the problem. Instead of trying to come up with ways to measure what you care about, make the people beneath you care about it too. Of course this is a lot easier with money than it is with values.
My boss cares about karma.
Only if people care about having high karma. It’s probably fairly easy to game karma by making multiple accounts and voting yourself up, but why bother?
What? You mean Karma doesn’t reliably correlate with objective worth of the individual? Damn.
In education, this is one of the criticisms of high-stakes testing: you’ll just get schools teaching to the test, in ways that aren’t correlated to real learning (the test is G*, real knowledge/learning is G). People say the same thing about the SAT and test prep—kids get into better colleges because they paid to learn tricks for answering multiple choice questions. The Wire does a great job of showing the police force’s efforts to “juke the stats” (e.g. counting robberies as larcenies) so that crime statistics (G*) look better even while crime (G) is getting worse. Athletes get criticized for playing for their stats (G*), or trying to pad their stats, instead of playing to win, when the stats are supposed to be a measure of how much a player has contributed to his team’s chances of winning (G). I’m not sure if it’s historically accurate, but I’ve heard that body count (G*) was used by the US as one of the main metrics of success (G) in the Vietnam war, and as a result we ended up with a bunch of dead bodies but a misguided war.
In general, any time you measure something you care about in order to incentivize people, or to hold people accountable, or to keep track of what’s going on, and the thing you measure isn’t exactly the same as the thing that you care about, there’s a risk of figuring out ways to improve the measurement that don’t translate into improvements on the thing that you care about.
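A toy sketch of that risk in code (the factory, the product line, and every number here are invented for illustration, not real planning data): a “factory” with a fixed labor budget picks whatever product maximizes the announced target, and you can watch the true goal, usable nails, fall out of the optimization.

```python
# Hypothetical nail factory with a fixed labor budget; it picks the
# product that maximizes whichever target the planner announces.
LABOR_HOURS = 1000.0

# (name, labor hours per unit, weight per unit in kg, usefulness per unit)
PRODUCTS = [
    ("tiny pin",    0.01, 0.001, 0.0),  # useless, but you can stamp out a lot
    ("normal nail", 0.10, 0.010, 1.0),  # what people actually want
    ("giant spike", 1.00, 10.0,  0.0),  # useless, but very heavy
]

def metrics(product):
    """Total count, weight, and usefulness if all labor goes to `product`."""
    name, hours, kg, use = product
    count = LABOR_HOURS / hours
    return {"count": count, "weight": count * kg, "usefulness": count * use}

def produce(target):
    """Return (product chosen, true usefulness) under the announced target."""
    best = max(PRODUCTS, key=lambda p: metrics(p)[target])
    return best[0], metrics(best)["usefulness"]

for target in ("count", "weight", "usefulness"):
    name, usefulness = produce(target)
    print(f"target={target:<10} -> makes {name}, true usefulness {usefulness:g}")
```

Targeting count gets you a flood of tiny pins, targeting weight gets you giant spikes, and in both cases the usefulness the planner actually cared about is zero; only measuring usefulness directly (which is exactly the hard part) yields normal nails.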
The health and/or beauty of a woman (G) and her scale-reported weight (G*): these might be somewhat correlated under some circumstances, but they are definitely not identical and can diverge rather sharply due to crazy diets.
Here’s a few.
Well there’s a few described here, for instance: http://lesswrong.com/lw/le/lost_purposes/
Products that are good for humanity, and products that are profitable
Call time (G*) or calls taken (G*) in a call center, where what they care about is customer satisfaction (G) (at least inasmuch as it serves profitability).
Thanks,
I suggest editing a “summary break” into this post to create a “continue reading” link on the frontpage. It’s the 6th button from the left atop the editing interface.
Someone already did it for me. But I will note it for next time. Thanks.
Diversity within an ecosystem is a way to reduce the impact of Goodhart’s law. If different universities used very different G*s for their hiring decisions, it would be harder for young researchers to optimize for any particular G*.
I don’t see how hierarchical rule is a solution. Hierarchy requires the people at the top of the hierarchy to give orders to people at the bottom to achieve certain outcomes, and to measure whether those outcomes are achieved.
Often a goal is set not on the basis of a single set of arguments justifying it, but because it is a good compromise point between multiple arguments, motivations or interest groups. For example, human rights formulations don’t perfectly fulfill any group’s desires (utilitarian, egalitarian, deontological, religious, etc.) but are a point of overlap between their goal sets (utilitarians and deontologists both think torture and murder are generally bad). Similarly with GDP: economic growth is a shared interest of several groups in society.
So some instances of Goodhart’s law may be an observation that particular sets of goals are not being perfectly fulfilled.
I’ve had some interest in TOC (Theory of Constraints); could you please expand on how it works to get G* closer to G?
Generally I’ve found TOC to be some really interesting semi-scientific stuff mixed with a ton of self-promotion by Goldratt.
See also, Campbell’s Law
I think your link needs to be fixed. If you mark it up explicitly (brackets/parens notation), that should fix it.
One method is to have no G*. Tell people some of the things you’ll be looking at, but don’t give them specific targets or tell them exactly how you will be judging G.
This is the method currently in use to assess the quality of research in departments of universities in the UK. Every department that wants to be assessed must supply certain very detailed information (e.g. for each member of staff declared as doing research, a list of their recent publications, grants held, awards received, etc.). The actual assessment is carried out behind closed doors. They give general guidelines about their criteria, but the process is one of “expert review, informed by indicators where appropriate”.
This is done every few years. The data requested changes every time, and sometimes even the name of the exercise. As government funding depends on the outcome, every research-active department is desperate to get a good rating.
The same issue came up many years ago in the work of W. Edwards Deming in quality control.
Any data used to reward and/or punish people will become useless for managing the organization.
I like this article a lot. My solution (borrowed from Nassim Taleb) would be skin in the game: any potential outcome resulting from the actions of the agent should also affect the agent.
Interesting: Left Anarchist, Right Libertarian, and Distributist ideals are fundamentally the same. While Right-Libertarians pay a form of lip service to the idea of hierarchical corporate capitalism, scratch them a bit and you find they long for SV startups or farmers on the American Frontier, as presented in books like The Moon Is A Harsh Mistress: family businesses or small, egalitarian workgroups like the 3 guys who founded YouTube. And Left-Anarchism and Distributism are pretty much the same; the difference is LA putting a mainly socially liberal sauce on it, while Distributism puts a socially reactionary medievalist-Catholic sauce on it (medieval artisans loosely cooperating in guilds being their ideal).
From this angle, a lot of different people want a small-business world; apparently we can have both socially liberal and socially reactionary versions of it, we just don’t know how to deal with the Economies of Scale. But apparently if we could figure that out, it would be an economic model usable by many different people. Distributist Catholics could worship the idea of the Family (as a productive economic unit), Left-Anarchists would have their non-hierarchy, and Right-Libertarians could have people get off their lawn.
CEV isn’t even an idea yet. It’s a collection of phrases stringing together undefined terms. What isn’t undefined is self-contradictory or incoherent. I very much doubt that it is capable of being refined into anything coherent and consistent; it has contradictions built into its core.
I think the Lucas Critique is actually more general than Goodhart’s Law. Goodhart is a specific form of Lucas!
It’s true that GDP is not identical to national welfare. And you can come up with anecdotes where some welfare measure isn’t fully captured by GDP (both positive and negative).
But GDP is useful, because it is very hard to game. The examples in your “fetishism” link are very weak. Unlike the nails example, where we can all agree that the factory made the wrong choice for society, it is far from clear that the GDP examples resulted in the wrong policy, even if GDP is only an approximation for welfare.
GDP is not a good example of Goodhart’s Law. It’s nothing at all like the (broken) correlation between inflation and unemployment, which varies widely depending on policy choices.
I’m not sure whether it’s that hard to game GDP, but I am sure that it just measures the money economy. If people need to spend more on repairing damage, or on something which is useless for them, then the GDP goes up just as if they were getting more of what they want.
An example of wheel-spinning: tax law becomes more complex, so people need to spend more on help with their taxes, and possibly work longer hours to afford it. More economic activity, but are their lives better?
You are correct that not all activity recorded in GDP is welfare-enhancing. (Note that GDP also underreports some positive welfare activities.)
But that’s not the important point. The important point is: does the difference between the GDP measure, and some more accurate measure, have any implications for economic policy? The answer seems to be no: attempts have been made to define more precise welfare-tracking measures of national welfare, and the result seems to be that they track GDP very closely, and that there is basically no implication for policy decisions.
(Perhaps try an analogy? To oversimplify, in dermatology there are millions of different infections you might get on your skin, but roughly speaking only a small handful of possible treatments. A good doctor doesn’t attempt expensive diagnostics to figure out exactly what you have; the proper medical treatment is only to distinguish what general class of infection you have, so that the correct treatment can be applied from the small possibilities. Similarly, GDP is easy to measure, and results in pretty much the correct policy suggestions. Anecdotes of how it is not perfect, are only important in so far as they would imply different policy choices, when they usually don’t.)
I’m skeptical. Can I get a link to one or more of the alternative welfare-tracking measures?
That’s a fair point. I haven’t studied proposed alternative measures.
I like using immigration and emigration as rough measures for comparing how good places are to live, with an allowance for willingness to risk dying to move.
Have the immigration and emigration stats ever been out of sync with GDP?