Unconscious Economics
Here’s an insight I had about how incentives work in practice, one I’ve not seen explained in an econ textbook or course.
There are at least three ways in which incentives affect behaviour: 1) via consciously motivating agents, 2) via unconsciously reinforcing certain behaviour, and 3) via selection effects. I think perhaps 2) and probably 3) are more important, but much less talked about.
Examples of 1) are the following:
When content creators get paid for the number of views their videos have… they will deliberately try to maximise view-count, for example by crafting vague, clickbaity titles that many people will click on.
When salespeople get paid a commission based on how many sales they make, but do not lose any salary due to poor customer reviews… they will selectively boast about and exaggerate the good aspects of a product, and downplay or sneakily circumvent discussion of the downsides.
When college admissions are partly based on grades, students will work really hard to find the teacher’s password and get good grades, instead of being independently curious, exploring, and trying to deeply understand the subject.
One objection you might have to this is something like:
Look at those people without integrity, just trying so hard to optimise whatever their incentives tell them to! I myself, and indeed most people, wouldn’t behave that way.
On the one hand, I would make videos I think are good, and honestly sell products the way I would sell something to a friend, and make sure I understand my textbook instead of just memorising things. I’m not some kind of microeconomic robot!
And on the other hand, even if things were not like this… it’s just really hard to creatively find ways of maximising a target. I don’t know what appeals to ‘the kids’ on YouTube, and I don’t know how to find out except by paying for some huge survey or something… human brains aren’t really designed for that kind of maximisation. I couldn’t optimise in all these clever ways even if I wanted to.
One response to this is:
Without engaging with your particular arguments, we know empirically that the conclusion is false. There’s a wealth of econometrics and micro papers showing how demand shifts in response to price changes. I could dig out plenty of references for you… but heck, just look around.
There’s a $10,000/year daycare close to where I live, and when the moms there take their kids to the cinema, they’ll tell them to pretend they’re 6 and not 7 years old just to get a $3 discount on the tickets.
And I’m pretty confident you’ve had persuasive salespeople peddle you something, and then gone home with a lingering sense of regret in your belly…
Or have you ever seen your friend in a queue somewhere and casually slid in right behind them, just to get into the venue 5 minutes earlier?
All in all, if you give people an opportunity to earn some money or time… they’ll tend to take it!
This might or might not be a good reply.
However, by appealing to 2) and 3), we don’t have to make this response at all. The effects of incentives on behaviour don’t have to be consciously mediated. Rather...
When content creators get paid for the number of views their videos have, those whose natural way of writing titles is a bit more clickbait-y will tend to get more views, and so over time accumulate more influence and social capital in the YouTube community, which makes it harder for less clickbait-y content producers to compete. No one has to change their behaviour or their strategies much: when you change the incentives, you change the rules of the game, and so the winners will be different. And even the less fortunate producers will find that those of their videos which are on the clickbait end of things tend to bring them more views and money, so insofar as they just “try to make videos they like, see what happens, and then do more of what worked”, they will be pushed in this direction.
When salespeople get paid a commission based on how many sales they make, but do not lose any salary due to poor customer reviews… employees of a more Machiavellian character will tend to perform better, which will earn them more money and social capital at work, and this will give Machiavellian characteristics more influence over that workplace (before even taking into account returns to scale of capital). They will then be in positions of power to decide which new policies get implemented, and might choose those they genuinely think sound most reasonable and well-evidenced. They certainly don’t have to mercilessly optimise for a Machiavellian culture; yet because they have all been pre-selected for such personality traits, they’ll tend to be biased towards choosing such policies. As for their more “noble” colleagues, they’ll find that, out of all the tactics they’re comfortable with and able to execute, the more sales-y ones lead to more high-fives from the high-status people in the office, more room in the budget at the end of the month, and so forth.
When college admissions are partly based on grades… the case is left as an exercise for the reader.
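The two unconscious channels above can be made concrete with a toy simulation. This is a minimal sketch, not a claim about actual YouTube dynamics: the payoff numbers, noise levels, and turnover rate are all illustrative assumptions. Each creator has a fixed disposition towards clickbait; nobody ever consciously optimises, yet reinforcement ("do more of what worked") plus selection (the worst performers drop out and are replaced by copies of the successful) push the population in the clickbait direction.

```python
import random

random.seed(0)

# Each creator has a "clickbait level" in [0, 1]. Two unconscious channels:
#  - reinforcement: a creator tries a slight stylistic variation and keeps it
#    if that video happened to get more views ("do more of what worked");
#  - selection: the worst-performing 20% leave and are replaced by copies of
#    survivors. Nobody consciously optimises for clickbait.
creators = [random.random() for _ in range(500)]

def views(level):
    # Clickbait helps, but noisily (noise scale is an arbitrary assumption).
    return level + random.gauss(0, 0.3)

def mean(xs):
    return sum(xs) / len(xs)

start = mean(creators)
for year in range(30):
    # Reinforcement: keep a random variation if it performed better.
    for i, c in enumerate(creators):
        variant = min(1.0, max(0.0, c + random.gauss(0, 0.05)))
        if views(variant) > views(c):
            creators[i] = variant
    # Selection: rank by one noisy season of views; bottom 20% are replaced.
    ranked = sorted(creators, key=views, reverse=True)
    survivors = ranked[:400]
    creators = survivors + [random.choice(survivors) for _ in range(100)]

print(f"mean clickbait level: {start:.2f} -> {mean(creators):.2f}")
```

The point of the sketch is that the mean clickbait level rises over time even though no individual ever decides to "be more clickbaity"; the incentive acts on the population, not on any one mind.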
If this is true and important, why don’t standard econ textbooks and courses explain it?
I have some hypotheses which seem plausible, but I don’t think they are exhaustive.
1. Selection pressure for explanations requiring the fewest inferential steps
Microeconomics is pretty counterintuitive (for more on the importance of this, see e.g. this post by Scott Sumner). Writing textbooks that explain it to hundreds of thousands of undergrads, even just using consciously scheming agents, is hard. Now both “selection effects” and “reinforcement learning” are independently difficult concepts, which the majority of students will not have been exposed to, and which aren’t the explanatory path of least resistance (even if they might be really important to a small subset of people who want to use econ insights to build new organisations that, for example, do better than the dire state of the attention economy. Such as LessWrong).
2. Focus on mathematical modelling
I did half an MSc degree in economics. The focus was not on intuition, but rather on something like “acquire mathematical tools enabling you to do a PhD”. There was a lot of focus on not messing up the multivariable calculus when solving strange optimisation problems with solutions at the boundary or involving utility functions with awkward kinks.
The extent of this mathematisation was sometimes scary. In a finance class I asked the tutor what practical uses there were of some obscure derivative, which we had spent 45 minutes and several pages of stochastic calculus proving theorems about. “Oh,” he said, “I guess a few years ago it was used to scam Italian grandmas out of their pensions”.
In classes when I didn’t bother asking, I mostly didn’t find out what things were used for.
3. Focus on the properties of equilibria, rather than the processes whereby systems move to equilibria
Classic econ joke:
There is a story that has been going around about a physicist, a chemist, and an economist who were stranded on a desert island with no implements and a can of food. The physicist and the chemist each devised an ingenious mechanism for getting the can open; the economist merely said, “Assume we have a can opener”!
Standard micro deals with unbounded rational agents, and its arsenal of fixed point theorems and what-not reveals the state of affairs after all maximally rational actions have already been taken. When asked how equilibria manifest themselves, and emerge, in practice, one of my tutors helplessly threw her hands in the air and laughed “that’s for the macroeconomists to work out!”
There seem to be few attempts to teach students how the solutions to the unbounded theorems are approximated in practice, whether via conscious decision-making, selection effects, reinforcement learning, memetics, or some other mechanism.
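One classic process-level story, largely absent from the courses described above, is tâtonnement: the price simply moves in the direction of excess demand, and the market gropes its way to the equilibrium whose existence the fixed-point theorem proved. A minimal sketch, with made-up linear demand and supply curves (the functional forms and the adjustment speed are illustrative assumptions, not anything from this post):

```python
# Tâtonnement sketch: no agent solves for the equilibrium; the price just
# moves in proportion to excess demand until the market clears.
# Demand and supply curves are arbitrary illustrative assumptions.

def demand(p):
    return 100 - 2 * p

def supply(p):
    return 10 + 4 * p

price = 1.0
for step in range(200):
    excess = demand(price) - supply(price)
    price += 0.05 * excess  # grope towards where excess demand is zero

# Analytic equilibrium: 100 - 2p = 10 + 4p, so p = 15.
print(round(price, 2))  # prints 15.0
```

Here each update is a contraction towards the equilibrium price, so the process converges without anyone ever writing down, let alone solving, the market-clearing condition. (With a large enough adjustment speed the same rule overshoots and oscillates, which is exactly the kind of process question the equilibrium-only treatment never raises.)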
Thanks to Niki Shams and Ben Pace for reading drafts of this.
I think it’s a common notion that if you were just good enough, you wouldn’t respond to incentives; I used to believe that more myself. The key realization is that the system will still produce these outcomes even if nobody consciously chooses to fall prey to them, and that to ‘avoid incentives’ in such a system you’d need to actually model what the incentives are and what outcomes they systematically produce, whether via selection effects, via optimizing according to feedback in a complex domain, or otherwise. One cannot have model-free integrity.
So it feels to me like a very fundamental insight, written up well. (It’s also related I think to some of the discussion about lying vs unconscious bias between Scott and Zack and Jessica and so on.)
In order to apply economic reasoning in the real world, this is an indispensable concept, and this post is my go-to link for it.