Credit
An enormous amount of credit goes to johnswentworth, who made this post possible.
This is a framing practicum post. We’ll talk about what incentives are, how to recognize incentives in the wild, and what questions to ask when you find them. Then, we’ll have a challenge to apply the idea.
Today’s challenge: come up with 3 examples of incentives which do not resemble any you’ve seen before. They don’t need to be good, they don’t need to be useful, they just need to be novel (to you).
Expected time: ~15-30 minutes, including the Bonus Exercise.
What Are Incentives?
At the beginning of a sowing season, the Government of India announces a list of guaranteed purchase prices for certain crops (e.g., rice, wheat, and cotton) to support farmers. If the market price for a crop falls below its guaranteed purchase price, government agencies purchase the entire quantity from farmers, provided the crop meets a minimum quality threshold. From a farmer's perspective, this encourages producing price-supported crops with quality just above the government's threshold, and no higher.
This is an economic incentive: There is a reward signal in the system. Farmers are rewarded for producing crops just above the quality threshold. On the other hand, they are not rewarded for producing higher quality crops. Here we see the defining features of incentives: A system (a farmer) “wants” some resource (money), and can get more of that resource in return for some actions (producing crops with quality just above the threshold level) than others (producing crops with quality well above the threshold level).
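To make this reward structure concrete, here is a toy model (my own sketch, with made-up numbers, not from the post) of a farmer's profit under a support-price scheme with a quality threshold:

```python
# Toy model of a minimum-support-price scheme with a quality threshold.
# All numbers are invented for illustration.

SUPPORT_PRICE = 20       # guaranteed price per unit, paid if quality clears the bar
QUALITY_THRESHOLD = 0.5  # minimum quality the government will accept

def effort_cost(quality):
    # Assumption: higher quality costs more effort/money to produce (convex cost).
    return 100 * quality ** 2

def revenue(quality, quantity=10):
    # The government buys everything at the support price iff quality >= threshold.
    return SUPPORT_PRICE * quantity if quality >= QUALITY_THRESHOLD else 0

def profit(quality):
    return revenue(quality) - effort_cost(quality)

# Profit peaks exactly at the threshold: extra quality adds cost but no revenue.
best = max([q / 100 for q in range(101)], key=profit)
print(best)  # 0.5 -- quality just at the threshold, and no higher
```

The shape of the payoff, not the particular numbers, is what matters: any step-function reward with a flat top rewards landing just past the step.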
Another example, with a direct notion of "reward": cash incentives for Covid-19 vaccination. Some US states are offering rewards for Covid-19 vaccination in the form of direct cash payments or lottery entries. We can identify a clear reward signal in the system: people are rewarded for getting vaccinated. Here again we see the defining features of incentives: A system (a person) "wants" some resource (money), and can get more of that resource in return for some actions (getting a Covid-19 vaccine) than others (not getting one).
What To Look For
In general, incentives should come to mind whenever there is some kind of reward signal. A system “wants” some resource, and can get more of that resource in return for some actions than others.
Useful Questions To Ask
In the support-price example, the Government of India announces a minimum quality requirement for the crops it will purchase. Crops of lower quality do not qualify for the program, but farmers are not rewarded for exceeding the minimum either. As a result, farmers will not only avoid the work required to produce higher-quality crops, they will even make their crops' quality worse: farmers with high-quality crops will mix small rocks or leftover crops from previous years into their harvest to increase the total quantity of "crop," and thus total revenue. Obviously the Government of India did not intend for farmers to throw gravel into their crops, but it accidentally incentivized it anyway.
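A quick back-of-the-envelope sketch (illustrative numbers only, not from the post) of why adulteration pays when the buyer pays per unit and enforces only a minimum quality bar:

```python
# Toy model: payment is price * quantity, gated only by a minimum quality check.
# All numbers are invented for illustration.

SUPPORT_PRICE = 20
QUALITY_THRESHOLD = 0.5

def revenue(quality, quantity):
    return SUPPORT_PRICE * quantity if quality >= QUALITY_THRESHOLD else 0

def mix(crop_qty, crop_quality, filler_qty):
    # Mixing in worthless filler (quality 0, e.g. rocks): the blend's quality
    # is the weighted average, and the quantity grows by the filler amount.
    total = crop_qty + filler_qty
    return (crop_qty * crop_quality) / total, total

# A farmer with 10 units of high-quality (0.9) crop sells honestly...
honest = revenue(0.9, 10)          # 20 * 10 = 200

# ...or dilutes with 7 units of rocks, staying just above the threshold.
q, qty = mix(10, 0.9, 7)           # blend quality 9/17 ~ 0.53, quantity 17
adulterated = revenue(q, qty)      # 20 * 17 = 340 -- the rocks are pure profit
print(honest, adulterated)
```

The farmer's optimal move is to add filler right up until the blend sits just above the threshold, which is exactly the gravel-in-the-grain behavior described above.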
In general, whenever we see incentives, we should ask:
What actions are getting rewarded?
What counterintuitive or unintended actions achieve high reward?
What about the cash-reward-for-Covid-19-vaccine example? Someone urgently in need of money might fake their vaccination status in order to collect the reward more than once.
The Challenge
Come up with 3 examples of incentives which do not resemble any you’ve seen before. They don’t need to be good, they don’t need to be useful, they just need to be novel (to you).
Any answer must include at least 3 examples to count, and they must be novel to you. That's the challenge. We're here to challenge ourselves, not just review examples we already know.
However, they don’t have to be very good answers or even correct answers. Posting wrong things on the internet is scary, but a very fast way to learn, and I will enforce a high bar for kindness in response-comments. I will personally default to upvoting every complete answer, even if parts of it are wrong, and I encourage others to do the same.
Post your answers inside of spoiler tags. (How do I do that?)
Celebrate others’ answers. This is really important, especially for tougher questions. Sharing exercises in public is a scary experience. I don’t want people to leave this having back-chained the experience “If I go outside my comfort zone, people will look down on me”. So be generous with those upvotes. I certainly will be.
If you comment on someone else's answers, focus on making exciting, novel ideas work, rather than tearing apart weaker ones. "Yes, and" is encouraged.
I will remove comments which I deem insufficiently kind, even if I believe they are valuable comments. I want people to feel encouraged to try and fail here, and that means enforcing nicer norms than usual.
If you get stuck, look for:
Environments in which there exists some kind of reward signal.
Systems that “want” certain actions to be taken
Agents that “want” some resource, and can get more of that resource in return for some actions than others.
Bonus Exercise: for each of your three examples from the challenge, explain:
What other counterintuitive actions are getting rewarded?
This bonus exercise is great blog-post fodder!
Motivation
Using a framing tool is sort of like using a trigger-action pattern: the hard part is to notice a pattern, a place where a particular tool can apply (the “trigger”). Once we notice the pattern, it suggests certain questions or approximations (the “action”). This challenge is meant to train the trigger-step: we look for novel examples to ingrain the abstract trigger pattern (separate from examples/contexts we already know).
The Bonus Exercise is meant to train the action-step: apply whatever questions/approximations the frame suggests, in order to build the reflex of applying them when we notice incentives.
Hopefully, this will make it easier to notice when an incentive frame can be applied to a new problem you don’t understand in the wild, and to actually use it.
Aqueducts. Water “wants” to flow downhill. Humans want the water to flow from remote mountain springs into our cities and homes. So, we provide an incentive gradient: the water can go (locally) downhill fastest by following our aqueduct.
Could the water go down faster by some other route? Well, it could spray through a leak in the pipe/channel, for instance.
Yudkowsky claims that every cause wants to become a cult—i.e. there is a positive feedback loop which amplifies cult-like aspects of causes. Leaders are incentivized to play along with this—i.e. to “give the cause what it wants” in exchange for themselves being in charge. Note that this incentive pressure applies regardless of whether the leaders actually want their cause to become a cult.
What would it look like for a cause's leaders to satisfy this incentive via some other strategy? Basically, they could take the "extreme" members who want to push that positive feedback loop and give them some position or outlet which satisfies the relevant group-status needs without actually pushing marginal people out of the group.
Filters (the physical kind, like coffee filters). Filters select for very small particles, so if we look at what makes it through, it’s “incentivized” to be small.
But things could satisfy the incentive (i.e. sneak through the filter) in other ways—e.g. a microorganism could literally eat its way through, or weakly-soluble salts could dissolve and re-precipitate on the other side.
The second one is a good example of a selection incentive: the incentives are there regardless of what we want. I like the counterintuitive actions in the third example: organisms are deliberately trying to achieve high reward by taking counterintuitive actions.
It's also interesting to consider that the organisms that eat through filters aren't always doing it deliberately. Some may have evolved to attack a particular filter, possibly after recognizing it, without "knowing" what's on the other side: filters tend to exist precisely when there is some valuable resource behind them.
Nectar. Flowers that attract pollinators survive better, and they accomplish this by providing a reward for behavior that enhances their reproductive function. This is an interesting distinction in thinking about symbiosis. Symbiotic relationships can be "accidental," in that behavior that benefits organism A also happens to benefit organism B, or "incentivized," where organism B has evolved to produce a reward to motivate organism A's beneficial behavior. An example of the accidental kind is the red-billed oxpecker, which eats ticks and other insects off the backs of black rhinos: there is no need for an evolved incentive to motivate the oxpecker's behavior. An unintended consequence is that apiculture for honey leads to human cultivation of flowers and their pollinators, increasing the reward for high-nectar-producing flowers.
The hedonic treadmill. Short-lived hits of pleasure keep you motivated to continue working, so that you can afford more and bigger hits. We're highly familiar with the problematic aspects of this psychological structure. What if instead, we sought to use it for good? This suggests that we'd try to actively pursue more small, somewhat costly hits of pleasure throughout the day, in order to motivate ourselves to work harder. Instead of encouraging people to increase their wealth through saving and austerity, we'd encourage them to spend on themselves more often, creating their own carrot to chase.
Angiogenesis. Various signaling molecules can trigger the production of new blood vessels, which supply nutrients to the local cell population—a reward simply for announcing their need for more resources. Cancer cells secrete VEGF and other growth factors to stimulate angiogenesis. The body seems to rely on the immune system to police itself for cancerous growth, "trusting" that cells request angiogenesis only when needed.
I particularly like the first and third ones. John and I talked about incentive mechanisms in the biological world (organisms, biological evolution, etc.), and these are really good examples.