People do this as well. In a certain country they wanted to eliminate corruption from public construction projects, so they created a numbers-based evaluation system for tenders. Differences in the price offered were taken into account with a weight of 1, and differences in the offered penalties / liquidated damages (I am not sure which is the best English term) with a weight of 6. Basically, the construction company says: if the project is late, I am willing to pay X amount of penalty per day. Most companies offer something like 0.1% of the price per day. One company offered 2%, which means that if they were 10–15 days late, their whole profit would be gone; and since this was taken into account with a weight of 6, they could offer an outrageous price and the rules still forced the government to accept their offer.

It turned out this was not just a bold gaming of the rules, it was corruption as well: there was no law saying that the penalty offered must actually be enforced in case of late delivery, and the government’s representative could decide to demand a smaller penalty if he felt the vendor was not entirely at fault. So most likely they simply planned to bribe that official in case they were late. Thus the new rules merely moved the bribery to a different stage of the process.
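A toy model makes the exploit concrete. The comment doesn’t give the exact scoring formula, so the normalization below is my own assumption; the point is only that a weight of 6 on the penalty term lets an extreme penalty offer swamp any price difference:

```python
# Hypothetical weighted tender scoring, loosely modeled on the comment above.
# Assumption: each criterion is normalized against the best offer, then
# weighted (price x1, offered late-delivery penalty x6). Higher score wins.

def score(bid, best_price, best_penalty):
    price, penalty = bid
    price_score = best_price / price        # in (0, 1], 1.0 = cheapest bid
    penalty_score = penalty / best_penalty  # in (0, 1], 1.0 = highest penalty offer
    return 1 * price_score + 6 * penalty_score

bids = {
    "honest bidder": (100, 0.001),  # normal price, 0.1%/day penalty
    "gaming bidder": (300, 0.020),  # triple the price, 2%/day penalty
}
best_price = min(price for price, _ in bids.values())
best_penalty = max(penalty for _, penalty in bids.values())

for name, bid in bids.items():
    print(name, round(score(bid, best_price, best_penalty), 2))
# gaming bidder: 1 * (100/300) + 6 * (0.020/0.020) = 6.33
# honest bidder: 1 * (100/100) + 6 * (0.001/0.020) = 1.30
# The gamed bid wins at three times the price.
```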
When humans are motivated entirely by external incentives, like “fsck everything, let’s make as much money on this project as possible”, they behave just like the vibrating AI-Messi.
Which means maybe we need to figure out what the heck the inner motivation is in humans that makes them want to do the sensible thing, and how to emulate it.
Another famous example: at one time, somewhere in India, there were a lot of cobras, which are dangerous. So the government (it happened to be the British Raj at the time) decided to offer a bounty for dead cobras. That worked for a while, until people figured out that they could breed cobras for the bounty. When the government worked out what was going on, it cancelled the bounty. So the cobra breeders released all their now-valueless cobras into the wild.
(According to Wikipedia this particular instance isn’t actually well documented, but a similar one involving rats in Hanoi is.)
This effect also exists in software development:
http://thedailywtf.com/articles/The-Defect-Black-Market
Famous Dilbert cartoon on this topic.
I see this failure in analysis all the time.
When people want to change the behavior of others, they devise some policy or incentive that would encourage the change they desire, but they never stop to ask how else people might react to that change in incentives.
Anyone ever come across any catchy name or formulation for this particular failure mode?
Perverse incentives.
Cobra effect (see the Wikipedia page I linked before). Law of unintended consequences. (Perhaps the former is a little too narrow and the latter a little too broad.)
Isn’t this an example of a reflection problem? We induce a change in a system, in this case an evaluation metric, and now we must predict not only the next iteration but the stable equilibria of that system.
Goodhart’s Law
Oops, double post; V_V already said that.
I believe this is called a “Red Queen race”.
This is not correct, at least in common usage.
A Red Queen’s Race is an evolutionary competition in which absolute position does not change. The classic example is the arms race between foxes and rabbits that results in both becoming faster in absolute terms, while the rate of predation stays fixed. (The origin is Lewis Carroll: “It takes all the running you can do, just to stay in the same place.”)
You mean relative, not absolute.
I’ve also seen a more general interpretation: the Red Queen situation is where staying still (doing nothing) makes you worse off as time passes; you need to run forward just to stay in the same place.
Yes, yes I did. Thanks for the correction.
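To make the “relative position unchanged” point concrete, here is a minimal toy simulation (my own illustration, not from the thread): both populations improve in absolute terms every generation, but the predation rate depends only on their relative speeds, so it never moves.

```python
# Toy Red Queen dynamic: absolute speeds rise, relative position is constant.
fox_speed, rabbit_speed = 10.0, 10.0
for generation in range(5):
    fox_speed *= 1.05     # foxes get 5% faster each generation
    rabbit_speed *= 1.05  # rabbits get 5% faster too
    predation_rate = fox_speed / (fox_speed + rabbit_speed)  # stays at 0.5
    print(generation, round(fox_speed, 2), round(predation_rate, 3))
```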
I think this is analogous to what’s happening here—you create better incentives, they create better ways to get around those incentives, nothing changes. I didn’t know that this wasn’t the common usage, as I got it from this Overcoming Bias post:
http://www.overcomingbias.com/2014/06/bias-is-a-red-queen-game.html
This is known as Goodhart’s law or Campbell’s law.