Using revealed preferences to treat people as expected-utility maximizers seems to drop some very important information about people.
I’m imagining a multiplayer game that has settled into a bad equilibrium, and there are multiple superior equilibrium points, but they are far away. If we looked at the revealed preferences of all of the actors involved, it would probably look like everyone “prefers” to be in the bad equilibrium.
If you’re thinking about how to intervene on this game, the revealed preferences frame results in “No work to be done here, people are all doing what they actually care about.” Whereas if you asked the actors what they wanted, you might learn something about superior equilibria that everybody would prefer.
In the revealed preference framework it doesn’t look like people “prefer” to be in the bad equilibrium, since no one has the choice between the bad equilibrium and a better equilibrium. The only way the revealed preference framework could compare two different equilibria is by extrapolation: figure out what people value based on the choices they make when they are in control, and then figure out which of the two equilibria is ranked higher according to those revealed values. Of course this may or may not be possible in any given circumstance, just like it may or may not be possible to get good answers by asking people.
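Here’s a minimal sketch of what I mean by that extrapolation step. The payoff numbers and the stag-hunt setup are my own toy example, not anything from the thread: in the bad equilibrium each player’s revealed choice is the bad action, but the same revealed payoffs still rank the other equilibrium higher.

```python
# Toy two-player stag hunt. Payoffs are (row player, column player).
# Both (stag, stag) and (hare, hare) are equilibria, but (stag, stag)
# is strictly better for everyone.
payoffs = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

actions = ["stag", "hare"]

def best_response(other_action, payoff_of):
    """What a player would choose, holding the other player's action fixed."""
    return max(actions, key=lambda a: payoff_of(a, other_action))

# Revealed preferences inside the bad equilibrium: each player's observed
# choice is "hare", so it *looks* like they "prefer" the bad equilibrium.
row_choice = best_response("hare", lambda a, b: payoffs[(a, b)][0])
col_choice = best_response("hare", lambda a, b: payoffs[(b, a)][1])
print(row_choice, col_choice)  # hare hare

# Extrapolation: use the same revealed payoffs to rank whole equilibria.
def rank_equilibria(equilibria):
    return sorted(equilibria, key=lambda e: sum(payoffs[e]), reverse=True)

print(rank_equilibria([("hare", "hare"), ("stag", "stag")]))
# [('stag', 'stag'), ('hare', 'hare')] -- the better equilibrium comes out on top
```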
I think the revealed preference frame is more useful if you don’t phrase it as “this is what people actually care about” but rather “this is what actually motivates people”. People can care about things that they aren’t much motivated by, and be motivated by things they don’t much care about (e.g. the lotus thread). In that interpretation, I don’t think it makes sense to criticize revealed preference for not taking into account all information about what people care about, since that’s not what it’s trying to measure.
Okay, yeah, using the revealed preference framework doesn’t inherently lead to not being able to differentiate between equilibria. In my head, I was comparing seeing the “true payoff matrix” to a revealed preference investigation, when I should have been comparing it to “ask people what the payoff matrix looks like”.
There still seem to be several counterproductive ways I can imagine someone claiming, “People don’t actually care about X”, but I no longer think that’s specifically a problem of the revealed preference frame.