In the revealed preference framework it doesn’t look like people “prefer” to be in the bad equilibrium, since no one has the choice between the bad equilibrium and a better equilibrium. The only way the revealed preference framework could compare two different equilibria is by extrapolation: figure out what people value based on the choices they make when they are in control, and then figure out which of the two equilibria is ranked higher according to those revealed values. Of course this may or may not be possible in any given circumstance, just like it may or may not be possible to get good answers by asking people.
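To make that extrapolation step concrete, here is a minimal sketch with a hypothetical stag-hunt-style payoff matrix (the action names and numbers are invented for illustration): both cells are equilibria, so behaviour inside either one never reveals a preference between them, but the payoffs revealed by the choices players do control can still rank the two.

```python
# Hypothetical 2x2 coordination game; payoffs and labels are invented.
# Each entry is (row player's payoff, column player's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (4, 4),   # the "good" equilibrium
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (2, 2),   # the "bad" equilibrium
}

def is_equilibrium(row_action, col_action):
    """True if neither player gains by unilaterally deviating."""
    row_payoff, col_payoff = PAYOFFS[(row_action, col_action)]
    for alt in ("cooperate", "defect"):
        if PAYOFFS[(alt, col_action)][0] > row_payoff:
            return False
        if PAYOFFS[(row_action, alt)][1] > col_payoff:
            return False
    return True

equilibria = [cell for cell in PAYOFFS if is_equilibrium(*cell)]
# Both (cooperate, cooperate) and (defect, defect) are equilibria, so no one
# ever faces a direct choice between them.  The extrapolation step: use the
# values revealed by the choices people *do* make to rank the equilibria.
ranked = sorted(equilibria, key=lambda cell: sum(PAYOFFS[cell]), reverse=True)
print(ranked)  # [('cooperate', 'cooperate'), ('defect', 'defect')]
```

Whether the revealed values actually pin down a ranking like this depends on the situation, which is the caveat above.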
I think the revealed preference frame is more useful if you don’t phrase it as “this is what people actually care about” but rather “this is what actually motivates people”. People can care about things that they aren’t much motivated by, and be motivated by things they don’t much care about (e.g. the lotus thread). In that interpretation, I don’t think it makes sense to criticize revealed preference for not taking into account all information about what people care about, since that’s not what it’s trying to measure.
Okay, yeah, using the revealed preference framework doesn’t inherently lead to not being able to differentiate between equilibria. In my head, I was comparing seeing the “true payoff matrix” to a revealed preference investigation, when I should have been comparing it to “ask people what the payoff matrix looks like”.
There still seem to be several counterproductive ways I can imagine someone claiming, “People don’t actually care about X”, but I no longer think that’s specifically a problem of the revealed preference frame.