I agree that he’s using it differently, but I think I’m used to it, and I think all the logic still applies. When economists talk about revealed preference in general I think they’re using it mostly the way that Robin is, and I often use it that way as well.
Isn’t the point of the revealed preference framework to try to view humans as expected-utility-maximizers, even if it’s not possible to do this perfectly? So it seems to me you can’t object that humans aren’t actually expected-utility-maximizers, while remaining in that framework. In particular it seems to me that in the sense of revealed preference, people find the social costs of IQ tests to be higher than their benefits, and the costs of learning more about whether far future investment would be a good idea to be higher than its benefits.
Using revealed preferences to treat people as expected-utility maximizers seems to drop some very important information about people.
I’m imagining a multiplayer game that has settled into a bad equilibrium, with multiple superior equilibrium points that are far away. If we looked at the revealed preferences of all of the actors involved, it would probably look like everyone “prefers” to be in the bad equilibrium.
If you’re thinking about how to intervene on this game, the revealed preferences frame results in “No work to be done here, people are all doing what they actually care about.” Whereas if you asked the actors what they wanted, you might learn something about superior equilibria that everybody would prefer.
In the revealed preference framework it doesn’t look like people “prefer” to be in the bad equilibrium, since no one has the choice between the bad equilibrium and a better equilibrium. The only way the revealed preference framework could compare two different equilibria is by extrapolation: figure out what people value based on the choices they make when they are in control, and then figure out which of the two equilibria is ranked higher according to those revealed values. Of course this may or may not be possible in any given circumstance, just like it may or may not be possible to get good answers by asking people.
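To make the extrapolation point concrete, here is a minimal sketch (with hypothetical payoff numbers, not taken from anything above) of a two-player stag-hunt-style game: inside the bad equilibrium each player’s observed choice is the “bad” action, yet the same revealed payoffs rank the other equilibrium higher.

```python
# Hypothetical payoff matrix: two pure Nash equilibria, one Pareto-superior.
payoffs = {  # (my_action, other_action) -> my payoff
    ("stag", "stag"): 4,   # good equilibrium: everyone better off
    ("stag", "hare"): 0,   # risky if the other player defects
    ("hare", "stag"): 3,
    ("hare", "hare"): 3,   # bad equilibrium: safe but inferior
}

def best_response(other_action):
    """The choice an observer would actually see, given the other's play."""
    return max(["stag", "hare"], key=lambda a: payoffs[(a, other_action)])

# Inside the bad equilibrium, everyone's observed choice is "hare",
# so a naive reading says they "prefer" it...
print(best_response("hare"))  # -> "hare"

# ...but extrapolating from the revealed payoffs, both players rank the
# other equilibrium higher.
print(payoffs[("stag", "stag")] > payoffs[("hare", "hare")])  # -> True
```

The point being that no one ever faces the choice “bad equilibrium vs. good equilibrium” directly; the comparison only falls out of the values revealed by the choices they do face.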
I think the revealed preference frame is more useful if you don’t phrase it as “this is what people actually care about” but rather “this is what actually motivates people”. People can care about things that they aren’t much motivated by, and be motivated by things they don’t much care about (e.g. the lotus thread). In that interpretation, I don’t think it makes sense to criticize revealed preference for not taking into account all information about what people care about, since that’s not what it’s trying to measure.
Okay, yeah, using the revealed preference framework doesn’t inherently lead to not being able to differentiate between equilibria. In my head, I was comparing seeing the “true payoff matrix” to a revealed preference investigation, when I should have been comparing it to “ask people what the payoff matrix looks like”.
Several counterproductive ways someone could claim “People don’t actually care about X” still come to mind, but I no longer think that’s specifically a problem of the revealed preference frame.
Taken completely and literally, yes, the framework posits that humans are utility maximizers. The problem, of course, is that they aren’t. But that doesn’t mean I can’t use revealed preference strategies to find out information; they’re quite useful, sort of a fake framework and sort of not. And using them doesn’t require me to actually make the mistake of treating people as utility maximizers, especially not once we pop out of the framework. That’s a trap.
Noticing that humans have a choice to do X or Y (or to do or not do X), and that they usually do X, is great information, and that’s a big insight and a useful technique. So when we do what you did here and word our findings carefully, we can extract useful information: they’ve decided that X is or isn’t a good idea. The distinction in your very good wording is key here: the costs of learning more about whether far future investment would be a good idea, rather than the actual costs of far future investment. The same goes for recognizing that IQ tests have social costs, rather than direct financial costs, as the main barrier. These are mistakes I think Hanson and Caplan do make in their post/book.
OK, that makes sense. Though I want to be clear, I don’t think it’s obvious that most people think the costs of learning more about whether far future investment would be a good idea are higher than its benefits, I think most people just don’t think about the issue and aren’t motivated to. So the revealed preference framework is useful, but it tells you more about what motivates people than about what they care about.