Taken completely and literally, yes, the framework posits that humans are utility maximizers. The problem, of course, is that they aren't. But that doesn't mean I can't use revealed preference strategies to extract information; they're quite useful. It's sort of a fake framework, and sort of not. Using it doesn't require me to actually make the mistake of treating people as utility maximizers, especially once we pop out of the framework. That's a trap.
Noticing that humans have a choice to do X or Y (or to do or not do X), and that they usually do X, is valuable information; that's a big insight and a useful technique. So when we do what you did here and word our findings carefully, we can extract useful information: they've decided that X is or isn't a good idea. The distinction in your very good wording is key here: the costs of learning more about whether far-future investment would be a good idea, rather than the actual costs of far-future investment. The same goes for recognizing that the main barrier to IQ tests is social costs rather than direct financial costs. These are mistakes I think Hanson and Caplan do make in their post/book.
OK, that makes sense. Though to be clear, I don't think it's obvious that most people judge the costs of learning more about whether far-future investment would be a good idea to exceed its benefits; I think most people just don't think about the issue and aren't motivated to. So the revealed preference framework is useful, but it tells you more about what motivates people than about what they care about.