My local rationality group assigned this post as reading for our meetup this week, and it generated an interesting discussion.
I’m not an AI or decision theory expert. My only goal here is to argue that some of these claims are poor descriptions of actual human behavior. In particular, I don’t think humans have consistent preferences about rare, negative events. I argue this by working backwards from the examples in the post’s discussion of the Axiom of Continuity. I still think the post is valuable in other ways.
Let’s look at an example: If you prefer $50 in your pocket to $40, the axiom says that there must be some small ε > 0 such that you prefer a probability of 1 − ε of $50 and a probability of ε of dying today to a certainty of $40. Some critics seem to see this as the ultimate reductio ad absurdum for the VNM theory; they seem to think that no sane human would accept that deal.
(1) For some humans in some situations (e.g. people in extreme poverty), this is a good deal, and some of those people would take it.
(2) People who wouldn’t take the deal would refuse it not because they did an expected-utility calculation, but because of an aversive emotional (irrational) reaction to the possibility of dying, no matter how small ε is: the psychological disturbance caused by merely considering the possibility isn’t worth $10. This describes most people.
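For concreteness, here is a minimal sketch of the arithmetic an expected-utility maximizer would run on the quoted deal. The $10 million price on one’s own life is a purely illustrative assumption on my part, not a figure from the post; the conclusion scales with whatever number you put there:

```python
# Sketch of the quoted deal for an agent who prices their own life in dollars.
# VALUE_OF_LIFE is an illustrative assumption, not a figure from the post.

VALUE_OF_LIFE = 10_000_000  # hypothetical dollar cost of dying today
CERTAIN_PAYOFF = 40         # the sure thing
LOTTERY_PAYOFF = 50         # the prize if you survive

def lottery_value(epsilon: float) -> float:
    """Expected dollar value of: (1 - epsilon) chance of $50, epsilon chance of death."""
    return (1 - epsilon) * LOTTERY_PAYOFF - epsilon * VALUE_OF_LIFE

# Break-even: (1 - e) * 50 - e * V = 40  =>  e = 10 / (50 + V)
break_even = (LOTTERY_PAYOFF - CERTAIN_PAYOFF) / (LOTTERY_PAYOFF + VALUE_OF_LIFE)
print(f"break-even epsilon ~= {break_even:.1e}")  # ~= 1.0e-06
print(lottery_value(1e-7) > CERTAIN_PAYOFF)       # True: take the lottery
print(lottery_value(1e-5) > CERTAIN_PAYOFF)       # False: take the $40
```

Point (2) is that essentially nobody runs this arithmetic; for most people, the refusal happens before any number is considered.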
Eliezer was surely not the first to observe that this preference is exhibited each time someone drives an extra mile to save $10.
I haven’t read The “Intuitions” Behind “Utilitarianism”, and because this claim is compressed into a single sentence, I assume it unintentionally misrepresents Eliezer’s views. Regardless, I think the claim itself is incorrect.
I’d bet that almost no one (myself included) has an accurate estimate of the risk of a car accident from driving that extra mile, and this could be shown empirically by simply asking people*.
Thus, since people don’t know the basic data needed to do the expected-value calculation, we must conclude that they are making an emotional decision based on irrational, conditioned priors and associations: ‘taking pride in saving money’, ‘overall comfort with driving’, ‘does $10 feel like a lot of money to me’**, and so on. Moreover, car accidents happen infrequently enough that the conditioning mechanisms in our brains never get the opportunity to build a meaningful association between driving a marginal mile and possibly getting into an accident. So this is not a “preference” of the kind a rational economic agent can exhibit.
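For what it’s worth, the expected-value calculation people aren’t doing would look something like the sketch below. Both inputs are rough assumptions on my part: roughly 1.3 deaths per 100 million vehicle miles is a commonly cited US ballpark (it varies by year and source), and the $10 million value of a life is the same illustrative figure as above:

```python
# Back-of-the-envelope expected cost of one extra mile of driving,
# counting fatality risk only. Both inputs are rough assumptions:
# ~1.3 deaths per 100M vehicle miles is a commonly cited US ballpark,
# and $10M is an illustrative value of a life, not a measured preference.

DEATHS_PER_VEHICLE_MILE = 1.3 / 100_000_000
VALUE_OF_LIFE = 10_000_000

expected_cost = DEATHS_PER_VEHICLE_MILE * VALUE_OF_LIFE
print(f"expected fatality cost per extra mile: ${expected_cost:.2f}")  # ~$0.13
```

On these assumed numbers the extra mile “wins” easily (ignoring injuries, vehicle wear, and the value of your time), but that’s beside the point: almost nobody making the drive knows either input, which is exactly why I’d call the decision emotional rather than calculated.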
I do think something like this is true for most situations where real humans must make a decision involving a very small chance of a very bad outcome. Either you go rat-brained, make a spreadsheet, and start reading papers on car crash statistics, or you feel out a guess based on emotional priors and heuristics that, because the bad outcome is so rare, likely have very little grounding in its genuine probability. So I don’t think it even makes sense to apply the Axiom of Continuity to real people, because real people don’t have consistent enough preferences about rare events. Our brains couldn’t have evolved to make that kind of judgement reliably, because rare events are… well, rare.
* Survey people with question(s) like: “If you leave your home now, drive for 1 mile, and then return home, what is the chance that you will get into an accident during your drive?”—that said, there are a lot of measurement issues to argue with here.
** Compared to what a rational decision theory would predict, I think too many actual people become (more) wealthy yet continue to spend time “saving money” they don’t need to save. The only explanation I can see is that these people have a preference for the feeling of “saving money”, and it seems a mistake to call that rational.