Of course, but in relative terms he's still right; it's just easier to see when you think from the point of view of the hungry hobo (or a peasant in the developing world).
From the point of view of a middle-class person in a rich country, looking at hypothetical bets where the potential loss is usually tiny relative to our large net worth plus human capital (upwards of $400-500k), of course we don't feel like we can mostly dismiss utility above a few hundred thousand dollars, because we're already there.
Consider a bet with the following characteristics: you are a programmer making $60k or so a year, a couple of years out of school. You have a 90% probability of winning. If you win, you get 10 million dollars in our existing world. If you lose (10%), you will be swapped into a parallel universe where your skills are completely worthless, you know no one, and you are essentially in the position of the hungry hobo. You don't actually lose your brain, so you could potentially figure out how to make ends meet and even become wealthy in this new society, but you start with zero human capital: you don't know how to get along in it any better than someone raised in a Mumbai slum by typical poor parents gets along in this world.
So do you take that bet? I certainly wouldn’t.
Is there any amount of money we could put in the win column that would mean you take the bet?
When you start considering bets where a loss actually puts you in the hungry-hobo position, it becomes clearer that the utility of money above a few hundred thousand dollars is pretty small beer compared to what's going on at the lower tiers of Maslow's hierarchy.
Which is another way of saying that pretty much everyone who can hold down a good job in the rich world has it really freaking good. The difference between $500k and $50 million (enough to live like an entertainer or big-time CEO without working), seen from the point of view of someone with very low human capital, looks a lot like the proverbial academics having bitter arguments over who gets the slightly nicer office.
This also means that even log utility or log(log) utility isn't risk averse enough for most people when it comes to bets that put a large probability mass way above normal middle-class net worth plus human capital, together with any significant probability of dropping below rich-country, above-poverty levels.
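To make that concrete, here's a quick back-of-the-envelope sketch in Python. The wealth figures are my own assumptions, not anything from the comment: roughly $500k of current net worth plus human capital, and something like $5k of effective wealth in the hungry-hobo outcome. Both log and log(log) utility happily accept the parallel-universe bet:

```python
import math

p_win = 0.9
current = 500_000           # assumption: programmer's net worth + human capital
win = current + 10_000_000  # the $10M prize on top of current wealth
lose = 5_000                # assumption: effective wealth as the hungry hobo

utilities = {
    "linear": lambda w: w,
    "log": math.log,
    "log(log)": lambda w: math.log(math.log(w)),
}

for name, u in utilities.items():
    eu_bet = p_win * u(win) + (1 - p_win) * u(lose)
    verdict = "take" if eu_bet > u(current) else "refuse"
    print(f"{name:>8}: EU(bet) = {eu_bet:12.3f} vs EU(now) = {u(current):12.3f} -> {verdict}")
```

All three functions say "take", so if you share the intuition that the bet should be refused, your risk aversion near the bottom of Maslow's hierarchy is stronger than even log(log) utility implies.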
Fortunately, for most of the bets we are actually offered in real life, linear is a good enough approximation for small ones, and log or log-log utility is a plenty good enough approximation for even the largest swings (like starting a startup vs. a salaried position), as long as we attach some value to directing wealth we would not consume, and there is a negligible added probability of the kind of losses that would take us completely out of our privileged status.
In most real life cases any problems with the model are overwhelmed by our uncertainties in mapping the probability distribution.
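As a quick check on the claim that linear utility is fine for the small bets: here's a minimal sketch, again assuming a $500k baseline of net worth plus human capital (my assumption). Under log utility, a small 50-50 win-$110/lose-$100 bet is priced within a cent or so of its $5 expected value:

```python
import math

w = 500_000            # assumption: baseline net worth + human capital
gain, loss = 110, 100  # a small 50-50 bet

# Certainty equivalent under log utility: the sure gain with the same utility
eu = 0.5 * math.log(w + gain) + 0.5 * math.log(w - loss)
certainty_equivalent = math.exp(eu) - w
expected_value = 0.5 * gain - 0.5 * loss

print(f"expected value:  ${expected_value:.3f}")
print(f"log-utility CE:  ${certainty_equivalent:.3f}")  # ~$4.99, nearly linear
```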
Beware of the typical mind fallacy :-) I will take the bet.
Note that, say, a middle-class maker of camel harnesses who is forced to flee his country of Middlestan because of a civil war and who finds himself a refugee in the West is more or less in the position of your “hungry hobo”.
This also means that even log utility or log(log) utility isn’t risk averse enough for most people
This is true, but that’s because log utility is not sufficient to explain risk aversion.
Fortunately, for most of the bets we are actually offered in real life, linear is a good enough approximation for small ones, and log or log-log utility is a plenty good enough approximation for even the largest swings
I disagree. Consider humans outside of the middle and upper-middle classes of the sheltered West; that is, most of humanity.
In most real life cases any problems with the model are overwhelmed by our uncertainties in mapping the probability distribution.

That is also true.
log utility is not sufficient to explain risk aversion.
In fact it’s pretty well established that typical levels of risk aversion cannot be explained by any halfway-credible utility function. A paper by Matthew Rabin shows, e.g., that if you decline a bet where you lose $100 or gain $110 with equal probability (which many people would) and this is merely because of the concavity of your utility function, then subject to rather modest assumptions you must also decline a bet where you lose $1000 or gain all the money in the world with equal probability.
There was some discussion of that paper and its ideas on LW in 2012. Vaniver suggests that the results may be more a matter of eliciting people’s preferences in a lazy way that doesn’t get at their real, hopefully better thought out, preferences. (But I fear people’s actual behaviour matches that lazy preference-elicitation pretty well.) There are some other interesting comments there, too.
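For anyone who wants to see the mechanics, here's a toy numeric sketch in the spirit of Rabin's calibration argument. It is my own construction, not Rabin's exact proof, and it only enforces (weak) rejection of the small bet at block-aligned wealth levels; the point is that turning down lose-$100/gain-$110 across wealth levels forces marginal utility to decay geometrically, so even an unbounded prize has bounded utility:

```python
# Toy version of Rabin's calibration argument: build a roughly least
# risk-averse utility consistent with (weakly) rejecting a 50-50
# lose-$100/gain-$110 bet at block-aligned wealth levels, then price a
# 50-50 lose-$1,000/gain-$G bet under it.

def marginal_utility(d):
    """Marginal utility of the dollar at signed offset d from current
    wealth, normalised so the $100 just below current wealth is worth 1."""
    if d >= 0:
        return (10 / 11) ** (d // 110 + 1)  # each $110 gained is worth 10/11 of the last
    return (11 / 10) ** ((-d - 1) // 100)   # each $100 lost hurts 11/10 as much as the last

def utility_change(amount):
    """Total utility of gaining (amount > 0) or losing (amount < 0) dollars."""
    if amount >= 0:
        return sum(marginal_utility(d) for d in range(amount))
    return -sum(marginal_utility(-d) for d in range(1, -amount + 1))

loss = utility_change(-1000)
for gain in (10_000, 100_000, 1_000_000):
    delta = 0.5 * utility_change(gain) + 0.5 * loss
    print(f"gain ${gain:>9,}: expected utility change = {delta:8.1f}")

# Gains are capped at 110 * sum((10/11)**k for k >= 1) = 1100 utils even
# for an infinite prize, while losing $1,000 costs about 1594 utils, so
# the bet is declined no matter how large the upside.
```

Note that all the force comes from assuming the small bet is rejected across a wide band of wealth levels, which is exactly what the "rather modest assumptions" exchange below pokes at.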
Yep, I've been mentioning that on LW over and over again, but people seem reluctant to accept that.

Some of those conclusions are not as absurd as Rabin appears to believe; I think he's typical-minding. Most people will pick a 100% chance of $500 over a 15% chance of $1M.

Prior or posterior to the evidence provided by the other person's willingness to offer the bet? ;-)
rather modest assumptions
Such as assuming that that person would also decline the bet even if they had 10 times as much money to start with? That doesn’t sound like a particularly modest assumption.
I'm pretty sure I would also take that bet. I don't think I'd take an equivalent bet now, though. Compared with the hypothetical twentysomething earning $60k/year I'm older (hence there's less time to recover if I get unlucky) and richer (hence gaining $10M is a smaller improvement), and I have a family who would suffer if transported with me into the parallel world and whom I would miss if they weren't.