As expected value ≠ expected utility, it’s not the case that you should always buy a ticket if expected value is positive. It’s a standard result that people actually treat the utility of wealth roughly logarithmically: i.e. a net worth of $1,000,000,000 is better than $100,000,000, but not by much compared to how much better $100,000,000 is than a net worth of $1,000.
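To make the rough-logarithm intuition concrete, here’s a quick numeric sketch (base-10 logs are my arbitrary choice; nothing above fixes the base):

```python
import math

# Log-utility illustration: utility gaps between wealth levels.
for w in (1_000, 100_000_000, 1_000_000_000):
    print(f"${w:>13,} -> log10 utility {math.log10(w):.1f}")

# $1,000 -> 3.0, $100M -> 8.0, $1B -> 9.0:
# the step from $100M to $1B (1 unit) is small next to
# the step from $1,000 to $100M (5 units).
```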
To simplify the lottery situation in the case of extreme probabilities and payouts, say that Omega offers a lottery only to you (no worries about split jackpots), in which there are exactly 1,000,000 tickets, each costing $1, and among them there is one winning ticket that pays out $2,000,000.
Now if you can scrounge up a million dollars to buy every ticket, you make a tidy $1 million profit (less interest from your backers) with zero risk, so the expected utility is very positive for this strategy.
If, however, you can only get $100,000 together, you shouldn’t buy any tickets (unless you’re a millionaire to start), since the disutility to you of a 90% chance of losing $100,000 (and having a pretty crappy life being so far in debt) outweighs the utility of a 10% chance of winning $2 million (and a nice standard of living).
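A rough check of both cases under a log10-of-final-wealth utility. The $110,000 bankroll in the second case is my own hypothetical stand-in, since the borrowed-money version ends in negative wealth, which a log function can’t even evaluate:

```python
import math

def eu_log10(outcomes):
    """Expected log10 utility over (probability, final_wealth) pairs."""
    return sum(p * math.log10(w) for p, w in outcomes)

# Millionaire buys every ticket: a certain $2,000,000 payout.
print(eu_log10([(1.0, 2_000_000)]), ">", eu_log10([(1.0, 1_000_000)]))  # 6.30 > 6.00: buy

# Hypothetical: $110,000 to your name, $100,000 of it spent on tickets.
# 10% chance you end with $2,010,000; 90% chance you end with $10,000.
gamble = eu_log10([(0.1, 2_010_000), (0.9, 10_000)])
print(gamble, "<", eu_log10([(1.0, 110_000)]))  # ~4.23 < ~5.04: don't buy
```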
Is that a standard result, or is it just a standard assumption? I’ve never heard anything more precise than declining marginal utility.
Logarithmic u-functions have an uncomfortable requirement that you must be indifferent to your current wealth and a 50-50 shot at doubling or halving it (e.g. doubling or halving every paycheck/payment you get for the rest of your life). Most people I know don’t like that deal.
That’s only a requirement for risk-neutral people. Most people you know are not risk-neutral.
Logarithmic utility functions are already risk-averse by virtue of their concavity. The expected value of a 50% chance of doubling or halving is a 25% gain.
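Both claims check out numerically (the starting wealth is arbitrary; the identity holds for any positive wealth):

```python
import math

w = 100.0  # arbitrary starting wealth

# Expected *wealth* after a 50-50 double-or-halve: a 25% gain.
print((0.5 * (2 * w) + 0.5 * (w / 2)) / w)  # 1.25

# Expected *log* wealth is unchanged, since
# log(2w) + log(w/2) = log(w^2) = 2*log(w),
# so a log-utility agent is exactly indifferent to the gamble.
print(0.5 * math.log(2 * w) + 0.5 * math.log(w / 2), math.log(w))  # equal
```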
People are often risk-averse in terms of utility. That is, they would sometimes not take a choice with positive expected value in utility because of the possible risk.
For instance, if you have to choose between A and B, where A is a definite gain of 1 utile and B is a 50% chance of staying the same, and a 50% chance of gaining 2 utiles, both choices have the same expected value, but a risk-averse person would prefer choice A because it has smaller risk.
Nitpick: you put the values in utiles, which should include risk-aversion. If you put the values in dollars or something, I would agree.
No, the whole point is that people can be risk-averse with respect to utility. This seems to be confusing people (my original post got voted down to −2 for some reason), so I’ll try spelling it out more clearly:
Choice X: gain of 1 utile.
Choice Y: no gain or loss.
Choice Z: gain of 2 utiles.
Choice B was a 50% chance of Y and a 50% chance of Z. To calculate the utility of choice B, we can’t just take the expected value of the utilities of its outcomes, because that doesn’t include the risk. For a risk-averse person, choice B has a utility of less than 1, although its expected value is 1.
This would be entirely true if instead of utiles you had said dollars or other resources. As it is, it is false by definition: if two choices have the same expected utility (expected value of the utility function) then the chooser is indifferent between them. You are taking utility as an argument in something like a meta-utility function, which is an interesting discussion to have (which utility function we might want to have) but not the same as standard decision theory.
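To sketch the distinction: with outcomes in dollars and any concave u, risk aversion emerges from the curvature; with outcomes already in utiles, equal expectations mean indifference by definition. The log function and dollar figures below are my own stand-ins:

```python
import math

w = 10_000.0   # hypothetical starting wealth
u = math.log   # any concave utility of dollars will do

# Outcomes in DOLLARS: concavity alone makes the sure thing win.
eu_a = u(w + 1_000)                       # A: sure gain of $1,000
eu_b = 0.5 * u(w) + 0.5 * u(w + 2_000)    # B: 50-50 between $0 and $2,000
print(eu_a > eu_b)  # True: risk aversion falls out of the curvature of u

# Outcomes already in UTILS: expected utility is the whole story,
# so equal expectations mean indifference by definition.
print(0.5 * 0 + 0.5 * 2 == 1.0)  # True
```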
But the utility is the output of your utility function. If you’re not including the risk-aversion cost of choosing B in its expected value in utiles, then you’re not listing the expected value in utiles properly.
I would say that such a person doesn’t have preferences representable by a utility function.
That’s just plain false. Risk-aversion is a valid preference, and can be included as a term in a utility function (at slight risk of circularity, but that’s not really a problem).
ETA: well, the stated units were utils, so risk-aversion should be included, so I think you’re correct.
The expected value of choice B is 1, but the utility of choice B to a risk-averse person would be less than 1. Risk-averse people just don’t equate utility of a choice with the expected value of that choice.
I don’t think opportunities to make choices are usually considered to be in the domain of a utility function. (If I’m wrong, educate me. I’d appreciate it.)
Ok, I looked it up and it looks like you and thomblake (ETA: and Technologos. Thanks for correcting me!) are right: the usual way of doing it is to include risk aversion in the utility function. Sorry about that.
Wikipedia on risk-neutral measures does discuss the possibility of adjusting the probabilities, rather than the utility, when calculating the expected value of a choice, but it looks like that’s usually done for ease of financial calculation.
So, one explanation for why people don’t take the “half or double” gamble is that they do have the log(x) utility function, but don’t behave accordingly because of loss aversion (as opposed to risk aversion).
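One way that might cash out, sketched with a Kahneman-Tversky-style loss-averse weighting of changes in log-wealth (λ = 2.25 is the Tversky & Kahneman (1992) estimate; the framing is mine):

```python
import math

LAMBDA = 2.25  # loss-aversion coefficient estimated by Tversky & Kahneman (1992)

def loss_averse(x):
    """Value of a change x in log-wealth; losses loom LAMBDA times larger."""
    return x if x >= 0 else LAMBDA * x

gain, loss = math.log(2), -math.log(2)  # double-or-halve, in log-wealth terms

# A pure log-utility agent is indifferent...
print(0.5 * gain + 0.5 * loss)  # 0.0

# ...but a loss-averse evaluation of the same changes says decline.
print(0.5 * loss_averse(gain) + 0.5 * loss_averse(loss))  # < 0
```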
The post is technical, but Stuart_Armstrong analyzed some special cases of not-quite-utility-function agents.
I’m confused about what is uncomfortable about this, or what function of wealth you would measure utility by.
Naively it seems that logarithmic functions would be more risk-averse than the nth-root functions I have seen Robin Hanson use. How would a u-function be more sensitive to current wealth?
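The naive intuition is right, at least for the double-or-halve gamble; here’s a certainty-equivalent comparison (wealth figure arbitrary):

```python
import math

w = 100_000.0  # arbitrary wealth

# Certainty equivalent of the 50-50 double-or-halve gamble:
# the more risk-averse the utility function, the lower the CE.
for name, u, u_inv in [
    ("sqrt", math.sqrt, lambda y: y ** 2),   # an nth-root function (n = 2)
    ("log",  math.log,  math.exp),
]:
    ce = u_inv(0.5 * u(2 * w) + 0.5 * u(w / 2))
    print(name, ce / w)  # sqrt: 1.125 (takes the gamble); log: 1.0 (indifferent)
```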
I think the uncomfortable part is that bill’s (and my) experience suggests that people are even more risk-averse than logarithmic functions would indicate.
I’d suggest that any consistent human utility function (prospect theory notwithstanding) is somewhere between log(x) and log(log(x))… If I were given the option of a 50-50 chance of squaring my wealth or taking the square root, I would opt for the gamble.
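A quick check of that gamble at the two endpoints (wealth figure arbitrary, as long as it’s well above 1 unit):

```python
import math

def eu(u, w):
    """Expected utility of a 50-50 square-or-square-root of wealth w."""
    return 0.5 * u(w ** 2) + 0.5 * u(math.sqrt(w))

w = 100_000.0  # arbitrary wealth, well above 1 unit

# log: squaring doubles log-wealth, sqrt halves it, so this is exactly
# double-or-halve in log space: expected log-wealth rises 25%. Take it.
print(eu(math.log, w), ">", math.log(w))

# log(log): exactly indifferent, since log(log(w^2)) and log(log(sqrt(w)))
# sit log(2) above and below log(log(w)) respectively.
loglog = lambda x: math.log(math.log(x))
print(eu(loglog, w), "==", loglog(w))
```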
Pretty sure it’s the standard result that people don’t consistently assign utilities to levels of wealth.
Hmm, good question. Quick Google search doesn’t turn up anything...
Got it. This totally answered my question.