If we define a utility function the way you recommend (I don't know whether that's the standard way to do it, but it seems reasonable), then you're just never going to have utility-risk-averse individuals. By definition.
If a lottery pays 1M utils with 0.001 probability of winning, and the ticket costs 900 utils, an agent just wouldn't turn it down: the expected value of playing is 1,000 utils, more than the ticket price. If the agent did turn it down, that means the lottery wasn't actually worth 1M utils, but less, because that's how we determine how much the lottery is worth in the first place.
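The arithmetic behind "wouldn't turn it down" can be sketched in a few lines; the numbers are the ones from the example, and the decision rule is just expected-utility maximization:

```python
# Sketch of the example above: a lottery denominated directly in utils.
payoff_utils = 1_000_000   # prize, in utils
win_prob = 0.001           # probability of winning
ticket_cost = 900          # price, in utils

# An expected-utility maximizer accepts iff the expected gain exceeds the cost.
expected_gain = win_prob * payoff_utils
accepts = expected_gain > ticket_cost
print(expected_gain, accepts)  # 1000.0 True
```

Because the payoffs are already in utils, there is no curvature left for risk aversion to hide in; the comparison of expectations settles the choice.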
It is, however, possible that the utility function is bounded and can never reach 1M utils. I think this may be a source of confusion here: in that case, the agent would turn down any lottery with a ticket price of 1,000 utils and a 0.1% probability of winning, no matter the payoff. This seems to imply that the agent turns down the 1M lottery, but it isn't irrational in this case.
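The bounded case can be made concrete with the same arithmetic; the bound of 500,000 utils below is an arbitrary assumption, chosen only to be below 1M:

```python
# Sketch of the bounded-utility case: if utility is capped at some bound B,
# no payoff can make a 0.1%-chance lottery worth a 1,000-util ticket
# whenever 0.001 * B < 1000, i.e. B < 1,000,000.
bound = 500_000       # assumed cap on achievable utility (B < 1M)
win_prob = 0.001
ticket_cost = 1000

# The best possible expected gain, over every conceivable payoff:
max_expected_gain = win_prob * bound  # 500 utils
rejects = max_expected_gain < ticket_cost
print(max_expected_gain, rejects)  # 500.0 True
```

So a bounded agent rejecting "the 1M lottery" isn't risk aversion over utils; with a cap below 1M, the 1M lottery simply cannot exist for that agent.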
Yeah, the utility lottery is a bizarre lottery. For one thing, even if it's only conducted in monetary payoffs, both the price of the ticket and the amount of money you win depend on your overall well-being. In particular, if you're on the edge of starvation, the ticket would become close to (but not quite) free.
I can’t imagine how it could be conducted in monetary payoffs, at least without a restrictive upper bound. Not only does the added utility of money decrease with scale, but you can only get so much utility out of money in a finite economy.
I’d be a bit surprised if, outside a certain range, utilons can be described as a function of money at all.
I’m really enjoying the contrast between your comment and mine.
It’s not every day that the same comment can elicit “By definition, this just can’t be true of anyone” and “Yeah, I think this is true of me.”