That goes hand in hand with his comments about complexity.
The straightforward expected utility analysis doesn’t include the cost of performing the analysis itself. Nor does it include the increased cost that the uncertainty imposes on all subsequent analyses.
We have limited computational power for executive functions. No doubt we have utility built into us to conserve those limited resources. Most people hate uncertainty and thinking, and they hate them much more than we do. I doubt I’m the only one here who has noticed that.
For me, the choice between 1A and 1B would depend on how badly I needed the money, which is why I disagree with Eliezer when he says that “marginal utility of the money doesn’t count”.
For example, let’s say I needed $20,000 in order to keep a roof over my head, food on my plate, and to generally survive. In this case, my penalty for failure is quite high, and IMO it would be more rational for me to take 1A. Sure, I could win more money if I picked 1B, but I could also die in that case. Thus, my expected utility for 1B would be something like
(33/34) U($27,000, alive) + (1/34) U($0, dead)
and U($anything, dead) is a very negative number.
On the other hand, if I was a billionaire who makes $20,000 per second just by existing, then I would either pick 1B, or refuse to play the game altogether, because my time could be better spent on other things.
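The survival-threshold argument above can be sketched numerically. This is a minimal illustration, not anyone’s actual utility function: the threshold, payouts, and death penalty are made-up numbers chosen to match the comment’s scenario.

```python
# Illustrative sketch: compare gambles 1A and 1B when $20,000 is
# needed to survive. All utility numbers here are hypothetical.

def utility(payout, need=20_000, death_penalty=-1_000_000):
    """Utility of a payout; falling below the survival threshold is catastrophic."""
    return payout if payout >= need else death_penalty

# Gamble 1A: $24,000 with certainty.
eu_1a = 1.0 * utility(24_000)

# Gamble 1B: 33/34 chance of $27,000, 1/34 chance of nothing.
eu_1b = (33 / 34) * utility(27_000) + (1 / 34) * utility(0)

print(eu_1a)          # 24000.0
print(eu_1b)          # negative: the 1/34 chance of "dead" dominates
print(eu_1a > eu_1b)  # True
```

With any sufficiently large death penalty, the 1/34 chance of ruin swamps the extra $3,000, so 1A wins for the agent who needs the money to survive.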
Reread the post; that’s not the paradox.
The paradox is that, if you need the $20k to survive, then you should prefer 2A to 2B, because the extra $3k 33% of the time doesn’t outweigh an additional 1% chance of dying.
If someone prefers A in both cases, or B in both cases, they can have a consistent utility function. If someone prefers A in one case and B in the other, they cannot have a consistent utility function.
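The consistency claim can be checked directly: gambles 2A/2B are just 1A/1B played with 34% probability (otherwise nothing), and diluting both lotteries by the same factor cannot flip an expected-utility ordering. A small sketch, with an arbitrary illustrative utility function:

```python
# Why mixed preferences (1A and 2B, or 1B and 2A) are inconsistent:
# scaling both lotteries by the same probability preserves the EU ordering.

def expected_utility(lottery, u):
    """Lottery is a list of (probability, payout) pairs."""
    return sum(p * u(x) for p, x in lottery)

u = lambda x: x ** 0.5  # any fixed utility function; concave, for example

g1a = [(1.0, 24_000)]                 # 1A: $24k for sure
g1b = [(33 / 34, 27_000), (1 / 34, 0)]  # 1B: 33/34 chance of $27k

def dilute(lottery, p):
    """Play `lottery` with probability p; otherwise get $0."""
    return [(p * q, x) for q, x in lottery] + [(1 - p, 0)]

g2a = dilute(g1a, 0.34)  # 2A: 34% chance of $24k
g2b = dilute(g1b, 0.34)  # 2B: 33% chance of $27k

# Whatever u is, the ordering of 1A vs 1B matches the ordering of 2A vs 2B.
same_order = (
    (expected_utility(g1a, u) > expected_utility(g1b, u))
    == (expected_utility(g2a, u) > expected_utility(g2b, u))
)
print(same_order)  # True
```

Since EU(2A) = 0.34·EU(1A) + 0.66·u($0) and EU(2B) = 0.34·EU(1B) + 0.66·u($0), no choice of u can rank 1A above 1B while ranking 2B above 2A.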
Right, I didn’t mean to imply that it was. But Eliezer seemed to be saying that picking 1A is irrational in general, in addition to the paradox, which is the notion I was disputing. It’s possible that I misinterpreted him, however.
He makes it clearer in comments.
What Caledonian is discussing is the certainty effect: essentially, having a term in your utility function for not having to multiply probabilities to get an expected value. That’s different from risk aversion, which is just the statement that the utility function is concave.
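The distinction can be made concrete with a toy model. All numbers here are made up for illustration: a concave utility captures risk aversion, while the certainty effect is modeled as a separate flat bonus for outcomes that need no probability-weighting at all.

```python
# Illustrative contrast between risk aversion and the certainty effect.
# Risk aversion lives in the curvature of u; the certainty effect is a
# separate (hypothetical) bonus for a sure thing.

def eu_risk_averse(lottery):
    """Concave utility (sqrt): risk averse, but no special love of certainty."""
    return sum(p * (x ** 0.5) for p, x in lottery)

def eu_certainty_effect(lottery, bonus=5.0):
    """Same concave utility, plus a flat bonus when the outcome is certain."""
    base = sum(p * (x ** 0.5) for p, x in lottery)
    return base + bonus if any(p == 1.0 for p, x in lottery) else base

sure_thing = [(1.0, 24_000)]
gamble = [(33 / 34, 27_000), (1 / 34, 0)]

# The merely risk-averse agent still prefers the gamble here...
print(eu_risk_averse(sure_thing) < eu_risk_averse(gamble))  # True
# ...while the certainty bonus tips the same agent toward the sure thing.
print(eu_certainty_effect(sure_thing) > eu_certainty_effect(gamble))  # True
```

Note that the second function is no longer an expected utility over outcomes at all, which is exactly why the certainty effect produces the Allais pattern that no concave utility function can.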
A bird in the hand...
Certainty is a form of utility, too.