Evidently I don’t understand how this works. I was under the impression that it was irrational to treat certain values and expected values differently.
On the other hand, my math is probably wrong. When I did the same calculations for a lottery with 1-in-136-million odds of winning and a $580 million jackpot, I calculated that buying a ticket had an expected utility of $3. This seems obviously wrong; otherwise everyone would jump at the chance to spend $1 or $2 on a lottery ticket.
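(Roughly, the calculation I mean is the sketch below; the $1 ticket price is my own assumption, and taxes, lump-sum discounting, and split jackpots are ignored, so the exact figure can come out slightly different from $3.)

```python
# Rough expected-value sketch using the figures quoted above.
p_win = 1 / 136_000_000       # quoted odds of winning
jackpot = 580_000_000         # quoted jackpot, in dollars
ticket_price = 1              # assumed ticket price (not stated above)

# Expected dollar value of buying one ticket, net of its price.
expected_value = p_win * jackpot - ticket_price
print(f"expected value per ticket: ${expected_value:.2f}")  # about $3.26
```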
Evidently I’m even worse at math than I thought.
I don’t think your math is wrong or bad. Rather, your confusion seems to come from conflating expected values with expected utilities. Consider an agent with several available actions. Learning the expected value (of, for example, money gained) of each action does not tell us which action the agent prefers. However, learning the expected utility of each action completely specifies the agent’s preferences over those actions. The reason expected utility is so much more powerful is simple: the agent’s utility function is defined to have this property.
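To make the distinction concrete, here is a small sketch (the sure-thing-versus-coin-flip setup and the square-root utility function are purely illustrative assumptions): two actions can have identical expected dollar values while the agent's expected utilities, and hence preferences, clearly differ.

```python
import math

def expected_value(outcomes):
    """outcomes: list of (probability, dollars) pairs."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, u):
    return sum(p * u(x) for p, x in outcomes)

sure_thing = [(1.0, 50)]               # receive $50 for certain
coin_flip = [(0.5, 0), (0.5, 100)]     # 50/50 chance of $0 or $100

u = math.sqrt  # one illustrative (risk-averse) utility function

# Equal expected values, so expected value alone cannot say which
# action this agent prefers...
print(expected_value(sure_thing), expected_value(coin_flip))   # 50.0 50.0

# ...but expected utility can: by definition the agent prefers the action
# with the higher expected utility (here, the sure thing).
print(expected_utility(sure_thing, u))  # ~7.07
print(expected_utility(coin_flip, u))   # 5.0
```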
I gave a brief explanation of utility functions in this previous comment.
In your lottery example, the expected value is $3, but the expected utility is unspecified (and differs from person to person). Thus we cannot tell, from the expected value alone, whether anyone would want to spend any amount of money on this lottery ticket.
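For instance (the utility functions, the $1 ticket price, and the $50,000 starting wealth below are illustrative assumptions, not anything stated in the thread), a risk-neutral person and a risk-averse person can look at the very same ticket and reach opposite conclusions:

```python
import math

p_win = 1 / 136_000_000
jackpot = 580_000_000
ticket = 1          # assumed ticket price
wealth = 50_000     # assumed current wealth

def expected_utility(u, buy):
    """Expected utility of final wealth, with or without buying one ticket."""
    if not buy:
        return u(wealth)
    return (p_win * u(wealth - ticket + jackpot)
            + (1 - p_win) * u(wealth - ticket))

for name, u in [("risk-neutral (u = dollars)", lambda x: x),
                ("risk-averse (u = log dollars)", math.log)]:
    buys = expected_utility(u, buy=True) > expected_utility(u, buy=False)
    print(f"{name}: would buy the ticket -> {buys}")
```

Both answers are consistent with the same $3 expected value; only the utility function settles what a given person actually prefers.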