I don’t think your math is wrong or bad. Rather, your confusion seems to come from conflating expected values with expected utilities. Consider an agent with several available actions. Learning the expected value (of, say, money gained) of each action does not tell us which action the agent prefers. Learning the expected utility of each action, however, gives a complete specification of the agent’s preferences over those actions. The reason expected utility is so much more powerful is simple: the agent’s utility function is defined to have exactly this property.
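To make this concrete (with my own toy numbers, nothing from your post): suppose action $A$ pays \$50 for sure, action $B$ pays \$0 or \$100 with equal probability, and the agent happens to be risk-averse with $u(x) = \sqrt{x}$. The expected values coincide, but the expected utilities do not:

$$\mathbb{E}[A] = \mathbb{E}[B] = 50, \qquad \mathbb{E}[u(A)] = \sqrt{50} \approx 7.07 \;>\; \mathbb{E}[u(B)] = \tfrac{1}{2}\sqrt{0} + \tfrac{1}{2}\sqrt{100} = 5.$$

So this agent strictly prefers $A$, even though expected value alone cannot distinguish the two actions.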
I gave a brief explanation of utility functions in this previous comment.
In your lottery example, the expected value is $3, but the expected utility is unspecified (and differs from person to person). Thus we cannot tell how much, if anything, any given person would be willing to pay for this lottery ticket.
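Here is a minimal sketch of that gap, assuming (since the details aren't restated here) a hypothetical lottery that pays $30 with probability 0.1, so its expected value is the $3 above, and a hypothetical agent with $100 of wealth and square-root utility:

```python
import math

# Hypothetical lottery, chosen so the expected value is $3:
# (probability, payout) pairs.
lottery = [(0.1, 30.0), (0.9, 0.0)]

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u, wealth, price):
    # Utility is taken over final wealth, not over the raw prize.
    return sum(p * u(wealth - price + x) for p, x in lottery)

u = math.sqrt          # a concave (risk-averse) utility function
wealth = 100.0         # hypothetical starting wealth

print(expected_value(lottery))  # 3.0 -- the same number for everyone

# Whether buying beats abstaining depends on u, not on the $3 alone.
for price in (1.0, 3.0, 5.0):
    buys = expected_utility(lottery, u, wealth, price) > u(wealth)
    print(f"price ${price:.0f}: buys ticket -> {buys}")
```

For this particular agent the ticket is worth buying at $1 but not at its expected value of $3, while a risk-neutral agent ($u(x) = x$) would pay any price below $3. That divergence is exactly why the expected value alone can't settle the question.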