I sometimes have the same intuition as banx. You’re right that the problem is not in the choice but in the utility function, and it most likely stems from thinking of utility as money.
Let’s examine the previous example, restated in money (dollars):
−$100 with 99.9% chance and +$10,000 with 0.1% chance, vs. a 100% chance at +$1.
When doing the math, you have to take future consequences into account as well. For example, if you knew you would later be offered 100 favorable bets, each with an expected payoff of $0.50 and each costing only $1 to enter, then you have to count this in your original payoff calculation if losing the $100 would prevent you from taking those other bets.
Basically, you have to think through all the long-term consequences when calculating expected payoff, even in dollars.
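As a rough sketch of that calculation (the numbers come from the example above; treating the lost $100 as also forfeiting all 100 future bets is a simplifying assumption):

```python
# Expected dollar value of the risky bet, ignoring future consequences.
p_win, win = 0.001, 10_000
p_lose, lose = 0.999, -100
ev_naive = p_win * win + p_lose * lose  # 10.00 - 99.90 = -89.90

# Now include the opportunity cost: 100 future bets worth an expected
# $0.50 each, which (by assumption) you forfeit if you lose the $100.
future_value = 100 * 0.50  # $50 of expected value at stake
ev_adjusted = p_win * win + p_lose * (lose - future_value)
# 10.00 - 0.999 * 150 = -139.85: the long-term consequences make
# the risky bet look even worse in this particular setup.
```

The point is just that the naive per-bet expected value and the expected value including downstream consequences can differ substantially.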
Then when you try to convert this to utility, it’s even more complicated. Is the utility per dollar gained in the +$10,000 case equivalent to the utility per dollar lost in the -$100 case? Would you feel guilty and beat yourself up afterwards if you took a bet that you had a 99.9% chance of losing? Even though a purely rational agent probably shouldn’t feel this, it’s still likely a factor in most actual humans’ utility functions.
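One standard way to see why utility per dollar differs between the gain and the loss is a concave utility function, e.g. log of wealth (the wealth figure here is purely illustrative):

```python
import math

# Toy model: diminishing marginal utility of money via log-wealth.
wealth = 1_000  # illustrative starting wealth in dollars

def utility(w):
    return math.log(w)

# Average utility gained per dollar in the +$10,000 case:
gain_per_dollar = (utility(wealth + 10_000) - utility(wealth)) / 10_000

# Average utility lost per dollar in the -$100 case:
loss_per_dollar = (utility(wealth) - utility(wealth - 100)) / 100

# With concave utility, each lost dollar costs more utility than
# each gained dollar provides, so the two are not interchangeable.
```

This ignores the psychological factors (guilt, regret) mentioned above, which would skew the comparison even further.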
TrustVectoring summed it up well above:
If the utility function you think you have prefers B over A, and you prefer A over B, then there’s some fact that’s missing from the utility function you think you have.
If you still prefer picking the +1 option, then your assessment that the first choice only carries a negative utility of 100 is probably wrong. There are some other factors that make it a less attractive choice.