I’d flip that around. Whatever action you end up choosing reveals what you think has highest utility, according to the information and utility function you have at the time. It’s almost a definition of what utility is—if you consistently make choices that rank lower according to what you think your utility function is, then your model of your utility function is wrong.
If the utility function you think you have prefers B over A, and you prefer A over B, then there’s some fact that’s missing from the utility function you think you have (probably related to risk).
I’ve recently come to terms with how much fear/anxiety/risk avoidance is in my revealed preferences. I’m working on using that to do effective long-term planning; the best trick I have so far is weighing “unacceptable status quo continues” as a risk in its own right. That, and making explicit comparisons between anticipated and experienced outcomes of actions (consistently over-estimating risks doesn’t help any, and I’ve been doing that).
I sometimes have the same intuition as banx. You’re right that the problem is not in the choice but in the utility function, and it most likely stems from thinking about utility as money.
Let’s examine the previous example and translate it into money (dollars):
−$100 with a 99.9% chance and +$10,000 with a 0.1% chance, versus a 100% chance of +$1.
When doing the math, you have to take future consequences into account as well. For example, suppose you knew you would later be offered 100 favorably loaded bets, each with an expected net payoff of $0.50 and each costing only $1 to participate in. If losing the $100 would prevent you from taking those bets, that foregone value has to be counted in the original payoff calculation.
Basically, you have to think through all the long-term consequences when calculating expected payoff, even in dollars.
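Here’s a minimal sketch of that adjustment. The probabilities and dollar amounts are just the ones from the example above, and the assumption that losing the $100 locks you out of the future bets is exactly the constraint described; nothing else is implied:

```python
# Expected value of the risky bet, with and without the opportunity cost
# of the future bets you'd forfeit by losing your bankroll.

P_LOSE, LOSS = 0.999, -100        # 99.9% chance of losing $100
P_WIN, WIN = 0.001, 10_000        # 0.1% chance of winning $10,000

FUTURE_BETS = 100                 # follow-up bets from the example
EV_PER_FUTURE_BET = 0.50          # expected net profit of each, in dollars

# Naive expected value, ignoring what the loss would cost you later.
naive_ev = P_LOSE * LOSS + P_WIN * WIN

# If losing the $100 means you can no longer afford the future bets,
# the losing branch also forfeits their expected profit.
opportunity_cost = FUTURE_BETS * EV_PER_FUTURE_BET
adjusted_ev = P_LOSE * (LOSS - opportunity_cost) + P_WIN * WIN

print(f"naive EV:    ${naive_ev:,.2f}")
print(f"adjusted EV: ${adjusted_ev:,.2f}")
```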
Then when you try to convert this to utility, it’s even more complicated. Is the utility per dollar gained in the +$10,000 case equivalent to the utility per dollar lost in the -$100 case? Would you feel guilty and beat yourself up afterwards if you took a bet that you had a 99.9% chance of losing? Even though a purely rational agent probably shouldn’t feel this, it’s still likely a factor in most actual humans’ utility functions.
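To make that nonlinearity concrete, here’s a toy sketch. It assumes, purely for illustration, a logarithmic utility of wealth and a $1,000 bankroll; neither assumption comes from the discussion above, but any concave (risk-averse) utility function gives the same qualitative answer:

```python
import math

WEALTH = 1_000  # assumed current bankroll, for illustration only

def utility(wealth: float) -> float:
    """Concave (risk-averse) utility of total wealth."""
    return math.log(wealth)

# Utility change per dollar, for the win and the loss from the example.
gain_per_dollar = (utility(WEALTH + 10_000) - utility(WEALTH)) / 10_000
loss_per_dollar = (utility(WEALTH) - utility(WEALTH - 100)) / 100

print(f"utility gained per dollar won:  {gain_per_dollar:.6f}")
print(f"utility lost per dollar lost:   {loss_per_dollar:.6f}")
# With a concave utility function, the marginal dollars of the big win are
# worth less than the dollars you already have, so the two rates differ.
```

Under these assumptions the utility lost per dollar in the −$100 branch comes out several times larger than the utility gained per dollar in the +$10,000 branch, which is one way the dollar framing and the utility framing come apart.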
TrustVectoring summed it up well above:
If the utility function you think you have prefers B over A, and you prefer A over B, then there’s some fact that’s missing from the utility function you think you have.
If you still prefer picking the +1 option, then your assessment that the first choice gives only a negative utility of 100 is probably wrong; there are other factors that make it a less attractive choice.