For what it’s worth, prospect theory helps explain this paradox.
The first aspect is how probability is weighted in the mind. In experiment one, you’re comparing 1% to 10%. A 1% objective probability gets stretched drastically to, say, a 10% subjective probability, while a 10% objective probability only gets stretched to 15%. Functionally, your brain feels like there’s a 1.5x difference in probability when there’s really a 10x difference. In experiment two, you’re comparing 10% to 11%. The subjective probabilities are going to be almost equally stretched (say, 10% -> 15% and 11% -> 16%). Functionally, your brain feels like there’s a ~1.07x difference between them when there’s really a 1.10x difference. In other words, your brain discounts the difference between 1% and 10% disproportionately relative to the difference between 10% and 11%.
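To make those ratios concrete, here’s a minimal sketch using the illustrative subjective probabilities from above (the specific numbers are just the assumptions in this comment, not fitted prospect-theory weights):

```python
# Illustrative subjective probabilities used above (assumed, not fitted values).
subjective = {0.01: 0.10, 0.10: 0.15, 0.11: 0.16}

# Experiment one: 1% vs 10%
print(f"objective ratio {0.10 / 0.01:.2f}x, "
      f"felt ratio {subjective[0.10] / subjective[0.01]:.2f}x")   # 10.00x vs 1.50x

# Experiment two: 10% vs 11%
print(f"objective ratio {0.11 / 0.10:.2f}x, "
      f"felt ratio {subjective[0.11] / subjective[0.10]:.2f}x")   # 1.10x vs 1.07x
```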
The second aspect has to do with how we compress the value of outcomes. In experiment two, we’re comparing a gain of $1m with a gain of $5m. If the most utilons we can gain or lose is 100, then $5m would be around 90 utilons (best thing ever!) and $1m would be around 70 utilons (omg, great thing!), a difference of 20 utilons. In experiment one, we’re looking at gaining 90 utilons (win $5m, best thing ever!) or losing 90 utilons (lose out on a sure $1m, worst thing ever!), a spread of 180 utilons. Basically, losses and gains are compressed separately (and differently), and our expected utilities are measured from whatever our reference point is.
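Here’s a toy value function in the same spirit; the curve shape, scale, and loss-aversion multiplier are assumptions picked only so that a $1m gain lands near 70 utilons, a $5m gain near 90, and a $1m loss near −90, matching the figures above:

```python
import math

CAP = 100.0     # most utilons we can feel in either direction (from above)
SCALE = 9.8e6   # dollars at which a gain saturates at the cap (assumed)
ALPHA = 0.156   # compression exponent (assumed)
LAMBDA = 1.3    # loss-aversion multiplier: losses loom larger (assumed)

def utilons(dollars: float) -> float:
    """Toy compressive value function measured from the current reference point."""
    sign = 1.0 if dollars >= 0 else -1.0
    x = min(abs(dollars), SCALE)
    value = CAP * (x / SCALE) ** ALPHA
    if sign < 0:
        value *= LAMBDA                      # losses are compressed differently than gains
    return max(-CAP, min(CAP, sign * value)) # clamp to the +/-100 utilon cap

print(round(utilons(5_000_000)))    # ~90: "best thing ever"
print(round(utilons(1_000_000)))    # ~70: "omg, great thing"
print(round(utilons(-1_000_000)))   # ~-91: close to the -90 used above
```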
Combined, your brain feels like experiment one is asking you to choose between 70 expected utilons (70 utilons @ 100%) or ~67 expected utilons (70 utilons @ 89%, plus 90 utilons @ 15%, minus 90 utilons @ 10%). Experiment two feels like choosing between 11.2 expected utilons (70 utilons @ 16%) or 13.5 expected utilons (90 utilons @ 15%). In other words, the math roughly works out to ‘really close, but I’d rather have the sure thing’ in experiment one and ‘yeah, the better payoff overcomes the lower odds’ in experiment two.
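Putting the two pieces together, here’s a quick sketch that reproduces those back-of-the-envelope numbers (same assumed subjective weights and utilon values as before):

```python
# Combine the assumed subjective weights with the assumed utilon values from above.
U_GAIN_1M, U_GAIN_5M, U_LOSS_1M = 70, 90, 90                       # utilon magnitudes
w = {1.00: 1.00, 0.89: 0.89, 0.11: 0.16, 0.10: 0.15, 0.01: 0.10}   # subjective weights (assumed)

# Experiment one: sure $1m vs. the gamble (89% $1m, 10% $5m, 1% nothing).
# Relative to the sure-thing reference point, the 1% "nothing" outcome feels like losing $1m.
sure_thing = U_GAIN_1M * w[1.00]
gamble = U_GAIN_1M * w[0.89] + U_GAIN_5M * w[0.10] - U_LOSS_1M * w[0.01]
print(round(sure_thing), round(gamble, 1))   # 70 vs ~66.8: the sure thing feels better

# Experiment two: 11% chance of $1m vs. 10% chance of $5m.
print(round(U_GAIN_1M * w[0.11], 1), round(U_GAIN_5M * w[0.10], 1))  # 11.2 vs 13.5
```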
I’m not trying to say whether this is how we should calculate subjective probabilities, but it seems to be how we actually do it. Personally, the intuitive choice feels so right that I’d err on the side of evolution for now. I would not be surprised if the truly rational answer is to trust our heuristics, because the naively rational answer only works in hypothetical models.