As kim0 notes, the choices make sense if you are risk-averse; the standard choices are the ones with the least variance. (This seems to be labeled “ambiguity aversion” rather than risk aversion, but I think it is clearly a sort of risk aversion.) However, risk aversion isn’t even necessary to support choices similar to these.
Let me propose a different game that should give rise to the same choices.
Let’s imagine there is an urn containing 90 balls. 30 of them are red, and the other 60 are either green or blue, in unknown proportion. Whatever color you specify, you get a dollar for each ball of that color that is in the urn. The first question is: A) do you prefer to pick red, or B) do you prefer to pick green? The second question is: C) do you prefer to pick (red or blue), or D) do you prefer to pick (green or blue)?
Here, it is clear that the expected values work out the same as in the original game. Given any prior that is symmetric over the green/blue split, picking any one color has an expected payout of $30, and picking two colors has an expected payout of $60. However, option A is a sure $30, while option B is anywhere between $0 and $60. Option C is anywhere between $30 and $90, while option D is a sure $60.
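A quick sketch of those numbers, assuming (as one concrete choice of symmetric prior) a uniform prior over the 61 possible green/blue splits:

```python
# Expected payout and range for each option in the "paid per ball" game.
# The uniform prior over splits is an assumption for illustration; any
# prior symmetric in green <-> blue gives the same expected values.
from fractions import Fraction

splits = range(61)  # n = number of green balls, so 60 - n are blue
prior = {n: Fraction(1, 61) for n in splits}

def payout(option, n_green):
    counts = {"red": 30, "green": n_green, "blue": 60 - n_green}
    return sum(counts[c] for c in option)

for label, option in [("A (red)", ["red"]),
                      ("B (green)", ["green"]),
                      ("C (red or blue)", ["red", "blue"]),
                      ("D (green or blue)", ["green", "blue"])]:
    values = [payout(option, n) for n in splits]
    ev = sum(prior[n] * payout(option, n) for n in splits)
    print(f"{label}: expected ${ev}, range ${min(values)}-${max(values)}")
```

This prints an expected $30 for A and B and an expected $60 for C and D, with A and D having zero-width ranges.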
If you have the usual diminishing marginal utility over money, then A and D are actually the rational choices for an expected utility maximizer, without even taking risk/ambiguity aversion into account.
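To see this concretely, here is a small check using an arbitrary concave utility (square root; the specific function and the uniform prior are illustrative assumptions, not part of the original argument):

```python
# With a strictly concave utility, the sure payouts A ($30) and D ($60)
# beat the spread-out options B and C in expected utility, by Jensen's
# inequality. Uniform prior over the green/blue split assumed.
import math

splits = range(61)   # n = number of green balls
u = math.sqrt        # any strictly concave utility gives the same ordering

def expected_utility(payoffs):
    return sum(u(p) for p in payoffs) / len(payoffs)

eu_A = u(30)                                              # sure $30
eu_B = expected_utility([n for n in splits])              # $n, uniform 0..60
eu_C = expected_utility([30 + (60 - n) for n in splits])  # $30 + blue count
eu_D = u(60)                                              # sure $60

print(eu_A > eu_B, eu_D > eu_C)  # both True by Jensen's inequality
```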
But you’re extracting one ball. AFAICT, for any probability distribution such that P(there are n blue balls and 60 - n green balls) = P(there are 60 - n blue balls and n green balls) for all n, and for any utility function, E(utility|I bet “red”) equals E(utility|I bet “green”) and E(utility|I bet “red or blue”) equals E(utility|I bet “green or blue”). (The only values of your utility function that matter are those for “you win the bet” and “you lose the bet”, and given that utility functions are only defined up to positive-scale affine transformations, you can normalize them to 1 and 0 respectively.)
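The symmetry claim can be checked mechanically. Here is a sketch that builds an arbitrary prior symmetric under swapping green and blue and confirms the paired bets have equal win probability (and hence, with utility normalized to win = 1, lose = 0, equal expected utility). The random weights are just one way to generate such a prior:

```python
# For any prior symmetric in n_green <-> n_blue, each bet's expected
# utility in the single-draw game equals its win probability, and the
# pairs (red vs green) and (red-or-blue vs green-or-blue) come out equal.
from fractions import Fraction
import random

random.seed(0)
# Arbitrary symmetric prior over n = number of green balls (0..60),
# constructed so that P(n) = P(60 - n).
half = [random.randint(1, 10) for _ in range(31)]  # weights for n = 0..30
weights = half + half[:-1][::-1]                   # mirrored for n = 31..60
total = sum(weights)
prior = {n: Fraction(w, total) for n, w in enumerate(weights)}

def p_win(colors):
    # Probability that the one drawn ball matches one of `colors`.
    return sum(prior[n] * Fraction(sum({"red": 30, "green": n,
                                        "blue": 60 - n}[c] for c in colors), 90)
               for n in prior)

assert p_win(["red"]) == p_win(["green"]) == Fraction(1, 3)
assert p_win(["red", "blue"]) == p_win(["green", "blue"]) == Fraction(2, 3)
print("symmetric prior => equal expected utilities for each pair")
```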
I’m confused. Is this supposed to refer to my game, or the game in the OP? In my game, you examine all the balls.
The rest seems rather rushed—I’m having trouble parsing it. If it helps, I was not claiming that in the original game, an expected utility maximizer would strictly prefer A and D without taking into account risk/ambiguity aversion.
I was talking about the OP’s game. (The “the choices make sense if you are risk-averse” in the grandparent seems to be about it. Are you using risk aversion with a meaning other than “downward-concave utility function” by any chance?)
Are you using risk aversion with a meaning other than “downward-concave utility function” by any chance?
Sort of. I was referring to ambiguity aversion, as you can see in the clarification in that very sentence. But I would argue that ambiguity aversion is just the same thing as risk aversion, at a different meta-level.
Though it might take an insane person to prefer a “certain 1⁄3 chance” over an “uncertain 1⁄3 chance”.