Preferring red is rational, because it carries a known amount of risk, while each of the other two colours carries an unknown risk.
This follows from the Kelly criterion and Darwinian evolution. Negative outcomes outweigh positive ones, because negative outcomes lead to sickness and death through starvation, poverty, and kicks in the head.
This only holds at the beginning, because when the experiment is repeated, the probabilities of blue and green become clearer.
I think what you’re saying is just that humans are risk-averse, and so a gamble with lower variance is preferable to one with higher variance (and the same mean)… but if the number of green vs. blue is randomly determined with expected value 30 to 30, then it has the same variance. You need to involve something more (like the intentional stance) to explain the paradox.
No, because expected value is not the same thing as variance.
Betting on red gives a winning probability of exactly 1⁄3.
Betting on green gives a winning probability of 1⁄3 +/- x, and that spread is variance, which is bad.
You don’t get exactly 1⁄3 of a win with no variance in either case. You get exactly 1 win, 1⁄3 of the time, and no win 2⁄3 of the time.
As an example, when betting on green, suppose there’s a 1⁄3 chance of 30 blue and 30 green balls, a 1⁄3 chance of 60 green, and a 1⁄3 chance of 60 blue. And there are always 30 red balls.
There is a 1⁄3 of 1⁄3 chance that there are 30 green balls and you pick one. There is a 2⁄3 of 1⁄3 chance that there are 60 green balls and you pick one. There is no chance that there are no green balls and you still pick one. There is no other way to get a green ball. The total chance of picking a green ball is therefore 1⁄3, that is, 1⁄3 of 1⁄3 plus 2⁄3 of 1⁄3. That means that 1⁄3 of the time you win and 2⁄3 of the time you lose, just as in the case of betting on the red ball.
A distribution that is 1 one-third of the time and 0 two-thirds of the time has some computable variance. Whatever it is, that’s the variance in your number of wins when you bet on green, and it’s also the variance in your number of wins when you bet on red.
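For what it’s worth, the arithmetic above can be checked exactly. A minimal sketch, using the three-scenario prior from the example (30 red balls plus 60 green-or-blue, 90 in total):

```python
from fractions import Fraction as F

# Three equally likely compositions of the 60 non-red balls: (green, blue).
scenarios = [(30, 30), (60, 0), (0, 60)]
total = 90  # 30 red balls are always present

p_green = sum(F(1, 3) * F(g, total) for g, _ in scenarios)
p_red = F(30, total)

# Both bets win with probability exactly 1/3 ...
assert p_green == p_red == F(1, 3)

# ... so both are Bernoulli(1/3) and have identical win-count variance p(1-p).
var_red = p_red * (1 - p_red)
var_green = p_green * (1 - p_green)
assert var_red == var_green == F(2, 9)
```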
Like I said below, write out the actual random variables you use as a Bayesian: they have identical distributions if the mean of your green:blue prior is 30 to 30.
There is literally no sane justification for the “paradox” other than updating on the problem statement to have an unbalanced posterior estimate of green vs. blue.
Bayesian reasoning is for maximizing the probability of being right. Kelly’s criterion is for maximizing aggregated value.
And yet again, the distributions of the probabilities are different, because they have different variances, and differences in variance give different aggregated values, which is what people tend to try to optimize.
Aggregating value in this case means getting more pies and fewer boots to the head. Pies are of no value to you when you are dead from boots to the head, and this is the root cause of preferring lower variance.
This isn’t much of a discussion when you just ignore and deny my argument instead of trying to understand it.
If I decide whether you win or lose by drawing a random number from 1 to 60 in a symmetric fashion, then rolling a 60-sided die and comparing the result to the number I drew, this is the same random variable as a single fair coinflip. Unless you are playing multiple times (in which case you’ll experience higher variance from the correlation) or you have a reason to suspect an asymmetric probability distribution of green vs. blue, the two gambles will have the exact same effect in your utility function.
The above paragraph is mathematically rigorous. You should not disagree unless you find a mathematical error.
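As a concrete check of the two-stage construction (under one reading of “symmetric”: the drawn threshold is distributed symmetrically about 30, and you win if the d60 rolls that number or less — these details are assumptions, not spelled out in the original comment):

```python
from fractions import Fraction as F

# Assumed reading: the threshold t is symmetric about 30; here it is
# 10 or 50 with equal probability. You win if a fair d60 shows t or less.
thresholds = {10: F(1, 2), 50: F(1, 2)}

p_win = sum(p * F(t, 60) for t, p in thresholds.items())
assert p_win == F(1, 2)  # identical to a single fair coin flip
```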
And yet again I am reminded why I do not frequent this supposedly rational forum more. Rationality swishes by over most people’s heads here, except for a few really smart ones. You people make it too complicated. You write too much. Lots of these supposedly deep intellectual problems have quite simple answers, such as this Ellsberg paradox. You just have to look and think a little outside their boxes to solve them, or see that they are unsolvable, or that they are wrong questions.
I will yet again go away, to solve more useful and interesting problems on my own.
Oh, and Orthonormal, here is my correct final answer to you: You do not understand me, and this is your fault.
Nobody is choosing between green vs. blue based on variance.
Option one: a sure 1⁄3, or an expected 1⁄3 with variance.
Option two: an expected 2⁄3 with variance, or a sure 2⁄3.
Red by itself is certain, blue with green is certain. Green by itself is uncertain, red with blue is uncertain.
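The four probabilities being compared can be tabulated exactly (a small sketch; the 30/60/90 counts are the standard Ellsberg setup described above):

```python
from fractions import Fraction as F

n = 90  # 30 red balls plus 60 that are green or blue in unknown proportion

p_red = F(30, n)                    # exactly 1/3: certain
p_green_blue = F(60, n)             # exactly 2/3: certain
p_green = (F(0, n), F(60, n))       # anywhere from 0 to 2/3: uncertain
p_red_blue = (F(30, n), F(90, n))   # anywhere from 1/3 to 1: uncertain

assert p_red == F(1, 3) and p_green_blue == F(2, 3)
assert p_green == (F(0), F(2, 3))
assert p_red_blue == (F(1, 3), F(1))
```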
Write out the random variables. They have the same distribution as each other. I know that it “feels” like one has more variance than the other, but that’s a cognitive illusion.
There’s variance in the frequency, which results in variance in your metauncertainty. The 1⁄3 chance of red derives from a certain frequency of 1⁄3. The 1⁄3 chance of blue derives from uncertainty about the frequency, which is between 0 and 2⁄3.
It seems like the sort of person who would prefer to pick A and D in my game due to risk aversion would also prefer A and D in this one, for the same reason.
The effect of the metauncertainty on your utility function is the same as the effect of regular old uncertainty, unless you’re planning to play the game multiple times. I am speaking rigorously here; do not keep disagreeing unless you can find a mathematical error.
ETA: Explained more thoroughly here.
It does not have the same effect on your utility function, if your utility function has a term for your metauncertainty. Much as I might pay $3 in insurance to turn an expected, variable loss of $10 into a certain loss of $10, I might also pay $3 to switch from B to A and C to D, on the grounds that I favor situations with less metauncertainty.
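The insurance intuition can be sketched with a toy concave utility function (the wealth and loss numbers here are illustrative assumptions, not from the comment):

```python
import math

wealth = 25.0
u = math.log  # concave utility => risk aversion

# Gamble: lose $20 or $0 with equal probability (expected loss $10).
eu_gamble = 0.5 * u(wealth - 20) + 0.5 * u(wealth)

# Insurance: pay the $10 expected loss plus a $3 premium, with certainty.
eu_insured = u(wealth - 13)

# The risk-averse agent prefers the certain $13 outlay to the $10-mean gamble.
assert eu_insured > eu_gamble
```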
Consider a horse race among four horses, only three of which will run. A and B each have a 1⁄4 probability of winning the race, and C and D each have a 1⁄2 probability of winning it if they run, but C and D flip a fair coin to see who gets to run, after bets are placed. Then a bet on A has the same probability of winning as a bet on C. But some people might still prefer to bet on A rather than C, since they don’t want to have bet on a horse that didn’t even run the race.
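The arithmetic in that example checks out:

```python
from fractions import Fraction as F

p_A = F(1, 4)            # A always runs and wins 1/4 of the time
p_C = F(1, 2) * F(1, 2)  # C runs with probability 1/2, then wins with probability 1/2
assert p_A == p_C == F(1, 4)  # the two bets pay off equally often
```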
If you endorse this reasoning, you should also accept inconsistency in the Allais Paradox. From the relevant post:

The problem with attaching a huge extra value to certainty is that one time’s certainty is another time’s probability.
The only reason that I personally would prefer the red bet to the green bet is that it’s less exploitable by a malicious experimenter: in other words, given that the experimenter gave me those options, my estimate of the green:blue distribution becomes asymmetric. All other objections in this thread are unsound.
There is a possible state of the world where I have picked “green” and it turns out that there were never any green balls in the world. It is possible to have a very strong preference to not be in that state of the world. There is nothing irrational about having a particular preference. Preferences (and utility functions) cannot be irrational.
If you endorse this reasoning, you should also accept inconsistency in the Allais Paradox.

That does not necessarily follow. The Allais Paradox is not about metauncertainty; it is about putting a special premium on “absolute certainty” that does not translate to relative certainty. Someone who values certainty could consistently choose 1A and 2A.
How many boots to the head is that preference worth? I doubt it’s worth very many to you personally, and thus your personal reluctance is due to something else.
I’m done arguing this. I usually find you pretty levelheaded, but your objections in this thread are baffling.