The Ellsberg paradox and money pumps

Followup to: The Savage theorem and the Ellsberg paradox

In the previous post, I presented a simple version of Savage’s theorem, and I introduced the Ellsberg paradox. At the end of the post, I mentioned a strong Bayesian thesis, which can be summarised: “There is always a price to pay for leaving the Bayesian Way.”[1] But not always, it turns out. I claimed that there was a method that is Ellsberg-paradoxical, therefore non-Bayesian, but can’t be money-pumped (or “Dutch booked”). I will present the method in this post.
I’m afraid this is another long post. There’s a short summary of the method at the very end, if you want to skip the jibba jabba and get right to the specification. Before trying to money-pump it, I’d suggest reading at least the two highlighted dialogues.
Ambiguity aversion
To recap the Ellsberg paradox: there’s an urn with 30 red balls and 60 other balls that are either green or blue, in unknown proportions. Most people, when asked to choose between betting on red or on green, choose red, but, when asked to choose between betting on red-or-blue or on green-or-blue, choose green-or-blue. For some people this behaviour persists even after due calculation and reflection. This behaviour is non-Bayesian, and is the prototypical example of ambiguity aversion.
There were some major themes that came out in the comments on that post. One theme was that I Fail Technical Writing Forever. I’ll try to redeem myself.
Another theme was that the setup given may be a bit too symmetrical. The Bayesian answer would be indifference, and really, you can break ties however you want. However, the paradoxical preferences are typically strict, rather than just tie-breaking behaviour. (And when the preference is not strict, we shouldn’t call it ambiguity aversion.) One suggestion was to add or remove a couple of red balls. Speaking for myself, I would still make the paradoxical choices.
A third theme was that ambiguity aversion might be a good heuristic if betting against someone who may know something you don’t. Now, no such opponent was specified, and speaking for myself, I’m not inferring one when I make the paradoxical choices. Still, let me admit that it’s not contrived to infer a mischievous experimenter from the Ellsberg setup. One commentator puts it better than me:
Betting generally includes an adversary who wants you to lose money so they win it. Possibly in psychology experiments [this might not apply] … But generally, ignoring the possibility of someone wanting to win money off you when they offer you a bet is a bad idea.
Now betting is supposed to be a metaphor for options with possibly unknown results. In which case sometimes you still need to account for the possibility that the options were made available by an adversary who wants you to choose badly, but less often. And you should also account for the possibility that they were from other people who wanted you to choose well, or that the options were not determined by any intelligent being or process trying to predict your choices, so you don’t need to account for an anticorrelation between your choice and the best choice. Except for your own biases.
We can take betting on the Ellsberg urn as a stand-in for various decisions under ambiguous circumstances. Ambiguity aversion can be Bayesian if we assume the right sort of correlation between the options offered and the state of the world, or the right sort of correlation between the choice made and the state of the world. In that case just about anything can be Bayesian. But sometimes the opponent will not have extra information, nor extra power. There might not even be any opponent as such. If we assume there are no such correlations, then ambiguity aversion is non-Bayesian.
The final theme was: so what? Ambiguity aversion is just another cognitive bias. One commentator specifically complained that I spent too much time talking about various abstractions and not enough time talking about how ambiguity aversion could be money-pumped. I will fix that now: I claim that ambiguity aversion cannot be money-pumped, and the rest of this post is about my claim.
I’ll start with a bit of name-dropping and some whig history, to make myself sound more credible than I really am.[2] In the last twenty years or so, many models of ambiguity-averse reasoning have been constructed. Choquet expected utility[3] and maxmin expected utility[4] were early proposed models of ambiguity aversion. Later, multiplier preferences[5] were the result of applying the ideas of robust control to macroeconomic models. This results in ambiguity aversion, though it was not explicitly motivated by the Ellsberg paradox. More recently, variational preferences[6] generalise both multiplier preferences and maxmin expected utility. What I’m going to present is a finitary case of variational preferences, with some of my own amateur mathematical fiddling for rhetorical purposes.
Probability intervals
The starting idea is simple enough, and may have already occurred to some LW readers. Instead of using a prior probability for events, can we not use an interval of probabilities? What should our betting behaviour be for an event with probability 50%, plus or minus 10%?
There are some different ways of filling in the details. So to be quite clear, I’m not proposing the following as the One True Probability Theory, and I am not claiming that the following is descriptive of many people’s behaviour. What follows is just one way of making ambiguity aversion work, and perhaps the simplest way. This makes sense, given my aim: I should just describe a simple method that leaves the Bayesian Way, but does not pay.
Now, sometimes disjoint ambiguous events together make an event with known probability. Or even a certainty, as in an event and its negation. If we want probability intervals to be additive (and let’s say that we do) then what we really want are oriented intervals. I’ll use +- or -+ (pronounced: plus-or-minus, minus-or-plus) to indicate two opposite orientations. So, if P(X) = 1⁄2 +- 1⁄10, then P(not X) = 1⁄2 -+ 1⁄10, and these add up to 1 exactly.
Such oriented intervals are equivalent to ordered pairs of numbers. Sometimes it’s more helpful to think of them as oriented intervals, but sometimes it’s more helpful to think of them as pairs. So 1⁄2 +- 1⁄10 is the pair (3/5,2/5). And 1⁄2 -+ 1⁄10 is (2/5,3/5), the same numbers in the opposite order. The sum of these is (1,1), which is 1 exactly.
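For concreteness, here’s a minimal Python sketch of oriented intervals as ordered pairs. The representation and helper names are mine, not part of the method itself:

```python
from fractions import Fraction as F

# An oriented interval is an ordered pair (a, b): its midpoint is (a+b)/2,
# its signed half-length is (a-b)/2. A positive half-length means "+-",
# a negative one means "-+".
def mid(p):
    return (p[0] + p[1]) / 2

def half(p):
    return (p[0] - p[1]) / 2

def add(p, q):
    # Componentwise addition, so each side of the pair stays additive.
    return (p[0] + q[0], p[1] + q[1])

x     = (F(3, 5), F(2, 5))   # 1/2 +- 1/10
not_x = (F(2, 5), F(3, 5))   # 1/2 -+ 1/10
print(add(x, not_x))         # (Fraction(1, 1), Fraction(1, 1)): 1 exactly
```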
You may wonder, if we can use ordered pairs, can we use triples, or longer lists? Yes, this method can be made to work with those too. And we can still think in terms of centre, length, and orientation. The orientation can go off in all sorts of directions, instead of just two. But for my purposes, I’ll just stick with two.
You might also ask, can we set P(X) = 1⁄2 +- 1⁄2? No, this method just won’t handle it. A restriction of this method is that neither of the pair can be 0 or 1, except when they’re both 0 or both 1. The way we will be using these intervals, 1⁄2 +- 1⁄2 would be the extreme case of ambiguity aversion. 1⁄2 +- 1⁄10 represents a lesser amount of ambiguity aversion, a sort of compromise between worst-case and average-case behaviour.
To decide among bets (having the same two outcomes), compute their probability intervals. Sometimes, the intervals will not overlap. Then it’s unambiguous which is more likely, so it’s clear what to pick. In general, whether they overlap or not, pick the one with the largest minimum—though we will see there are three caveats when they do overlap. If P(X) = 1⁄2 +- 1⁄10, we would be indifferent between a bet on X and on not X: the minimum is 2⁄5 in either case. If P(Y) = 1⁄2 exactly, then we would strictly prefer a bet on Y to a bet on X.
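In code, continuing the sketch above, the decision rule is just a comparison of minima:

```python
from fractions import Fraction as F

def worst(p):
    # The minimum of an oriented interval (a, b).
    return min(p)

x = (F(3, 5), F(2, 5))   # P(X) = 1/2 +- 1/10
y = (F(1, 2), F(1, 2))   # P(Y) = 1/2 exactly

print(worst(x) == worst((x[1], x[0])))  # True: indifferent between X and not-X
print(worst(y) > worst(x))              # True: strictly prefer the bet on Y
```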
Which leads to the first caveat: sometimes, given two options, it’s strictly better to randomise. Let’s suppose Y represents a fair coin. So P(Y) = 1⁄2 exactly, as we said. But also, Y is independent of X. P(X and Y) = 1⁄4 +- 1⁄20, and so on. This means that P((X and not Y) or (Y and not X)) = 1⁄2 exactly also. So we’re indifferent between a bet on X and a bet on not X, but we strictly prefer the randomised bet.
In general, randomisation will be strictly better if you have two choices with overlapping intervals of opposite orientations. The best randomisation ratio will be the one that gives a bet with zero-length interval.
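Here’s a sketch of this caveat, reusing half and worst from above; best_ratio is my name for solving the zero-length condition, and it assumes the two intervals have opposite orientations:

```python
from fractions import Fraction as F
# (reuses half() and worst() from the sketches above)

def mix(t, p, q):
    # Randomise: bet on p with probability t, on q with probability 1 - t.
    return (t * p[0] + (1 - t) * q[0], t * p[1] + (1 - t) * q[1])

def best_ratio(p, q):
    # Solve t*half(p) + (1-t)*half(q) = 0; needs opposite orientations.
    return half(q) / (half(q) - half(p))

x, not_x = (F(3, 5), F(2, 5)), (F(2, 5), F(3, 5))
t = best_ratio(x, not_x)                   # Fraction(1, 2): the fair coin
print(mix(t, x, not_x))                    # (1/2, 1/2): zero-length interval
print(worst(mix(t, x, not_x)) > worst(x))  # True: randomising is strictly better
```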
Now let us reconsider the Ellsberg urn. We did say the urn can be a metaphor for various situations. Generally these situations will not be symmetrical. But, even in symmetrical scenarios, we should still re-think how we apply the principle of indifference. I argue that the underlying idea is really this: if our information has a symmetry, then our decisions should have that same symmetry. If we switch green and blue, our information about the Ellsberg urn doesn’t change. The situation is indistinguishable, so we should behave the same way. It follows that we should be indifferent between a bet on green and a bet on blue. Then, for the Bayesian, it follows that P(red) = P(green) = P(blue) = 1⁄3. Period.
But for us, there is a degree of freedom, even in this symmetrical situation. We know what the probability of red is, so of course P(red) = 1⁄3 exactly. But we can set, say[7], P(green) = 1⁄3 +- 1⁄9, and P(blue) = 1⁄3 -+ 1⁄9. So we get P(red or green) = 2⁄3 +- 1⁄9, P(red or blue) = 2⁄3 -+ 1⁄9, P(green or blue) = 2⁄3 exactly, and of course P(red or green or blue) = 1 exactly.
So: red is 1⁄3 exactly, but the minimum of green is 2⁄9. (green or blue) is 2⁄3 exactly, but the minimum of (red or blue) is 5⁄9. So choose red over green, and (green or blue) over (red or blue). That’s the paradoxical behaviour. Note that neither pair of choices offered in the Ellsberg paradox has the type of overlap that favours randomisation.
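The same arithmetic, as a continuation of the Python sketch:

```python
from fractions import Fraction as F
# (reuses add() and worst() from the sketches above)

red   = (F(1, 3), F(1, 3))   # 1/3 exactly
green = (F(4, 9), F(2, 9))   # 1/3 +- 1/9
blue  = (F(2, 9), F(4, 9))   # 1/3 -+ 1/9

red_or_blue   = add(red, blue)     # (5/9, 7/9): 2/3 -+ 1/9
green_or_blue = add(green, blue)   # (2/3, 2/3): 2/3 exactly

print(worst(red) > worst(green))                   # True: red over green
print(worst(green_or_blue) > worst(red_or_blue))   # True: green-or-blue over red-or-blue
```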
Once we have a decision procedure for the two-outcome case, then we can tack on any utility function, as I explained in the previous post. The result here is what you would expect: we get oriented expected utility intervals, obtained by multiplying the oriented probability intervals by the utility. When deciding, we pick the one whose interval has the largest minimum. So for example, a bet which pays 15U on red (using U for “utils”, the abstract units of measurement of the utility function) has expected utility 5U exactly. A bet which pays 18U on green has expected utility 6U +- 2U, whose minimum is 4U. So pick the bet on red over that.
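In the sketch, this is one more helper (eu is my name for it):

```python
# (reuses worst(), and red/green from the sketches above)
def eu(stake, p):
    # Expected utility interval of a bet paying `stake` utils on event p.
    return (stake * p[0], stake * p[1])

print(eu(15, red))    # (5, 5): 5U exactly
print(eu(18, green))  # (8, 4): 6U +- 2U, minimum 4U
print(worst(eu(15, red)) > worst(eu(18, green)))  # True: take the bet on red
```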
Operationally, probability is associated with the “fair price” at which we are willing to bet. A probability interval indicates that there is no fair price. Instead we have a spread: we buy bets at their low price and sell at their high price. At least, we do that if we have no outstanding bets, or more generally, if the expected utility interval on our outstanding bets has zero length. The second caveat is that if this interval has length, then it affects our price: we also sell bets of the same orientation at their low price, and buy bets of the opposite orientation at their high price, until the length of this interval is used up. The midpoint of the expected utility interval on our outstanding bets will be irrelevant.
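One way to express this pricing rule in code: accept a trade exactly when it does not lower the minimum of the total interval. This is my own reformulation of the caveat, not a separate rule; it reduces to the buy-low/sell-high spread when there are no outstanding bets:

```python
from fractions import Fraction as F
# (reuses add() from the sketches above)

def acceptable(position, bet_eu, price):
    # Accept iff the trade does not lower the minimum of the total interval.
    # Shifting the position's midpoint shifts both minima equally, so only
    # its length and orientation matter, as claimed above.
    new = add(position, (bet_eu[0] - price, bet_eu[1] - price))
    return min(new) >= min(position)

bet  = (F(3, 5), F(2, 5))              # a 1U bet on X, P(X) = 1/2 +- 1/10
flat = (F(0), F(0))
print(acceptable(flat, bet, F(2, 5)))  # True:  buy at the low price
print(acceptable(flat, bet, F(1, 2)))  # False: no fair price at 1/2

held = (F(-1, 10), F(1, 10))           # outstanding -+ oriented interval
print(acceptable(held, bet, F(3, 5)))  # True: the opposite orientation buys
                                       # at the high price, using up the length
```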
This can be confusing, so it’s time for an analogy.
Bootsianism
If you are Bayesian and risk-neutral (and if bets pay in “utils” rather than cash, you are risk-neutral by definition) then outstanding bets have no effect on further betting behaviour. However, if you are risk-averse, as is the most common case, then this is no longer true. The more money you’ve already got on the line, the less willing you will be to bet.
But besides risk attitude, there could also be interference effects from non-monetary payouts. For example, if you are dealing in boots, then you wouldn’t buy a single boot for half the price of a pair, and neither would you sell one of your boots for half the price of a pair. Unless you happened to already have unmatched boots; in that case, you would sell those at a lower price, or buy boots of the opposite orientation at a higher price, until you had no more unmatched boots. If you were otherwise risk-neutral with respect to boots, then your behaviour would not depend on the number of pairs you have, just on the number and orientation of your unmatched boots.
This closely resembles the non-Bayesian behaviour above. In fact, for the Ellsberg urn, we could just say that a bet on red is worth a pair of boots, a bet on green is worth two left boots, and a bet on blue is worth two right boots. Without saying anything further, it’s clear that we would strictly prefer red (a pair) over green (two lefts), but we would also strictly prefer green-or-blue (two pairs) over red-or-blue (one left and three rights). That’s the paradoxical behaviour, but you know you can’t money-pump boots.
A: I’ll buy that pair of boots for 30 zorkmids.
B: Okay, here’s your pair of boots.
A: And here’s your 30 zorkmids. Thank you.
B: Thank you. Say, didn’t you just buy an identical pair this morning?
A: Yeah, I did. Then a dingo ate the right one. I’ve got the left one here. Never worn.
B: How narratively convenient! How much would you sell it for?
A: Hmm, how about 10 zorkmids?
B: Really, 10 zorkmids? So, do you think right boots are more valuable than left boots?
A: No, of course not. Why?
B: Arbitrage!
A: Gesundheit.
B: Thanks. I’ll buy a left boot from you for 10 zorkmids.
A: Great! Here’s your left boot.
B: And here’s your 10 zorkmids. Thank you.
A: Thank you!
B: And I’ll buy a right boot from you for 10 zorkmids.
A: Errrm… Sorry? Why would I agree to that?
B: You just sold me a left boot for 10 zorkmids. Well, you yourself said rights aren’t more valuable than lefts. So, logically, you should be willing to sell me a right boot for 10 zorkmids.
A: What? No.
Boots’ rule
So much for the static case. But what do we do with new information? How do we handle conditional probabilities?
We still get P(A|B) by dividing P(A and B) by P(B). It will be easier to think in terms of pairs here. So for example P(red) = 1⁄3 exactly = (1/3,1/3) and P(red or green) = 2⁄3 +- 1⁄9 = (7/9,5/9), so P(red|red or green) = (3/7,3/5) = 18⁄35 -+ 3⁄35. And similarly P(green|red or green) = (1/3 +- 1⁄9)/(2/3 +- 1⁄9) = 17⁄35 +- 3⁄35.
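The same computation, continuing the sketch:

```python
from fractions import Fraction as F
# (reuses add(), and red/green from the sketches above)

def cond(p_ab, p_b):
    # P(A|B) = P(A and B) / P(B), componentwise.
    return (p_ab[0] / p_b[0], p_ab[1] / p_b[1])

red_or_green = add(red, green)    # (7/9, 5/9): 2/3 +- 1/9
print(cond(red, red_or_green))    # (3/7, 3/5): 18/35 -+ 3/35
print(cond(green, red_or_green))  # (4/7, 2/5): 17/35 +- 3/35
```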
This rule covers the dynamic passive case, where we update probabilities based on what we observe, before betting. The third and final caveat is in the active case, when information comes in between bets. Now, we saw that the length and orientation of the interval on expected utility of outstanding bets affects further betting behaviour. There is actually a separate update rule for this quantity. It is about as simple as it gets: do nothing. The interval can change when we make choices, and its midpoint can shift due to external events, but its length and orientation do not update.
You might expect the update rule for this quantity to follow from the way the expected utility updates, which follows from the way probability updates. But it has a mind of its own. So even if we are keeping track of our bets, we’d still need to keep track of this extra variable separately.
Sometimes it may be easier to think in terms of the total expected utility interval of our outstanding bets, but sometimes it may be easier to think of this in terms of having a “virtual” interval that cancels the change in the length and orientation of the “real” expected utility interval. The midpoint of this virtual interval is irrelevant and can be taken to always be zero. So, on update, compute the prior expected utility interval of outstanding bets, subtract the posterior expected utility interval from it, and add this difference to the virtual interval. Reset its midpoint to zero, keeping only the length and orientation.
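Since the virtual interval’s midpoint is always zero, a single signed half-length describes it completely. A sketch of this bookkeeping, with helper names of my own:

```python
# (reuses half() from the sketches above)
def update_virtual(virtual_half, prior_eu, posterior_eu):
    # On update, absorb whatever length and orientation the real
    # expected utility interval gained or lost; the midpoint is dropped.
    return virtual_half + (half(prior_eu) - half(posterior_eu))

def as_interval(virtual_half):
    # The virtual interval as a pair with midpoint zero, for adding
    # to real expected utility intervals when deciding.
    return (virtual_half, -virtual_half)
```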
That can also be confusing, so let’s have another analogy.
Yo’ mama’s so illogical...
I recently came across this example by Mark Machina:
M: Children, I only have one treat, I can only give it to one of you.
I: Me, mama!
J: No, give it to me!
M: No. Rather than give it to either of you, it’s better if I toss a coin. Heads, it goes to Irina, tails, it goes to Joey.
…
M: Heads. Irina gets it.
J: But mama!
M: Fair is fair.
I: Yeah Joey!
J: But mama, you yourself said it’s better to toss a coin than to give it to either of us. So, logically, instead of giving it to Irina you should toss a coin again.
M: Nice try, Joey.
Instead of giving the treat to either child, she strictly prefers to toss a coin and give the treat to the winner. But after the coin is tossed, she strictly prefers to give the treat to the winner rather than toss again.
This cannot be explained in terms of maximising expected utility, in the typical sense of “utility”. And of course only known probabilities are involved here, so there’s no question as to whether her beliefs are probabilistically sophisticated or not. But it could be said that she is still maximising the expected value of an extended objective function. This extended objective function does not just consider who gets a treat, but also considers who “had a fair chance”. She is unfair if she gives the treat to either child outright, but fair if she tosses a coin. That fairness doesn’t go away when the result of the coin toss is known.
Or something like that. There are surely other ways of dissecting the mother’s behaviour. But no matter what, it’s going to have to take the coin toss into account, even though the coin, in and of itself, has no relevance to the situation.
Let’s go back to the urn. Green and blue have the type of overlap that favours randomisation: P((green and heads) or (blue and tails)) = 1⁄3 exactly. A bet paying 9U on this event has expected utility of 3U exactly. Let’s say we took this bet. Now say the coin comes up heads. We can update the probabilities as per above. The answer is that P(green) = 1⁄3 +- 1⁄9 as it was before. That makes sense because it’s an independent event: knowing the result of the coin toss gives no information about the urn. The difference is that we now have an outstanding bet that pays 9U if the ball is green. The expected utility would therefore be 3U +- 1U. Except, the expected utility interval was zero-length before the coin was tossed, so it remains zero-length. Equivalently, the virtual interval becomes -+ 1U, so that the effective total is 3U exactly. (In this example, the midpoint of the expected utility interval didn’t change either. That’s not generally the case.) A bet randomised on a new coin toss would have expected utility 3U, plus the virtual interval of -+ 1U, for an effective total of 3U -+ 1U. So we would strictly prefer to keep the bet on green rather than re-randomise.
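Here is the whole episode, continuing the Python sketch. Independence is modelled by componentwise multiplication, which matches the P(X and Y) example earlier; prod is my name for it:

```python
from fractions import Fraction as F
# (reuses add(), half(), worst(), eu(), green, blue, and the virtual-interval
#  helpers from the sketches above)

def prod(p, q):
    # Joint probability of independent events, componentwise.
    return (p[0] * q[0], p[1] * q[1])

heads = tails = (F(1, 2), F(1, 2))   # fair coin, independent of the urn

# Before the toss: 9U on (green and heads) or (blue and tails).
prior_eu = add(eu(9, prod(green, heads)), eu(9, prod(blue, tails)))
print(prior_eu)                      # (3, 3): 3U exactly

# Heads: the outstanding bet is now 9U on green.
post_eu = eu(9, green)               # (4, 2): 3U +- 1U
v = update_virtual(F(0), prior_eu, post_eu)      # -1: a -+ 1U virtual interval

keep        = add(post_eu, as_interval(v))       # (3, 3): 3U exactly
rerandomise = add((F(3), F(3)), as_interval(v))  # (2, 4): 3U -+ 1U
print(worst(keep) > worst(rerandomise))          # True: keep the bet on green
```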
Let’s compare this with a trivial example: let’s say we took a bet that pays 9U if the ball drawn from the urn is green. The expected utility of this bet is 3U +- 1U. For some unrelated reason, a coin is tossed, and it comes up heads. The coin has nothing to do with the urn or my bet. I still have a bet of 9U on green, and its expected utility is still 3U +- 1U.
But the difference between these two examples is just in the counterfactual: if the coin had come up tails, in the first example I would have had a bet of 9U on blue, and in the second example I would have had a bet of 9U on green. But the coin came up heads, and in both examples I end up with a bet of 9U on green. The virtual interval has some spooky dependency on what could have happened, just like “had a fair chance”. It is the ghost of a departed bet.
I expect many on LW are wondering what happened. There was supposed to be a proof that anything that isn’t Bayesian can be punished. Actually, this threat comes with some hidden assumptions, which I hope these analogies have helped to illustrate. A boot is an example of something which has no fair price, even if a pair of boots has one. A mother with two children and one treat is an example where some counterfactuals are not forgotten. The hidden assumptions fail in our case, just as they can fail in these other contexts where Bayesianism is not at issue. This can be stated more rigorously[8], but that is basically how it’s possible. Now We Know. And Knowing is Half the Battle.
Notes

1. Taken almost verbatim from Eliezer Yudkowsky’s post on the Allais paradox.

2. And footnotes pointing to some tangentially relevant journal articles make me sound extra credible.

3. For Choquet expected utility see: D. Schmeidler, Subjective probability and expected utility without additivity, Econometrica 57 (1989), pp. 571-587.

4. For maxmin expected utility see: I. Gilboa and D. Schmeidler, Maxmin expected utility with a non-unique prior, J. Math. Econ. 18 (1989), pp. 141-153.

5. For multiplier preferences see: L.P. Hansen and T.J. Sargent, Robust control and model uncertainty, Amer. Econ. Rev. 91 (2001), pp. 60-66.

6. For variational preferences see: F. Maccheroni, M. Marinacci, and A. Rustichini, Dynamic variational preferences, J. Econ. Theory 128 (2006), pp. 4-44.

7. Any length between 0 and 1⁄3 works. But here’s where I pulled 1⁄9 from: a Bayesian might assign exactly 1⁄61 prior probability to each of the 61 possible urn compositions, and the result is roughly approximated by the Laplacian rule of succession, which prescribes a pseudocount of one green and one blue ball. A similar thing with probability intervals is roughly approximated by using a pseudocount of 3⁄2 +- 1⁄2 green and 3⁄2 -+ 1⁄2 blue balls.

8. To quickly relate this back to Savage’s rules: rules 1 and 3 guarantee that there’s no static money pump. Rule 2 is then supposed to guarantee that there is no dynamic money pump, but it is stronger than necessary for that purpose. I claim that this method obeys rules 1, 3, and a weaker version of rule 2, and that it is dynamically consistent. For dynamic consistency of variational preferences in general, see the references above. This method is a special case, for which I wrote up a simpler proof.
Appendix A: method summary
Events are assigned a pair of prior probabilities, which can also be thought of as an oriented probability interval. e.g. (3/5,2/5) can also be thought of as 1⁄2 +- 1⁄10.
Neither side of the pair can be 0 or 1, except when they’re both 0 or both 1.
Each side of the pair is additive: if A and B are disjoint, and P(A) = (x,y), and P(B) = (u,v), then P(A or B) = (x+u,y+v).
Each side of the pair updates by Bayes’ rule: if P(A and B) = (x,y), and P(B) = (u,v), then P(A|B) = (x/u,y/v).
Given a utility function, each bet will then have an expected utility interval: multiply the probability intervals by the utility for each possible outcome.
There is also a virtual expected utility interval to keep track of. The midpoint of this interval is always zero.
When updating the virtual expected utility interval, compute the prior expected utility interval of the outstanding bet(s), subtract the posterior expected utility interval from it, and add this difference to the virtual expected utility interval. Throw away the midpoint (reset the midpoint of the interval to zero, keeping just the length and orientation).
To decide among bets: compute the expected utility intervals of each of them—including already outstanding bets, and including the virtual expected utility interval. Rank them according to the minimum values of the intervals.
Implicitly when presented with options we are also presented with the option to randomise among them, and sometimes this is strictly better than any of the pure options.
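Putting Appendix A together, here is a minimal self-contained sketch of the whole method. All names are mine; bets are maps from outcomes to utils:

```python
from fractions import Fraction as F

def add(p, q):        return (p[0] + q[0], p[1] + q[1])
def half(p):          return (p[0] - p[1]) / 2
def cond(p_ab, p_b):  return (p_ab[0] / p_b[0], p_ab[1] / p_b[1])

def eu(payoffs, probs):
    # Expected utility interval: sum of payoff-weighted probability pairs.
    total = (F(0), F(0))
    for outcome, u in payoffs.items():
        p = probs[outcome]
        total = add(total, (u * p[0], u * p[1]))
    return total

def decide(options, virtual_half):
    # Rank expected utility intervals by their minimum, with the
    # zero-midpoint virtual interval included in each total.
    def score(o):
        return min(add(o, (virtual_half, -virtual_half)))
    return max(options, key=score)

probs = {'red':   (F(1, 3), F(1, 3)),
         'green': (F(4, 9), F(2, 9)),
         'blue':  (F(2, 9), F(4, 9))}
bet_red   = eu({'red': 9}, probs)          # (3, 3)
bet_green = eu({'green': 9}, probs)        # (4, 2)
print(decide([bet_red, bet_green], F(0)))  # (3, 3): the bet on red
```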
Appendix B: obligatory image for LW posts on this topic