The probability of drawing a blue ball is 1⁄3, as is that of drawing a green ball.
I’d insist that my preferences are {} < {Red} = {Green} = {Blue} < {Red, Green} = {Red, Blue} = {Blue, Green} < {Red, Green, Blue}. There’s no reason to prefer Red to Green: the possibility of there being few Green balls is counterbalanced by the possibility of there being close to 200 of them.
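For concreteness, here is a quick check of that symmetry claim. This is a minimal sketch: the uniform prior over n (the unknown number of Green balls) is my illustrative assumption, but any prior symmetric between Green and Blue gives the same answer.

```python
# Urn: 100 red balls plus 200 balls split between green and blue.
# Under a uniform prior on n = number of green balls, the expected
# chance of drawing green (or blue) equals the known 1/3 for red.
from fractions import Fraction

N_RED, N_OTHER = 100, 200
TOTAL = N_RED + N_OTHER  # 300 balls in all

p_red = Fraction(N_RED, TOTAL)
p_green = sum(Fraction(n, TOTAL) for n in range(N_OTHER + 1)) / (N_OTHER + 1)
p_blue = sum(Fraction(N_OTHER - n, TOTAL) for n in range(N_OTHER + 1)) / (N_OTHER + 1)

assert p_red == p_green == p_blue == Fraction(1, 3)
```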
ETA: Well, there are situations in which your preference order is a good idea, such as when there is an adversary changing the colours of the balls in order to make you lose. They can’t touch Red without being found out; they can only change the relative numbers of Blue and Green. But in that case, choosing the colour that makes you win isn’t the only effect of an action—it also affects the colours of the balls, so you need to take that into account.
So the true state space would be {Ball Drawn = i} for each value of i in [1..300]. The contents of the urn are chosen by the adversary, to be {Red = 100, Green = n, Blue = 200 - n} for n in [0..200]. When you take the action {Green}, the adversary sets n to 0, so that action maps all {Ball Drawn = i} to {Lose}. And so on. Anyway, I don’t think this is a counter-example for that reason: you’re not just deciding the winning set, you’re affecting the balls in the urn.
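To make the adversarial version concrete, here is a small sketch, assuming the adversary gets to set n after seeing which colour you bet on:

```python
# Adversarial urn: the adversary fixes n (number of green balls, with
# 200 - n blue and 100 red) to minimise your chance of winning.
from fractions import Fraction

def win_prob(bet, n):
    counts = {"Red": 100, "Green": n, "Blue": 200 - n}
    return Fraction(counts[bet], 300)

for bet in ("Red", "Green", "Blue"):
    worst = min(win_prob(bet, n) for n in range(201))
    print(bet, worst)  # Red 1/3, Green 0, Blue 0
```

Red guarantees 1/3, while a bet on Green or Blue can be driven to 0, which is why betting Red is a good idea against this kind of adversary.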
I see. No, that’s not the kind of adversary I had in mind when I said that.
How about a four-state example. The states are { (A,Heads), (A,Tails), (B,Heads), (B,Tails) }.
The outcomes are { Win, Lose }. I won’t list all 16 actions; suffice it to say that by P1 you must rank them all. In particular, you must rank the actions X = { (A,Heads), (A,Tails) }, Y = { (B,Heads), (B,Tails) }, U = { (A,Heads), (B,Tails) }, and V = { (A,Tails), (B,Heads) }. Again I’m writing actions as events, since there are only two outcomes.
To motivate this, consider the game where you and your (non-psychic, non-telekinetic, etc.) adversary are to simultaneously reveal A or B; if you pick the same, you win; if not, your adversary wins. You are at a point in time where your adversary has written “A” or “B” on a piece of paper face down, and you have not. You have also flipped a coin, which you have not looked at (and are not required to look at, or show your adversary). Therefore the above four states do indeed capture all the state information, and the four actions I’m singling out correspond to: you ignore the coin and write “A”, or ignore it and write “B”; or else you decide to base what you write on the flip of the coin, one way or the other. As I say, by P1, you must rank these.
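For readers who want the arithmetic, the sketch below computes each action’s win probability, assuming the coin is fair and independent of the letter, and leaving the prior P(adversary wrote “A”) = p as a free parameter:

```python
# Win probabilities of the four actions from the text. States are
# (letter, coin); an action is identified with its winning event.
from fractions import Fraction

ACTIONS = {
    "X": {("A", "H"), ("A", "T")},  # always write A
    "Y": {("B", "H"), ("B", "T")},  # always write B
    "U": {("A", "H"), ("B", "T")},  # follow the coin one way
    "V": {("A", "T"), ("B", "H")},  # follow it the other way
}

def p_win(event, p):
    """P(win) with a fair coin independent of the letter, P(A) = p."""
    prob = {("A", "H"): p / 2, ("A", "T"): p / 2,
            ("B", "H"): (1 - p) / 2, ("B", "T"): (1 - p) / 2}
    return sum(prob[s] for s in event)

for p in (Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)):
    print(p, {name: p_win(ev, p) for name, ev in ACTIONS.items()})
# U and V come out to exactly 1/2 for every p, while X and Y are
# p and 1 - p. So no single prior makes U strictly better than both
# X and Y: a strict preference for the coin can't be expected utility
# with respect to any one subjective probability.
```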
Me, I’ll take the coin, thanks. I rank X=Y<U=V. I just violated P2. Am I really irrational?
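To spell out the violation (P2 is Savage’s sure-thing principle), the check below enumerates the four states:

```python
# X and U agree on every Heads state; on Tails, X wins iff the letter
# is A while U wins iff it is B. V and Y also agree on Heads; on
# Tails, V wins iff A and Y wins iff B. P2 then demands X >= U
# exactly when V >= Y, but the ranking above has U > X and V > Y.
STATES = [("A", "H"), ("A", "T"), ("B", "H"), ("B", "T")]
X = {("A", "H"), ("A", "T")}
Y = {("B", "H"), ("B", "T")}
U = {("A", "H"), ("B", "T")}
V = {("A", "T"), ("B", "H")}

heads = [s for s in STATES if s[1] == "H"]
tails = [s for s in STATES if s[1] == "T"]

assert all((s in X) == (s in U) for s in heads)  # X, U agree on Heads
assert all((s in Y) == (s in V) for s in heads)  # Y, V agree on Heads
assert [s in X for s in tails] == [s in V for s in tails]  # on Tails, X matches V
assert [s in U for s in tails] == [s in Y for s in tails]  # on Tails, U matches Y
```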
And even if you think I am, one of the questions originally asked was how things could be justified by Dutch book arguments or the like. So the Ellsberg paradox and its variants are still relevant to that question, normative arguments aside.
So P2 doesn’t apply in this example. Why not? Well, the reason you prefer to use the coin is that you suspect the adversary to be some kind of predictor, who is slightly more likely to write down a B if you just write down A (ignoring the coin). That’s not something captured by the state information here. You clearly don’t think that (A,Tails) is simultaneously more and less likely than (B,Tails), just that the action you choose can have some influence on the outcome. I think it might be that if you expanded the state space to include a predictor with all the possibilities of what it could do, P2 would hold again.
That isn’t the issue. At the point in time I am talking about, the adversary has already made his non-revealed choice (and he is not telekinetic). There is no other state.
Tails versus Heads is objectively 1:1 resulting from the toss of a fair coin, whereas A versus B has an uncertainty that results from my adversary’s choice. I may not have reason to think that he will choose A over B, so I can still call it 1:1, but there is still a qualitative distinction between uncertainty and randomness, or ambiguity and risk, or objective and subjective probability, or whatever you want to call it, and it is not irrational to take it into account.
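One way to make this precise, though the framing is my addition rather than anything claimed above, is maxmin expected utility over a set of priors: keep the coin objectively fair, let P(A) range over an interval to represent the ambiguity about the adversary, and rank actions by their worst-case win probability. That reproduces the ranking X=Y<U=V:

```python
# Maxmin sketch: the coin is fair, but P(letter = A) is only known to
# lie in [1/4, 3/4] (an arbitrary illustrative interval). Each action
# is scored by its worst-case win probability over these priors.
from fractions import Fraction

ACTIONS = {
    "X": {("A", "H"), ("A", "T")}, "Y": {("B", "H"), ("B", "T")},
    "U": {("A", "H"), ("B", "T")}, "V": {("A", "T"), ("B", "H")},
}

def p_win(event, p):
    prob = {("A", "H"): p / 2, ("A", "T"): p / 2,
            ("B", "H"): (1 - p) / 2, ("B", "T"): (1 - p) / 2}
    return sum(prob[s] for s in event)

priors = [Fraction(k, 100) for k in range(25, 76)]
worst = {name: min(p_win(ev, p) for p in priors) for name, ev in ACTIONS.items()}
print(worst)  # X and Y bottom out at 1/4; U and V stay at exactly 1/2
```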
I have to admit, this ordering seems reasonable… for the reasons nshepperd suggests. Just saying that he’s not telepathic isn’t enough to say he’s not any sort of predictor—after all, I’m a human, I’m bad at randomizing, maybe he’s played this game before and compiled statistics. Or he just has a good idea of how people tend to think about this sort of thing. So I’m not sure you’re correct in your conclusion that this isn’t the issue.
Then I claim that a non-psychic predictor, no matter how good, is very different from a psychic.
The powers of a non-psychic predictor are entirely natural and causal. Once he has written down his hidden choice, he becomes irrelevant. If this isn’t clear, we can make an analogy with the urn example. After the ball is drawn but before its colour is revealed, the contents of the urn are irrelevant. As I pointed out, the urn could even be destroyed before the colour of the ball is revealed, so that the ball’s colour truly is the only state. Similarly, after the predictor writes his choice but before it is revealed, he might accidentally behead himself while shaving.
Now of course your beliefs about the talents of the late predictor might inform your beliefs about his hidden choice. But that’s the only way they can possibly be relevant. The coin and the predictor’s hidden choice on the paper really are the only states of the world now, and your own choice is free and has no effect on the state. So, if you display a strict preference for the coin, then your uncertainty is still not captured by subjective probability. You still violate P2.
To get around this, it seems you would have to posit some residual entanglement between your choice and the external state. To me this sounds like a strange thing to argue. But I suppose you could say your cognition is flawed in a way that is invisible to you, yet was visible to the clever but departed predictor. So, you might argue that, even though there is no actual psychic effect, your choice is not really free, and you have to take into account your internalities in addition to the external states.
My question then would be, does this entanglement prevent you from having a total ordering over all maps from states (internal and external) to outcomes? If yes, then P1 is violated. If no, then can I not just ask you about the ordering of the maps which only depend on the external states, and don’t we just wind up where we were?
Well, that sounds irrational. Why would you pay to switch from X to U, a change that makes no difference to the probability of you winning?

Because there might be more to uncertainty than subjective probability.
Let’s take a step back.
Yes, if you assume that uncertainty is entirely captured by subjective probability, then you’re completely right. But if you assume that, then you wouldn’t need the Savage axioms in the first place. The Savage axioms are one way of justifying this assumption (as well as expected utility). So, what justifies the Savage axioms?
One suggestion the original poster made was to use Dutch book arguments, or the like. But now here’s a situation where there does seem to be a qualitative difference between a random event and an uncertain event, where there is a “reasonable” thing to do that violates P2, and where nothing like a Dutch book argument seems to be available to show that it is suboptimal.
I hope that clarifies the context.
EDIT: I put “reasonable” in scare-quotes. It is reasonable, and I am prepared to defend that. But it isn’t necessary to believe it is reasonable to see why this example matters in this context.