This breaks down somewhat when A and B are not axiomatically preferred. If instead they are preferred because they enable other states, in conjunction with other actions and resources down the line, then it is entirely possible that certainty of B is preferable to the lottery, which leaves you unable to commit to other actions contingent on the longer-term states A or B until it decides.
This may be one reason humans evolved to be somewhat risk averse, especially because in real situations the resources in question include our mental and physical resources.
This all comes back to the self-reference of the preference function. If you add the lottery, you change the circumstances under which you computed A > B, and even the meta-preference computation that said determining this was preferable to other things you could have done instead.
Often this won’t make a difference, but often is not equivalent to always. It’s important to know the limits to these sorts of ideas.
This might be a good argument for the general preferences shown by the Allais paradox. If you strictly prefer 2B to 2A, you might nonetheless have a reason to prefer 1A to 1B—you could leverage your certainty to perform actions contingent on actually having $24,000. This might only work if the payoff is not immediate—you can take a loan based on the $24,000 you’ll get in a month, but probably not on the 34% chance of $24,000.
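For concreteness, here’s a minimal sketch (assuming the payoff numbers from the original Allais post, which I believe were: 1A a certain $24,000; 1B a 33/34 chance of $27,000; 2A a 34% chance of $24,000; 2B a 33% chance of $27,000). In raw expected dollars, 1B and 2B both win, so any case for 1A has to come from what certainty buys you:

```python
# Toy check of the standard Allais payoffs (assumed from the original
# post, not stated in this thread). Expected dollars alone favor 1B
# and 2B; the argument for 1A rests on leveraging certainty.
def expected_value(outcomes):
    """Expected dollar value of a lottery given (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

gamble_1a = [(1.0, 24_000)]                # certain $24,000
gamble_1b = [(33/34, 27_000), (1/34, 0)]   # 33/34 chance of $27,000
gamble_2a = [(0.34, 24_000), (0.66, 0)]    # 34% chance of $24,000
gamble_2b = [(0.33, 27_000), (0.67, 0)]    # 33% chance of $27,000

for name, g in [("1A", gamble_1a), ("1B", gamble_1b),
                ("2A", gamble_2a), ("2B", gamble_2b)]:
    print(name, round(expected_value(g), 2))
# 1A 24000.0 / 1B 26205.88 / 2A 8160.0 / 2B 8910.0
```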
Fine, there could be a good reason to strictly prefer 1A to 1B, but then if you do, how do you justify preferring 2B to 2A?
Because there’s a larger jump in expected utility between certainty (up to breach of contract, etc.) of future money and 99% than between n% and (n-1)% for any n < 100. However, this means that the outcome of 1A and the winning outcome of 2A are no longer the same (both involve obtaining money at time t_1, but 1A also includes obtaining, at t_0, certainty of future money), and choosing 1A and 2B becomes unproblematic.
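To see that this really dissolves the inconsistency, here’s a toy model (mine, purely illustrative, again assuming the standard payoff numbers) where utility ranges over world-histories that include the knowledge state at t_0, and certainty carries a flat instrumental bonus standing in for things like loan leverage:

```python
# Toy model (illustrative only): utility is over world-histories that
# include the knowledge state at t_0. Only 1A grants certainty at t_0;
# the winning outcome of 2A does not, so the two outcomes differ.
CERTAINTY_BONUS = 2_500  # hypothetical instrumental value of certainty

def utility(dollars, certain_at_t0=False):
    return dollars + (CERTAINTY_BONUS if certain_at_t0 else 0)

u_1a = utility(24_000, certain_at_t0=True)   # 26500
u_1b = (33/34) * utility(27_000)             # ~26205.88
u_2a = 0.34 * utility(24_000)                # 8160.0
u_2b = 0.33 * utility(27_000)                # 8910.0

# One expected-utility maximizer, both Allais preferences, no paradox:
assert u_1a > u_1b and u_2b > u_2a
```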
Unless I misunderstood, most of your comment was just another justification for preferring 1A to 1B.
It doesn’t seem to support simultaneously preferring 2B to 2A. Further, as near as I can tell, none of what you’re saying stops the vulnerability that’s opened up by having those two preferences simultaneously. I.e. the preference reversal issue is still there and still exploitable.
Haven’t followed too closely, but I think Nick’s saying that the preference reversal issue doesn’t apply and that’s OK, because as we’ve defined it now 2A is no longer the same thing as a 34% chance of 1A and a 66% chance of nothing, because in the context of what thomblake said we’re assuming you get the information at different times. (We’re assuming the 34% chance is not for your being certain now of getting 1A, but for your being certain only later of getting 1A, which breaks the symmetry.)
Yes, that’s what I meant.
Yes to what Nick Tarleton said. I didn’t give a justification for preferring 2B to 2A because I was willing to assume that, and then gave reasons for nonetheless preferring 1A to 1B. There are things that certainty can buy you.
Also yes to what steven0461 said. While you can reverse the symmetry, you can’t reverse it twice—once you’ve given me certainty, you can’t take it away again (or at least, in this thought experiment, I won’t be willing to give it up).
Eliezer’s money-pump might still work once (thus making it not so much a money-pump) but inasmuch as you end up buying certainty for a penny, I don’t find it all that problematic.
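Roughly, and hedging since the exact trade sequence isn’t spelled out here, my reconstruction of the one-shot version looks like this:

```python
# Toy reconstruction (my own, not a quote): you hold 2A and pay a penny
# to swap into 2B. Viewing 2B as a compound lottery (34% chance of
# proceeding to 1B), if the first stage hits, your 2B has become 1B,
# and you pay one more penny to swap back into the now-certain 1A.
for first_stage_hits in (True, False):
    pennies = 1                          # swap 2A -> 2B up front
    if first_stage_hits:
        pennies += 1                     # swap 1B -> 1A: buying certainty
        holding = "1A (certain $24,000)"
    else:
        holding = "nothing"
    print(f"first stage hits={first_stage_hits}: hold {holding}, paid {pennies}c")
# Certainty, once bought, isn't resold, so the cycle never repeats:
# a one-time fee rather than a money-pump.
```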
Sorry, maybe it’s because I’m running on insufficient sleep, but I don’t understand what you’re saying here. Mind rephrasing your objection? Thanks.
I’ll try a concrete example. Note that fuzziness of goals isn’t the problem; the problem is that the consequences for your other priorities are different when choosing between the lottery and B than when choosing between A and B.
Let’s say A and B are lots of land on which you could build your new Human Instrumentality Lab. You’ve checked things out and you somewhat prefer lot A to lot B. You get the option to (1) definitely get lot B, or (2) go in on a lottery-type auction and get a chance at either lot. In either case, you’ll get the lot at the end of the month.
If you go with (1) you can get the zoning permits and get your architect started right now. If you go with (2) you can try that, but you may need to backtrack or do twice the work. That may not be worth it if you don’t prefer lot A enough.
Now obviously this isn’t an issue if the knowledge of the outcome of the lottery is instantaneous. But you can’t assume that you immediately know the outcomes of all your gambles.
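With made-up numbers (purely illustrative), the head start can outweigh how much more you like lot A:

```python
# Toy numbers (mine) for the land-lot example: starting permits and the
# architect now is worth a head start; the lottery forces you either to
# wait idle or to do duplicate prep work for both lots.
V_LOT_A = 100        # how much you value building on lot A
V_LOT_B = 95         # slightly less preferred lot B
HEAD_START = 10      # value of starting permits immediately
DUPLICATE_PREP = 8   # cost of preparing for both lots at once

p = 0.5  # chance the lottery-auction hands you lot A

value_option_1 = V_LOT_B + HEAD_START                  # certain B, start now
value_lottery_wait = p * V_LOT_A + (1 - p) * V_LOT_B   # idle until month's end
value_lottery_prep = value_lottery_wait + HEAD_START - DUPLICATE_PREP

print(value_option_1, value_lottery_wait, value_lottery_prep)
# 105 vs 97.5 vs 99.5: certain B beats the lottery either way,
# even though lot A alone beats lot B.
```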
What he seems to be saying is that there are situations where, although you prefer A > B, the uncertainty and the time it takes the lottery to settle change things, so your new preference would be A > B > (pA + (1-p)B).
EDIT: It occurred to me that this would depend somewhat on the value of p, and on the relative values of A and B. But for low values of p and a fairly long time to settle the probabilities, B would often be valued above the lottery.
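A quick formalization (my own, assuming a flat cost c for the time spent waiting on the outcome): the lottery beats certain B exactly when p * (V(A) - V(B)) > c, so low p or a long delay favors B.

```python
# Sketch of the threshold, under the assumption of a flat waiting cost c:
#   p*V(A) + (1-p)*V(B) - c > V(B)   <=>   p * (V(A) - V(B)) > c
def lottery_beats_b(p, v_a, v_b, delay_cost):
    return p * v_a + (1 - p) * v_b - delay_cost > v_b

print(lottery_beats_b(p=0.2, v_a=100, v_b=95, delay_cost=3))  # False: take B
print(lottery_beats_b(p=0.8, v_a=100, v_b=95, delay_cost=3))  # True: gamble
```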
Well, if B is defined sufficiently precisely, i.e., “have X money at time Y”, then B shouldn’t be valued above the lottery, which, even if the loss happens, produces the exact same outcome.
I.e., unless I misunderstand, the objection only arises out of being a bit fuzzy about what B precisely means, letting the B in the lottery be a different B than the, well, regular B.
Would you agree with that interpretation of things, or am I missing something critical here?
I think you’re right. I meant mainly that a lot depends on the specifics of the situation, so even with A > B, it is not necessarily irrational to prefer B to the lottery.
I think Nick Tarleton refuted this in the other subthread—a lottery here means a lottery over states of the world, which include your knowledge state, so if you get your knowledge of the outcome later it’s not really the same thing.
It’s still true that this is a reason to disprefer realistic lotteries where you learn the outcome later, but maybe this is better termed “unpredictability aversion” than “risk aversion”? After all, it can happen even when all lottery outcomes are equally desirable. (Example: you like soup and potatoes equally, but prefer either to a lottery over them because you want to know whether to get a spoon or a fork.)
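As a toy illustration of the utensil point (my own model, nothing canonical), let utility range over (meal, knew-in-advance) pairs:

```python
# Sketch of "unpredictability aversion": soup and potatoes are equally
# desirable, but not knowing which is coming costs you the right utensil,
# so the lottery is worth less than either outcome.
def utility(meal, knew_in_advance):
    base = {"soup": 10, "potatoes": 10}[meal]     # equally desirable
    return base if knew_in_advance else base - 1  # wrong-utensil penalty

u_soup = utility("soup", knew_in_advance=True)          # 10
u_potatoes = utility("potatoes", knew_in_advance=True)  # 10
u_lottery = 0.5 * utility("soup", False) + 0.5 * utility("potatoes", False)

assert u_soup == u_potatoes > u_lottery  # 10 == 10 > 9
```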
(In that link, I’m actually just restating Thom Blake’s argument.)
Thanks for the link!
Okay. I’d say then that case is comparing B with a lottery involving some different B’.
(I.e., it’s like saying x=x is sometimes false if the x on the left is 2 and the one on the right is 3. Of course 2 is not equal to 3, but that’s not a counterexample to x=x; rather, it’s a case of ignoring what we actually mean by using the same variable name on both sides.)