Sorry, maybe it’s because I’m running on insufficient sleep, but I don’t understand what you’re saying here. Mind rephrasing your objection? Thanks.
I’ll try a concrete example. Note that fuzziness of goals isn’t the problem; the problem is that the consequences for your other priorities differ when you choose between the lottery and B versus when you choose between A and B.
Let’s say A and B are lots of land on which you could build your new Human Instrumentality Lab. You’ve checked things out and you somewhat prefer lot A to lot B. You get the option to (1) definitely get lot B, or (2) go in on a lottery-type auction and get a chance at either lot. In either case, you’ll get the lot at the end of the month.
If you go with (1), you can get the zoning permits and get your architect started right now. If you go with (2), you can try that, but you may need to backtrack or do twice the work. It may not be worth it if you don’t prefer lot A enough.
Now, obviously this isn’t an issue if you learn the outcome of the lottery instantly. But you can’t assume that you immediately know the outcomes of all your gambles.
What he seems to be saying is that there are situations where, although you prefer A > B, the uncertainty and the time it takes the lottery to resolve change things, so your new preference would be A > B > (pA + (1-p)B).
EDIT: It occurred to me that this depends somewhat on the value of p, and on the relative values of A and B. But for low values of p and a fairly long time for the probabilities to resolve, B would often be valued more highly than the lottery.
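The point above can be sketched numerically. This is a toy model with made-up utilities; the `delay_cost` term is a hypothetical stand-in for the backtracking and duplicated prep work incurred while the lottery is unresolved, and none of the numbers come from the discussion itself.

```python
# Toy model: B can beat a lottery over A and B even though A > B,
# once the cost of not knowing the outcome is charged to the lottery.
# All numbers are hypothetical.
u_A, u_B = 10.0, 9.0   # utilities of the lots themselves (A is preferred to B)
delay_cost = 0.5       # cost of waiting / duplicated prep while unresolved

def lottery_value(p):
    """Expected utility of the lottery, minus the cost of the unresolved period."""
    return p * u_A + (1 - p) * u_B - delay_cost

print(lottery_value(0.10))  # 8.6  -> worse than taking B outright (9.0)
print(lottery_value(0.95))  # 9.45 -> better than B; a high enough p rescues the lottery
```

As the EDIT says, whether B beats the lottery depends both on p and on how costly the unresolved period is: shrink `delay_cost` to zero and the lottery is never worse than B.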
Well, if B is defined sufficiently precisely (i.e., “have X money at time Y”), then B shouldn’t be valued above the lottery, which, even if the loss happens, produces the exact same outcome.
i.e., unless I misunderstand, the objection only arises out of being a bit fuzzy about what B actually, precisely means, letting the B in the lottery be a different B than the, well, regular B.
Would you agree with that interpretation of things, or am I missing something critical here?
I think you’re right. I meant mainly that a lot depends on the specifics of the situation, so even with A > B, it is not necessarily irrational to prefer B to the lottery.
I think Nick Tarleton refuted this in the other subthread: a lottery here means a lottery over states of the world, which include your knowledge state, so if you get your knowledge of the outcome later it’s not really the same thing.
It’s still true that this is a reason to disprefer realistic lotteries where you learn the outcome later, but maybe this is better termed “unpredictability aversion” than “risk aversion”? After all, it can happen even when all lottery outcomes are equally desirable. (Example: you like soup and potatoes equally, but prefer either to a lottery over them because you want to know whether to get a spoon or a fork.)
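The spoon-or-fork example can be made concrete. This is a sketch with made-up numbers: a hypothetical `prep_bonus` stands in for the value of having the right utensil ready, and both meals are, by assumption, equally desirable.

```python
# "Unpredictability aversion": both outcomes are equally good, but the
# lottery is worse because you can't prepare for it. Numbers are hypothetical.
u_meal = 1.0       # soup and potatoes are equally desirable
prep_bonus = 0.3   # value of showing up with the right utensil

known_soup = u_meal + prep_bonus       # outcome known in advance: bring a spoon
known_potatoes = u_meal + prep_bonus   # outcome known in advance: bring a fork
# 50/50 lottery: you guess a utensil and are right half the time.
lottery = 0.5 * (u_meal + prep_bonus) + 0.5 * u_meal

print(known_soup, known_potatoes, lottery)  # 1.3 1.3 1.15
```

Since both sure outcomes have identical utility, ordinary risk aversion (a concave utility function over outcomes) can’t explain the gap; only the inability to prepare does, which is why “unpredictability aversion” seems like the better label.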
(In that link, I’m actually just restating Thom Blake’s argument.)
Thanks for the link!
Okay. I’d say, then, that that case is comparing B with a lottery involving some different B′.
(i.e., it’s like saying x = x is sometimes false if the x on the left is 2 and the one on the right is 3. Of course 2 is not equal to 3, but that’s not a counterexample to x = x; rather, it’s a case of ignoring what we actually mean by using the same variable name on both sides.)