First, I did study mathematical logic, and please avoid that kind of ad hominem.
That said, if what you’re referring to is the whole world state, the outcomes are, in fact, always different, if only because somewhere in your brain there is the knowledge that the choice was different.
To take the formulation in the FAQ: « The independence axiom states that, for example, if an agent prefers an apple to an orange, then she must also prefer the lottery [55% chance she gets an apple, otherwise she gets cholera] over the lottery [55% chance she gets an orange, otherwise she gets cholera]. More generally, this axiom holds that a preference must hold independently of the possibility of another outcome (e.g. cholera). »
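(For reference, a formal reading of the axiom, as I understand it: if A ≻ B, then for every other outcome C and every probability p in (0, 1], the lottery [p: A, otherwise C] must be preferred to the lottery [p: B, otherwise C]. In the FAQ’s example, A = apple, B = orange, C = cholera and p = 0.55.)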
That has no meaning if you consider whole world states rather than just specific outcomes, because in the lottery it’s no longer “apple or orange” but “apple with the knowledge I almost got cholera” vs “orange with the knowledge I almost got cholera”. And if there is an interaction between the two, then you get a different ranking between them. Maybe you had a friend who died of cholera and loved apples, and that will change how much you appreciate apples knowing you almost had cholera. Maybe not. But in any case, if what you consider are whole world states, then by definition the whole world state is always different when you’re offered even a slightly different choice. How can you define an independence principle in that case?
First, I did study mathematical logic, and please avoid that kind of ad hominem.
Fair enough
That said, if what you’re referring to is the whole world state, the outcomes are, in fact, always different, if only because somewhere in your brain there is the knowledge that the choice was different.
I thought this would be your reply, but didn’t want to address it because the comment was too long already.
Firstly, this is completely correct. (Well, technically we could imagine situations where the outcomes removed your memory of there ever having been a choice, but this isn’t usually the case.) It’s pretty much never possible to make actually useful deductions from pure logic and the axiom of independence alone.
This is much the same as any other time you apply a mathematical model to the real world. We assume away some factors, not because we don’t think they exist, but because we think they do not have a large effect on the outcome or that the effect they do have does not actually affect our decision in any way.
E.g. geometry is, by that standard, completely useless, because perfectly straight lines do not exist in the real world. However, in many situations straight lines are incredibly good approximations which let us draw interesting, non-trivial conclusions. This doesn’t mean Euclidean geometry is an approximation; the approximation is when I claim the edge of my desk is a straight line.
So, I would say that usually my memory of the other choice I was offered has only a small effect on my satisfaction compared to what I actually get, so in most circumstances I can safely assume that the outcomes are equal (even though they aren’t). With that assumption, independence generates some interesting conclusions.
Other times, this assumption breaks down. Your cholera example strikes me as a little silly, but the example in your original post is an excellent illustration of how assuming two outcomes are equal because they look the same when written as English sentences can be a mistake.
At a guess, a good heuristic seems to be: once you’ve made your decision and found out which outcome of the lottery you got, the approximation that the existence of the other outcomes changes nothing is usually correct. If there’s a long time gap between the decision and the lottery, then decisions made in that gap should usually be taken into account.
Of course, independence isn’t really that useful for its own sake; it matters mostly because, combined with the other axioms, it gives you expected utility theory.
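To make that slightly more concrete, here is a minimal sketch (Python, with made-up utility numbers purely for illustration) of why an expected-utility ranking automatically satisfies independence: mixing both options with the same third outcome, at the same probability, can never flip the ranking.

```python
# Minimal sketch: an expected-utility ranking satisfies independence.
# The utility numbers below are made up purely for illustration.

def expected_utility(lottery):
    """lottery: a list of (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

u_apple, u_orange, u_cholera = 10.0, 8.0, -1000.0  # assumed values

# Plain preference: apple over orange.
assert expected_utility([(1.0, u_apple)]) > expected_utility([(1.0, u_orange)])

# Mix each option with the same 45% chance of cholera (the FAQ's example).
p = 0.55
lottery_apple = [(p, u_apple), (1 - p, u_cholera)]
lottery_orange = [(p, u_orange), (1 - p, u_cholera)]

# The common (1 - p) * u_cholera term appears on both sides of the
# comparison and cancels, so the ranking is preserved for any C and any p > 0.
assert expected_utility(lottery_apple) > expected_utility(lottery_orange)
```

The same cancellation works for any common outcome and any mixing probability, which is exactly the content of the axiom once the “outcomes are equal” assumption is granted.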
The cholera example was definitely a bit silly—after all, “cholera” and “apple vs orange” are usually genuinely independent in the real world; you have to construct very far-fetched circumstances for them to be dependent. But an axiom is supposed to be valid everywhere—even in far-fetched circumstances ;)
But overall, I understand the thing much better now: in fact, the independence principle doesn’t strictly hold in the real world, like there are no strictly right angles in the real world. And yet, just as we do use Pythagoras’ theorem in the real world, treating an angle as right when it’s “close enough” to right, we apply the VNM axioms and the related expected utility theory when we consider the independence principle to have enough validity?
But do we have any way to measure the degree of error introduced by this approximation? Do we have ways to recognize the cases where we shouldn’t apply expected utility theory, because we are too far from the ideal model?
My point was never to fully reject VNM and expected utility theory—I know they are useful and that they work in many cases. My point was to draw attention to a potential problem (that it is an approximation, and not always valid) that I don’t usually see being addressed (actually, I don’t remember ever having seen it stated that explicitly).
I think we have almost reached agreement; just a few more nitpicks I have with your current post.
the independence principle doesn’t strictly hold in the real world, like there are no strictly right angles in the real world
It’s pedantic, but these two statements aren’t analogous. A better analogy would be:
“the independence principle doesn’t strictly hold in the real world, like the axiom that all right angles are equal doesn’t hold in the real world”
“there are no strictly identical outcomes in the real world, like there are no strictly right angles in the real world”
Personally, I prefer the second phrasing. The independence principle and the right-angle axiom do hold in the real world, or at least they would if the objects they talk about ever actually appeared, which they don’t.
I’m in general uncomfortable with talk of the empirical status of mathematical statements; maybe this makes me a Platonist or something. I’m much happier with talk of whether idealised mathematical objects exist in the real world, or whether things similar to them do.
What this means is that we don’t apply the VNM axioms when we think independence is relatively true; we apply them when we think the outcomes we are facing are similar enough to each other that any difference can be assumed away.
But do we have any way to measure the degree of error introduced by this approximation?
This is an interesting problem. As far as I can tell, it’s a special case of the more general problem of “how do we know/decide our utility function?”.
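One toy way to get a feel for the size of the error (a hypothetical sketch with assumed numbers, not anyone’s actual utilities): model the interaction between the prize and the rest of the world state as an explicit correction term, and see how large it would have to be before the decision actually flips.

```python
# Toy sensitivity check (all numbers assumed): how large would the interaction
# between the prize and the rest of the world state have to be to flip the choice?

def expected_utility(lottery):
    """lottery: a list of (probability, utility) pairs."""
    return sum(p * u for p, u in lottery)

u_apple, u_orange, u_cholera = 10.0, 8.0, -1000.0
p = 0.55

def preference_gap(interaction):
    """Expected-utility gap when "apple, knowing I almost got cholera"
    is worth u_apple + interaction instead of plain u_apple."""
    eu_apple = expected_utility([(p, u_apple + interaction), (1 - p, u_cholera)])
    eu_orange = expected_utility([(p, u_orange), (1 - p, u_cholera)])
    return eu_apple - eu_orange

# The decision only flips once the interaction term outweighs the original
# utility difference between apple and orange (2 points in this toy example).
for delta in (0.0, -1.0, -3.0):
    print(delta, preference_gap(delta))
```

If every plausible value of the interaction term leaves the gap on the same side of zero, the approximation is harmless; if not, the “outcomes are equal” assumption is doing real work and deserves scrutiny.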
Do we have ways to recognize the cases where we shouldn’t apply expected utility theory
I’ve suggested one heuristic that I think is quite good. Any ideas for others?
(Once again, I want to nitpick the language: “Do we have ways to recognize the cases where two outcomes look equal but aren’t?” is the correct phrasing.)