....you know what I can say? “No. I decline the bet.”
The world is uncertain; all actions are gambles; refusing to choose an action is like refusing to let time pass.
But let’s pop up a level here and make sure we aren’t arguing about different things. Humans clearly aren’t rational, and don’t follow the VNM axioms. Are you arguing just that VNM axioms aren’t a good model for people, or arguing that VNM axioms aren’t a good model for rational agents?
Refusing to choose an action is not the same as taking a bet. If you are claiming that passively doing nothing can be equivalent to taking a Dutch Book bet, then I say to you what I said to shminux:
Concrete real-world example, please.
The latter. (The former is also true, of course.)
The VNM axioms assume that everything can be reduced to a unitary “utility”. If this isn’t the case, then you have a problem.
Yes, it is. You chose to do nothing, and doing nothing has consequences. You can’t keep the bomb from going off by not choosing which wire to cut.
How does that result in me being Dutch-booked, though?
I can’t construct one without straw manning a set of preferences that violate the VNM axioms. Give me preferences and I can construct an example.
That’s the conclusion of the theorem. Which of the premises do you disagree with?
By all means, straw man away. I won’t take it personally.
However, for an example — and please don’t feel compelled to restrict your answer only to this — there’s a variant of the good old chickens/grandmother example:
Let us say I prefer the nonextinction of chickens to their extinction (that is, I would choose not to murder all chickens, or any chickens, all else being equal). I also prefer my grandmother remaining alive to my grandmother dying. Finally, I prefer the deaths of arbitrary numbers of chickens, taking place with any probability, to any probability of my grandmother dying.
I believe this violates VNM. (Am I wrong there?) At least, I don’t see how one would construe these preferences as those of an agent who is acting to maximize a quantity.
See this thread for clarification/correction.
Those preferences do not violate the VNM axioms. The willingness to kill chickens in order to eliminate an arbitrarily small chance of the death of your grandmother makes the preferences a bit weird, but still VNM compliant.
A live chicken might quantum tunnel from China to your grandmother’s house and eat all her heart medication. But then again, a quantum tunneling chicken could save your grandmother from tripping on fallen corn-seed. The net effect of chickens on your grandmother might be unfathomably small*, but it is unlikely to ever be zero. If chickens never have zero effect on your grandmother, then your preference for the non-extinction of chickens would never apply (EDIT: and so the only preferences we would need to consider would be those concerning your grandmother’s life, which could be represented with utilities, say 0 for dead and 1 for alive).
If you’re willing to tolerate a 1⁄10,000,000 increase in chance of your grandmother’s death to save chickens from extinction, you still have VNM-rational preferences. Here’s a utility function for that:
dead-chicken-dead-grandma 0
live-chicken-dead-grandma 1
dead-chicken-live-grandma 10,000,000
live-chicken-live-grandma 10,000,001
*Or perhaps not so small. 90% of all flu deaths happen to people age 65 or older, and chickens are the main reservoir of influenza.
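For concreteness, here is a minimal sketch (in Python) checking that the utility function above puts the indifference point at exactly a 1⁄10,000,000 extra risk. The outcome labels and the function name `expected_utility` are illustrative, and it assumes the four listed outcomes are exhaustive and that killing the chickens carries zero risk to grandma:

```python
# Minimal sketch: verify the indifference point implied by the utilities above.
from fractions import Fraction  # exact arithmetic, no float rounding

U = {
    ("dead chicken", "dead grandma"): 0,
    ("live chicken", "dead grandma"): 1,
    ("dead chicken", "live grandma"): 10_000_000,
    ("live chicken", "live grandma"): 10_000_001,
}

def expected_utility(p_grandma_dies, chicken):
    """Expected utility with the given chicken outcome and probability p_grandma_dies of grandma dying."""
    return (p_grandma_dies * U[(chicken, "dead grandma")]
            + (1 - p_grandma_dies) * U[(chicken, "live grandma")])

# Killing the chickens carries no extra risk to grandma in this toy setup.
kill_chickens = expected_utility(Fraction(0), "dead chicken")

# Saving the chickens adds an extra probability p of grandma dying.
p = Fraction(1, 10_000_000)
save_chickens = expected_utility(p, "live chicken")

assert save_chickens == kill_chickens                              # indifferent at exactly 1/10,000,000
assert expected_utility(p * 2, "live chicken") < kill_chickens     # more risk: kill the chickens
assert expected_utility(p / 2, "live chicken") > kill_chickens     # less risk: save the chickens
print("Indifference point is exactly 1/10,000,000, as claimed.")
```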
I am not willing, no. Still VNM-compliant?
Yep. But you probably don’t care about chickens.
If, by some cosmic happenstance, the effect of a chicken’s life or death on your grandmother were exactly zero—no gravitational effects of the Australian chicken’s beak on your grandmother’s heart, no nothin’—then the utilities get more complicated. If the smallest possible non-zero probability of a chicken killing or saving your grandmother were 10^-2,321,832,934,903, then the utilities could be something like:
dead-chicken-dead-grandma 0
live-chicken-dead-grandma 1
dead-chicken-live-grandma 10^2,321,832,934,903 + 1
live-chicken-live-grandma 10^2,321,832,934,903 + 2
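A quick sketch of why this works, again in Python. The real exponent (2,321,832,934,903) is far too large to compute with, so this scales it down to N = 12 while keeping the same structure; N, the outcome labels, and the function name `eu` are illustrative assumptions:

```python
# Minimal sketch: the second utility function with a scaled-down exponent.
from fractions import Fraction

N = 12                       # stand-in for 2,321,832,934,903
eps = Fraction(1, 10**N)     # smallest possible non-zero probability of a chicken affecting grandma

U = {
    ("dead chicken", "dead grandma"): 0,
    ("live chicken", "dead grandma"): 1,
    ("dead chicken", "live grandma"): 10**N + 1,
    ("live chicken", "live grandma"): 10**N + 2,
}

def eu(p_dies, chicken):
    """Expected utility with the given chicken outcome and probability p_dies of grandma dying."""
    return p_dies * U[(chicken, "dead grandma")] + (1 - p_dies) * U[(chicken, "live grandma")]

# Even the smallest representable non-zero risk to grandma outweighs the chickens...
assert eu(eps, "live chicken") < eu(0, "dead chicken")
# ...but at literally zero risk, you would still rather the chickens live.
assert eu(0, "live chicken") > eu(0, "dead chicken")
print("Both stated preferences hold under these utilities.")
```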
It seems more likely that he really isn’t VNM-compliant. Chickens are tasty and nutritious, 1⁄1,000,000 is a small number, and let’s face it, grandparents of adults are already old and have a much higher chance than that of dying every day. It would be surprising if Said were so perfectly indifferent to chickens, especially since he’s already been explicitly telling us that he isn’t VNM-compliant.
If you are making a theoretical claim about models for rational agents then you are not entitled to that particular proof.
What I was asking for was an existence proof (i.e., an example) for a claim being made about how the theory plays out in the real world. For such a claim, I most certainly am entitled to that particular proof.
I repeat the assertion from the grandparent. You made this demand multiple times in support of a point that you explicitly declared was about the behaviour of theoretical rational agents in the abstract. Please see the quotes provided in the grandparent. You are not entitled to real-world existence proofs.
If you wish to weaken your claim such that it only applies to humans in the real world then your demand becomes coherent. As it stands it is a non sequitur and logically rude.
I requested an existence proof in response to a claim. The fact that I was making a point in the same post is irrelevant. Any claim I was making is irrelevant.