Any individual whose preferences violate von Neumann and Morgenstern’s axioms would agree to a Dutch book, which is a set of bets that necessarily leads to a loss.
This is false, because no one is obligated to agree to anything. If my preferences are such that they in some sense add up to a Dutch book, but then you actually offer me a bet (or set of bets, simultaneous or sequential) that constitute a Dutch book, you know what I can say?
“No. I decline the bet.”
Edit: Also, what if your values have incomparable quantities?
EDIT: I retract the claims in this comment. Given the revision made in the child comments, they do not apply.
I’m familiar with the VNM axioms.
No, you aren’t. You may have heard of them, but when you chose to start making claims about them you demonstrated that you do not know what they are. In particular:
The VNM axioms assume that everything can be reduced to a unitary “utility”. If this isn’t the case, then you have a problem.
None of the four axioms being discussed consist of or rely on that assumption. In fact, the whole point of the von Neumann–Morgenstern utility theorem is that “has a utility function” is a property that can be concluded of any agent that meets those four axioms. If a unitary utility were assumed, the theorem wouldn’t need to exist.
You may, of course, deny that one of the axioms applies to you or you may deny that one of the axioms applies to rational agents. However, you will have difficulty persuading people of your controversial position after having already demonstrated your unfamiliarity with the subject matter.
Ok, it’s possible I’ve misunderstood. To see if I have, clarify something for me, please:
How would you represent the valuing of another agent’s values, using the VNM theorem? That is, let’s say I assign utility to (certain) other people having lots of utility. How would this be represented?
Edit: You know what, while the above question is still interesting, having reread the thread, I actually see the issue now, and it’s simpler. This line:
The VNM axioms assume that everything can be reduced to a unitary “utility”. If this isn’t the case, then you have a problem.
is indeed a misstatement (as it stands it is indeed incorrect for the reasons you state). It should be:
“Accepting the VNM axioms requires you to assume that everything can be reduced to a unitary “utility”.” (Which is to say, if you accept the axioms, you will be forced to conclude this; and also, assuming this leads you to the VNM axioms.)
If you find that reducing everything to a unitary utility then fails to describe your preferences over outcomes, you have a problem.
“Accepting the VNM axioms requires you to assume that everything can be reduced to a unitary “utility”.” (Which is to say, if you accept the axioms, you will be forced to conclude this; and also, assuming this leads you to the VNM axioms.)
With the minor erratum that ‘assume’ would best be replaced with ‘conclude’, ‘believe’ or ‘accept’, this revision seems accurate. For someone taking your position, the most interesting thing about the VNM theory is that it prompts you to work out just which of the axioms you reject. One man’s modus ponens is another man’s modus tollens. The theory doesn’t care whether it is being used to conclude acceptance of the conclusion or rejection of one or more of the axioms.
If you find that reducing everything to a unitary utility then fails to describe your preferences over outcomes, you have a problem.
Entirely agree. Humans, for example, are not remotely VNM coherent.
This line … is indeed a misstatement (as it stands it is indeed incorrect for the reasons you state).
I have retracted my criticism via edit. One misstatement does not unfamiliarity make, so even prior to your revision I suspect my criticism was overstated. Pardon me.
Thank you, and no offense taken.
Entirely agree. Humans, for example, are not remotely VNM coherent.
Right. And the thing is, that if one were to argue that humans are thereby irrational, I would disagree. (Which is to say, I would not assent to defining rationality as constituting, or necessarily containing, adherence to VNM.)
One man’s modus ponens is another man’s modus tollens. The theory doesn’t care whether it is being used to conclude acceptance of the conclusion or rejection of one or more of the axioms.
Indeed. Incidentally, I suspect the axiom I would end up rejecting is continuity (axiom 3), but don’t quote me on that; I have to get my copy of Rational Choice in an Uncertain World out of storage (as I recall, said book explains the implications of the VNM axioms quite well, and I distinctly recall that my objections to VNM arose when reading it).
Right. And the thing is, that if one were to argue that humans are thereby irrational, I would disagree. (Which is to say, I would not assent to defining rationality as constituting, or necessarily containing, adherence to VNM.)
I tentatively agree. The decision system I tend toward modelling an idealised me as having contains an extra level of abstraction in order to generalise the VNM axioms and decision theory regarding utility maximisation principles to something that does allow the kind of system you are advocating (and which I don’t consider intrinsically irrational).
Simply put, if instead of having preferences for world-histories you have preferences for probability distributions of world-histories, then doing the same math and reasoning gives you an entirely different but still clearly defined and abstractly-consequentialist way of interacting with lotteries. It means the agent is doing something different from maximising the mean of utility… it could, in effect, be maximising the mean subject to satisficing on a maximum probability of the utility falling below some value.
It’s the way inherently and coherently risk-averse agents (and similar non-mean optimisers) would work.
Such agents are coherent. It doesn’t matter much whether we call them irrational or not. If that is what they want to do then so be it.
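To make the distinction concrete, here is a minimal sketch (in Python, with illustrative lotteries, threshold, and cap that are not from the thread) of the kind of rule described above: the agent ranks whole probability distributions, maximising mean utility only among lotteries whose chance of a bad outcome is acceptably small. In general such a rule cannot be rewritten as expected-utility maximisation, which is the sense in which it sits outside the VNM framework while remaining a perfectly well-defined way to choose.

```python
# A mean-maximiser versus a "maximise the mean, subject to satisficing on the
# probability of a bad outcome" chooser, as described above.
# Lotteries are lists of (probability, utility) pairs; all numbers illustrative.

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

def tail_risk(lottery, floor):
    """Probability that the realised utility falls below `floor`."""
    return sum(p for p, u in lottery if u < floor)

def choose_mean_maximiser(lotteries):
    return max(lotteries, key=expected_utility)

def choose_risk_constrained(lotteries, floor=0.0, max_tail=0.05):
    """Maximise the mean only among lotteries whose tail risk is acceptable."""
    acceptable = [l for l in lotteries if tail_risk(l, floor) <= max_tail]
    return max(acceptable or lotteries, key=expected_utility)

safe_bet  = [(1.0, 10.0)]                    # certain, modest payoff (mean 10)
risky_bet = [(0.9, 100.0), (0.1, -50.0)]     # higher mean (85), 10% chance of ruin

print(choose_mean_maximiser([safe_bet, risky_bet]) is risky_bet)    # True
print(choose_risk_constrained([safe_bet, risky_bet]) is safe_bet)   # True
```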
Incidentally, I suspect the axiom I would end up rejecting is continuity (axiom 3), but don’t quote me on that
That does seem to be the most likely axiom being rejected. At least, that has been my intuition when I’ve considered how plausible non-‘expected utility’ maximisers seem to think.
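Since continuity is the axiom under suspicion here, a small sketch may help (the outcomes and the lexicographic rule below are illustrative, not anyone’s actual values). Continuity says that for outcomes A ≻ B ≻ C there is some p in (0, 1) at which the agent is indifferent between B for certain and the lottery p·A + (1 − p)·C. An agent who treats any positive chance of losing the primary good as unacceptable never reaches that indifference point, so continuity fails:

```python
# Continuity (axiom 3) asks for an interior probability p at which sure-thing B
# is matched by the lottery p*A + (1-p)*C. A lexicographic agent that refuses
# ANY positive chance of losing the primary good never gets there.
# Outcomes are (primary, secondary) pairs; values are illustrative.

A = (1, 1)   # primary good, secondary good   -> best
B = (1, 0)   # primary good, secondary bad    -> middle
C = (0, 1)   # primary bad,  secondary good   -> worst

def prefers_sure_b_over_mixture(p):
    """Lexicographic rule: any positive chance of the primary bad outcome
    (which only C carries) is worse than giving up the secondary good."""
    chance_primary_bad = 1 - p
    return chance_primary_bad > 0

for p in (0.5, 0.9, 0.999999, 1.0):
    print(p, prefers_sure_b_over_mixture(p))
# -> True for every p < 1 and False only at p = 1: there is no interior p of
#    indifference, so these preferences violate the continuity axiom.
```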
Edit: You know what, while the above question is still interesting,
You’re right, that question does seem interesting. Let me see...
How would you represent the valuing of another agent’s values, using the VNM theorem? That is, let’s say I assign utility to (certain) other people having lots of utility. How would this be represented?
I only ever apply values to entire world histories[1]. i.e. Consider the entire wavefunction of the universe, which includes all of space, all of time, all Everett branches[2] and so forth. Different possible configurations of that universe are preferred over others on a basis that is entirely arbitrary. It so happens that my preferences over world histories do depend somewhat on computations about how the state of certain other people’s brains at certain times compares to the rest of the configuration of that world history. This preference is not different in nature from preferring histories which do not have lots of copies of wedrifid tortured for billions of years.
It also applies whether or not the other people I have altruistic preferences about happen to have utility functions at all. That’d probably make the math easier and the preference-preferences easier to instantiate, but it isn’t necessary. Mind you, I don’t necessarily care about all components of what makes up their ‘utility function’ equally. I could perhaps assign negative weight to or ignore certain aspects of it on the basis of what caused those preferences.
Translating how strongly I prefer one history over another into a utility function occurs by the normal mechanism (i.e. “require ‘VNM’; wedrifid.preferences.to_utility_function”). The altruistic values issue is orthogonal to the having-a-utility-function issue.
[1] Of course, in practice I rely on and discuss much simpler things but this is from the perspective of considering the simpler models to be approximations of and simplifications of world-history preferences.
[2] Ignore the branches part if you don’t believe in those—the difference isn’t of direct importance to the immediate question even though it has tangential relevance to your overall position.
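For readers who want the mechanism behind the to_utility_function line above spelled out, here is a minimal sketch of the standard VNM calibration, assuming a hypothetical agent whose indifference probabilities we can query (the oracle below fakes those queries with hidden illustrative scores; nothing here is claimed about wedrifid’s actual preferences). The point, as noted upthread, is that the utility function is an output of this construction rather than an assumed input:

```python
# Standard VNM calibration: fix a best and a worst outcome, then define u(X)
# as the probability p at which the agent is indifferent between X for certain
# and the lottery p*best + (1-p)*worst.
# `indifference_probability` stands in for querying the agent's real
# preferences; here it is faked with hidden illustrative scores.

HIDDEN_SCORES = {"worst": 0.0, "meh": 2.0, "good": 7.0, "best": 10.0}

def indifference_probability(outcome):
    lo, hi = HIDDEN_SCORES["worst"], HIDDEN_SCORES["best"]
    return (HIDDEN_SCORES[outcome] - lo) / (hi - lo)

def to_utility_function(outcomes):
    """The calibrated utilities are just the indifference probabilities."""
    return {x: indifference_probability(x) for x in outcomes}

print(to_utility_function(HIDDEN_SCORES))
# -> {'worst': 0.0, 'meh': 0.2, 'good': 0.7, 'best': 1.0}
```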
....you know what I can say? “No. I decline the bet.”
The world is uncertain; all actions are gambles; refusing to choose an action is like refusing to let time pass.
But let’s pop up a level here and make sure we aren’t arguing about different things. Humans clearly aren’t rational, and don’t follow the VNM axioms. Are you arguing just that VNM axioms aren’t a good model for people, or arguing that VNM axioms aren’t a good model for rational agents?
The world is uncertain; all actions are gambles; refusing to choose an action is like refusing to let time pass.
Refusing to choose an action is not the same as taking a bet. If you are claiming that passively doing nothing can be equivalent to taking a Dutch Book bet, then I say to you what I said to shminux:
Concrete real-world example, please.
Yes, it is. You chose to do nothing, and doing nothing has consequences. You can’t keep the bomb from going off by not choosing which wire to cut.
How does that result in me being Dutch-booked, though?
I can’t construct one without straw manning a set of preferences that violate the VNM axioms. Give me preferences and I can construct an example.
Are you arguing just that VNM axioms aren’t a good model for people, or arguing that VNM axioms aren’t a good model for rational agents?
The latter. (The former is also true, of course.)
The VNM axioms assume that everything can be reduced to a unitary “utility”. If this isn’t the case, then you have a problem.
By all means, straw man away. I won’t take it personally.
However, for an example — and please don’t feel compelled to restrict your answer only to this — there’s a variant of the good old chickens/grandmother example:
Let us say I prefer the nonextinction of chickens to their extinction (that is, I would choose not to murder all chickens, or any chickens, all else being equal). I also prefer my grandmother remaining alive to my grandmother dying. Finally, I prefer the deaths of arbitrary numbers of chickens, taking place with any probability, to any probability of my grandmother dying.
I believe this violates VNM. (Am I wrong there?) At least, I don’t see how one would construe these preferences as those of an agent who is acting to maximize a quantity.
See this thread for clarification/correction.
That’s the conclusion of the theorem. Which of the premises do you disagree with?
Those preferences do not violate the VNM axioms. The willingness to kill chickens in order to eliminate an arbitrarily small chance of the death of your grandmother makes the preferences a bit weird, but still VNM-compliant.
A live chicken might quantum tunnel from China to your grandmother’s house and eat all her heart medication. But then again, a quantum tunneling chicken could save your grandmother from tripping on fallen corn-seed. The net effect of chickens on your grandmother might be unfathomably small*, but it is unlikely to ever be zero. If chickens never have zero effect on your grandmother, then your preference for the non-extinction of chickens would never apply, and so the only preferences we would need to consider would be those concerning your grandmother’s life, which could be represented with utilities (say 0 for dead and 1 for alive).
If you’re willing to tolerate a 1⁄10,000,000 increase in the chance of your grandmother’s death to save chickens from extinction, you still have VNM-rational preferences. Here’s a utility function for that:
dead-chicken-dead-grandma 0
live-chicken-dead-grandma 1
dead-chicken-live-grandma 10,000,000
live-chicken-live-grandma 10,000,001
*Or perhaps not so small. 90% of all flu deaths happen to people age 65 or older, and chickens are the main reservoir of influenza.
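A quick arithmetic check of the table above (the baseline risk q and the candidate added risks are illustrative numbers, not from the comment): with these four utilities, keeping the chickens alive is the higher-expected-utility choice exactly when the added risk to your grandmother stays below 1 in 10,000,000.

```python
# Expected-utility check of the utility table above: keeping the chickens wins
# only while the added risk to grandma is below 1/10,000,000.
# q is an illustrative baseline chance of grandma dying regardless of chickens.

U = {
    ("dead chicken", "dead grandma"): 0,
    ("live chicken", "dead grandma"): 1,
    ("dead chicken", "live grandma"): 10_000_000,
    ("live chicken", "live grandma"): 10_000_001,
}

def eu_keep_chickens(q, added_risk):
    p_dead = q + added_risk
    return (p_dead * U[("live chicken", "dead grandma")]
            + (1 - p_dead) * U[("live chicken", "live grandma")])

def eu_kill_chickens(q):
    return q * U[("dead chicken", "dead grandma")] + (1 - q) * U[("dead chicken", "live grandma")]

q = 0.01
for added_risk in (1e-8, 5e-8, 2e-7, 1e-6):
    keep = eu_keep_chickens(q, added_risk) > eu_kill_chickens(q)
    print(f"added risk {added_risk:.0e}: keep chickens = {keep}")
# -> True, True, False, False: the switch happens at 1/10,000,000, matching
#    the tolerance stated above.
```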
I am not willing, no. Still VNM-compliant?
Yep. But you probably don’t care about chickens.
If, by some cosmic happenstance, the effect of a chicken’s life or death on your grandmother were exactly zero—no gravitational effects of the Australian chicken’s beak on your grandmother’s heart, no nothin’—then the utilities get more complicated. If the smallest possible non-zero probability a chicken could kill or save your grandmother were 10^-2,321,832,934,903, then the utilities could be something like:
dead-chicken-dead-grandma 0
live-chicken-dead-grandma 1
dead-chicken-live-grandma 10^2,321,832,934,903 + 1
live-chicken-live-grandma 10^2,321,832,934,903 + 2
If you’re willing to tolerate a 1⁄1,000,000 increase in the chance of your grandmother’s death to save chickens from extinction, you still have VNM-rational preferences.
It seems more likely that he really isn’t VNM-compliant. Chickens are tasty and nutritious, 1⁄1,000,000 is a small number, and let’s face it, grandparents of adults are already old and have a much greater chance than that of dying every day. It would be surprising if Said were so perfectly indifferent to chickens, especially since he’s already been explicitly telling us that he isn’t VNM-compliant.
If you are making a theoretical claim about models for rational agents then you are not entitled to that particular proof.
What I was asking for was an existence proof (i.e., an example) for a claim being made about how the theory plays out in the real world. For such a claim, I most certainly am entitled to that particular proof.
What I was asking for was an existence proof (i.e., an example) for a claim being made about how the theory plays out in the real world. For such a claim, I most certainly am entitled to that particular proof.
I repeat the assertion from the grandparent. You made this demand multiple times in support of a point that you explicitly declared was about the behaviour of theoretical rational agents in the abstract. Please see the quotes provided in the grandparent. You are not entitled to real-world existence proofs.
If you wish to weaken your claim such that it only applies to humans in the real world then your demand becomes coherent. As it stands it is a non sequitur and logically rude.
I requested an existence proof in response to a claim. The fact that I was making a point in the same post is irrelevant. Any claim I was making is irrelevant.
This is false, because no one is obligated to agree to anything. If my preferences are such that they in some sense add up to a Dutch book, but then you actually offer me a bet (or set of bets, simultaneous or sequential) that constitute a Dutch book, you know what I can say?
“No. I decline the bet.”
I don’t think that this is a valid escape clause. You don’t normally know that there is a bet going on. You just live your life, make small decisions every day, evaluating risks and rewards. It works just fine as long as no one is on your case. But if someone who knows of your vulnerability to Dutch booking can nudge the situations you find yourself in in the desired direction, you eventually end up back where you started, but with, say, some of your money inexplicably lost. And then it happens again, and again. And you are powerless to change the situation, since the decisions still have to be made, or else you would be lying in bed all day waiting for the end (which might be what the adversary intended, anyway).
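One standard way to formalise the kind of exploitation described above is a money pump. Here is a minimal sketch, assuming a hypothetical agent with cyclic preferences (A over B, B over C, C over A) who always pays a small fee to trade up to something it prefers and never refuses a trade; the items, fee, and trade sequence are all illustrative:

```python
# Toy money pump: an agent with cyclic (intransitive) preferences that always
# pays a small fee to trade to something it prefers can be walked in a circle,
# ending with the item it started with and less money. The "never refuses a
# trade" assumption is exactly the premise disputed earlier in this thread.

CYCLE = {("A", "B"): "A", ("B", "C"): "B", ("C", "A"): "C"}  # A>B, B>C, C>A

def prefers(x, y):
    """True if the agent prefers x to y under the cyclic ordering."""
    winner = CYCLE.get((x, y)) or CYCLE.get((y, x))
    return winner == x

def money_pump(start="A", fee=0.01, laps=3):
    item, money = start, 0.0
    for offer in ["C", "B", "A"] * laps:     # offers chosen to exploit the cycle
        if prefers(offer, item):             # agent happily pays to "trade up"
            item, money = offer, money - fee
    return item, money

print(money_pump())   # back to item 'A', roughly 0.09 poorer after three laps
```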
We discussed that a few days ago. As far as I know, De Finetti’s justification of probabilities only works in the exact scenario where agents must publish their beliefs and can’t refuse either side of the bet. I’d love to see a version for more general scenarios, of course.