It’s not just VNM; it doesn’t even make logical sense. Probabilities are about your knowledge, not the state of the world: barring bizarre fringe cases (Cromwell’s law), I can always say that whatever I’m currently doing has probability 1, because I’m doing it — so I can’t coherently treat my own deliberate action as random. I can certainly have a probability other than 0 or 1 that I will do something, if that action depends on information I haven’t yet received. But once I’ve received all the information involved in making my decision and updated on it, I can’t have a 50% chance of doing something. Trying to randomize your own actions means refusing to update on information you already have, which is a violation of Bayes’ theorem.
The problem is that they don’t want to switch to Boston; they are happy moving to Atlanta.
In this world, the one that actually exists, Bob still wants to move to Boston. The fact that Bob made a promise and would now face additional costs for breaking the contract (i.e., upsetting Alice) doesn’t change the fact that he’d be happier in Boston; it just means that the contract, and the act of revealing this information, changed the options available. The choices are no longer “Boston” vs. “Atlanta,” they’re “Boston and upset Alice” vs. “Atlanta and don’t upset Alice.”
Moreover, holding to this contract after the information is revealed also forgoes a Pareto improvement (which is equivalent to accepting a Dutch book). Suppose Alice and Bob agree to randomize their choice as you suggest. Then both of them are strictly worse off than if they had agreed on an insurance policy instead: a contract under which Bob more than compensates Alice for the cost of moving to Boston if the California option fails would leave both of them strictly better off.
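To make the insurance argument concrete, here is a sketch with entirely made-up numbers (the original scenario doesn’t specify any utilities): measure each person’s payoff in dollar-equivalents relative to an “Atlanta, no transfer” baseline, and compare a coin flip against Boston-plus-side-payment.

```python
# Hypothetical numbers for illustration only.
bob_boston_gain = 10_000    # Bob's gain from Boston over Atlanta
alice_boston_cost = 4_000   # Alice's cost of Boston over Atlanta

# Option 1: flip a fair coin between Boston and Atlanta.
bob_randomized = 0.5 * bob_boston_gain        # expected gain: 5000
alice_randomized = 0.5 * -alice_boston_cost   # expected loss: -2000

# Option 2: always Boston, with Bob paying Alice a side transfer t.
# Any t strictly between 2000 and 5000 makes BOTH strictly better off
# than the coin flip; pick a value in the middle.
t = 3_500
bob_insured = bob_boston_gain - t             # 6500 > 5000
alice_insured = -alice_boston_cost + t        # -500 > -2000

assert bob_insured > bob_randomized
assert alice_insured > alice_randomized
```

With these numbers the randomization is Pareto-dominated: any transfer in the open interval (2000, 5000) works, which is exactly the sense in which refusing the side deal is like accepting a Dutch book.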
So, I am trying to talk about the preferences of the couple, not the preferences of either individual. You might reject the idea that the couple is capable of having preferences; if so, I am curious whether you think Bob is capable of having preferences but the couple is not, and if so, why.
I agree that if you can make arbitrary utility transfers between Alice and Bob at a given exchange rate, then they should maximize the sum of their utilities (at that exchange rate) and do a side transfer. However, I am assuming here that efficient compensation is not possible. I specifically made it a relatively big decision so that compensation would not obviously be possible.
Whether the couple is capable of having preferences probably depends on your definition of “preferences.” The more standard terminology for preferences by a group of people is “social choice function.” The main problem we run into is that social choice functions don’t behave like preferences.
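A standard illustration of why social choice functions don’t behave like preferences is the Condorcet cycle: pairwise majority vote over three options can be intransitive even when every individual voter is perfectly transitive. The rankings below are hypothetical, chosen only to exhibit the cycle.

```python
# Three voters with (hypothetical) transitive rankings over A, B, C;
# earlier in the list = more preferred.
rankings = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

# The group "preference" under majority rule is cyclic:
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True -- so A > B > C > A, not transitive
```

No individual here has cyclic preferences, yet the group does — so whatever the couple (or any group) has, it need not satisfy the transitivity that individual preferences are usually assumed to have.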