Interesting argument! I think it goes through—but only under certain ecological / environmental assumptions:
That decisions / trades between goods are reversible.
That there are multiple opportunities to make such trades / decisions in the environment.
But this isn’t always the case! Consider:
Both John and David prefer living over dying.
Hence, John would not trade (John Alive, David Dead) for (John Dead, David Alive), and vice versa for David.
This is already a case of weakly incomplete preferences which, while technically reducible to a complete order over “indifference sets”, doesn’t seem well described by a utility function! In particular, it seems really important to represent the fact that neither person would trade their life for the other’s life, even though both (John Alive, David Dead) and (John Dead, David Alive) lie in the same “indifference / incommensurability set”.
(I think it’s better to call it an “incommensurability set”: just because two elements in a lattice share a least upper bound, that doesn’t mean they are themselves comparable.)
Now let’s try and make the preferences strongly incomplete:
John prefers living freely over imprisonment, and imprisonment to dying.
Even if David were dead, he would prefer that John be alive rather than imprisoned.
Apart from the fact that you can’t reverse death (at least with current technology), this is similar to the pizza scenario: The system as a whole prefers:
(John Free, David Alive) > (John Free, David Dead) > (John Imprisoned, David Dead) > Both Dead
(John Free, David Alive) > (John Imprisoned, David Alive) > (John Dead, David Alive) > Both Dead
No preferences between options of the form (X, David Dead) and (John Dead, Y).
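Under my (assumed) encoding of the two chains above, strong incompleteness can be checked mechanically: there are outcomes whose incomparability cannot be read as indifference, because two incomparable outcomes sit in the same class as a pair that is strictly ranked.

```python
# Hypothetical encoding of the two preference chains (mine, not the post's).
FA = ("John Free", "David Alive")
FD = ("John Free", "David Dead")
IMP_DD = ("John Imprisoned", "David Dead")
IA = ("John Imprisoned", "David Alive")
DA = ("John Dead", "David Alive")
DD = ("John Dead", "David Dead")  # Both Dead

chains = [[FA, FD, IMP_DD, DD], [FA, IA, DA, DD]]

# Build the strict relation as the transitive closure of each chain.
strict = set()
for chain in chains:
    for i in range(len(chain)):
        for j in range(i + 1, len(chain)):
            strict.add((chain[i], chain[j]))

def incomparable(a, b):
    return a != b and (a, b) not in strict and (b, a) not in strict

# Strong incompleteness: FD is incomparable to DA, and DA is incomparable
# to IMP_DD, yet FD is strictly preferred to IMP_DD. So the class
# containing FD, DA, and IMP_DD cannot be an indifference set.
assert incomparable(FD, DA)
assert incomparable(DA, IMP_DD)
assert (FD, IMP_DD) in strict
```

This is why no single utility function can represent the system as a whole: it would have to assign FD and IMP_DD the same value (via DA) while also ranking FD strictly above IMP_DD.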
If John and David could contract to go from (John Imprisoned, David Dead) to (John Dead, David Alive) and then to (John Free, David Dead) when those trades are offered, that would, on average, improve how well they achieve their preferred outcomes. But of course, they can’t, because death is irreversible!
Rather than talking about reversibility, can this situation be described just by saying that the probability of certain opportunities is zero? For example, if John and David somehow know in advance that no one will ever offer them pepperoni in exchange for anchovies, then the maximum amount of probability mass that can be shifted from mushrooms to pepperoni by completing their preferences happens to be zero. This doesn’t need to be a physical law of anchovies; it could just be a characteristic of their trade partners.
But in this hypothetical, their preferences are effectively no longer strongly incomplete—or at least, their trade policy is no longer strongly incomplete. Since we’ve assumed away the edge between pepperoni and anchovies, we can (vacuously) claim that John and David will collectively accept 100% of the (non-existent) trades from anchovies to pepperoni, and it becomes possible to describe their trade policy as being a utility maximizer. (Specifically, we can say anchovies = mushrooms because they won’t trade between them, and say pepperoni > mushrooms because they will trade mushrooms for pepperoni. The original problem was that this implies that pepperoni > anchovies, which is false in their preferences, but it is now (vacuously) true in their trade policy if such opportunities have probability zero.)
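A small simulation can make the “vacuously true” point concrete. The utilities and the offer distribution below are my assumptions, not from the original post: when anchovy-to-pepperoni offers have probability zero, a complete utility function reproduces the incomplete trade policy on every offer that actually arises.

```python
import random

# Assumed utilities: anchovies = mushrooms < pepperoni.
u = {"anchovies": 0, "mushrooms": 0, "pepperoni": 1}

def utility_policy(have, offered):
    """Accept a trade iff it strictly increases utility."""
    return u[offered] > u[have]

def incomplete_policy(have, offered):
    """The original policy: the only acceptable trade is mushrooms -> pepperoni."""
    return (have, offered) == ("mushrooms", "pepperoni")

# Trade partners never offer pepperoni in exchange for anchovies:
# that edge has probability zero, so it is absent from the offer pool.
possible_offers = [
    ("mushrooms", "pepperoni"), ("mushrooms", "anchovies"),
    ("anchovies", "mushrooms"), ("pepperoni", "mushrooms"),
    ("pepperoni", "anchovies"),
]

rng = random.Random(0)
for _ in range(1000):
    have, offered = rng.choice(possible_offers)
    # On every offer that can actually occur, the two policies agree.
    assert utility_policy(have, offered) == incomplete_policy(have, offered)

# The policies differ only on the zero-probability offer.
assert utility_policy("anchovies", "pepperoni") != incomplete_policy("anchovies", "pepperoni")
```

So the trade policy looks like utility maximization on the support of the offer distribution, even though the underlying preferences are not representable by that (or any) utility function.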
It seems to me that it’s not right to assume the probability of opportunities to trade is zero?
Suppose both John and David are alive on a desert island right now (but slowly dying), and there’s a chance that a rescue boat will arrive that will save only one of them, leaving the other to die. What would they contract to? Assuming no altruistic preferences, presumably neither would agree to only the other person being rescued.
It seems more likely here that bargaining will break down, and one of them will kill off the other, resulting in an arbitrary resolution of who ends up on the rescue boat, not a “rational” resolution.
Doesn’t irreversibility imply that there is zero probability of a trade opportunity to reverse the thing? I’m not proposing a new trait that your original scenario didn’t have; I’m proposing that I identified which aspect of your scenario was load-bearing.
I don’t think I understand how your new hypothetical is meant to be related to anything discussed so far. As described, the group doesn’t have strongly incomplete preferences, just 2 mutually-exclusive objectives.
Zero probability of trade is indeed the feature which would make the argument in the OP potentially not go through, when irreversibility is present. (Though we would still get a weakened form of the argument from the OP, in which we complete the preferences by adding a preference for a trade which has zero probability, and the original system is indifferent between that completion and its original preferences.)
While I’ve focused on death here, I think this is actually much more general: there are a lot of irreversible decisions that people make (and that artificial agents might make) between potentially incommensurable choices. There’s a nice example in Elizabeth Anderson’s “Value in Ethics and Economics” (Ch. 3, p. 57) concerning the question of how one should live one’s life, to which I think irreversibility applies.
Similar incommensurability applies, I think, to what kind of society we collectively want to live in, given that path dependency makes many choices irreversible.
Well, it can be overcome by future contracts, no? We replace “John dead” with “John dies tomorrow” and perform the trades today.