Let’s walk through a simplified example, and see if we can find the point of disagreement. The primary simplification here is that I’ll assume consequentialism, where utilities are mappings from outcomes to reals and the mapping from policies (i.e. a probabilistic collection of outcomes) to reals is the probabilistically weighted sum of the outcome utilities. Even without consequentialism, this should work, but there will be many more fiddly bits.
So, let’s suppose that the two of us have a joint pool of money, which we’re going to spend on a lottery ticket, which could win one of three fabulous prizes (that we would then jointly own), or nothing:
A Koala (K)
A Lemur (L)
A Macaw (M)
Nothing (N)
We can express the various tickets (which all cost the same, and together we can only afford one) as vectors, like a=(.1,.1,.1,.7), which has a 10% chance of delivering each animal, and a 70% chance of delivering Nothing, or b=(.2,.02,.02,.76), which has a 20% chance of delivering a Koala, 76% chance of Nothing, and 2% chance for each of the Lemur and Macaw. Suppose there are three tickets, and the third is c=(0,.3,.04,.66).
By randomly spinning a wheel to determine which ticket we want to buy, we have access to any convex combination of the tickets. If half the wheel points to the a ticket, and the other half points to the b ticket, our final chance of getting any of the animals will be (.15,.06,.06,.73).
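The wheel-spinning mixture is just an elementwise average; a quick sketch with the ticket vectors above:

```python
# Tickets as probability vectors over (Koala, Lemur, Macaw, Nothing).
a = (0.1, 0.1, 0.1, 0.7)
b = (0.2, 0.02, 0.02, 0.76)

# A wheel split half/half between a and b yields the convex combination.
mix = tuple(round(0.5 * ai + 0.5 * bi, 2) for ai, bi in zip(a, b))
print(mix)  # (0.15, 0.06, 0.06, 0.73)
```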
Now, before we look at the tickets actually available to us, you and I each sit down separately and imagine four ‘ideal tickets’: (1,0,0,0), (0,1,0,0), (0,0,1,0), and (0,0,0,1). We can express our preferences for those as another vector: mine, V, would be, say, (3;2;1;0). (That means, for example, that I would be indifferent between a Lemur for sure and a half chance of a Koala or a Macaw, because 2=(1+3)/2.) This is a column vector, and we can multiply a*V to get .6, b*V to get .66, and c*V to get .64, which says that I would prefer the b ticket to the c ticket to the a ticket. The magnitude of V doesn’t matter, just the direction, and suppose we adjust it so that the least preferred outcome is always 0. I don’t know what W, your preference vector, is; it could be any four-vector with non-negative values.
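The scoring here is a dot product of the ticket vector with the preference vector; a sketch checking the three values above:

```python
a = (0.1, 0.1, 0.1, 0.7)
b = (0.2, 0.02, 0.02, 0.76)
c = (0.0, 0.3, 0.04, 0.66)
V = (3, 2, 1, 0)  # my utilities for the four 'ideal tickets'

def score(ticket, prefs):
    """Expected utility of a ticket: probability-weighted sum of outcome utilities."""
    return sum(p * u for p, u in zip(ticket, prefs))

print(round(score(a, V), 2), round(score(b, V), 2), round(score(c, V), 2))
# 0.6 0.66 0.64 -- so I prefer b to c to a
```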
Note that any real ticket can be seen as a convex combination of the ideal tickets. It’s a lottery, and so they won’t let us just walk up and buy a koala for the price of a ticket, but if they did that’d be my preferred outcome. Instead, I look at the real tickets for sale, right multiply them by my preference column vector, and pick one of the tickets with the highest value, which is the b ticket.
But, the pool of money is partly yours, too; you have some preference ordering W. Suppose it’s (2,4,0,1), and so a*W=1.3, b*W=1.24, and c*W=1.86, meaning you prefer c to a to b.
We can think of lots of different algorithms for determining which ticket (or convex combination of tickets) we end up buying. Suppose we want it to be consistent, i.e. there’s some preference vector J that describes our joint decision. Any algorithm that doesn’t depend on just your and my preference scores for the ticket being considered (suppose you wanted to scratch off our least favorite options until only one is left) will run into problems: how do you scratch off the infinite variety of convex combinations, and what happened to the probabilistic encoding of preferences? And any function that maps from (V,W) to J that isn’t a linear combination of V and W with nonnegative weights on V and W will introduce new preferences that we disagree with (assuming the combination was normed, or that you have an affine combination of V and W). Suppose we pick some v and w, such that J=vV+wW; if we pick v=1 and w=1 then J=(5,6,1,1), which shifts to (4,5,0,0) once the least preferred outcome is set to 0. Under that J, a and b have the same score, and c is the clear winner. Note that, regardless of v and w, c will always be preferred to a (c beats a under both V and W); the primary question is whether c or b is preferred, and a wide range of v and w would lead to c being picked.
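The joint scoring with v = w = 1 can be sketched the same way, showing the tie between a and b and the win for c:

```python
a = (0.1, 0.1, 0.1, 0.7)
b = (0.2, 0.02, 0.02, 0.76)
c = (0.0, 0.3, 0.04, 0.66)
V = (3, 2, 1, 0)  # my preferences
W = (2, 4, 0, 1)  # your preferences

v, w = 1, 1
J = tuple(v * vi + w * wi for vi, wi in zip(V, W))  # (5, 6, 1, 1)
J0 = tuple(j - min(J) for j in J)                   # shift least preferred to 0: (4, 5, 0, 0)

def score(ticket, prefs):
    return sum(p * u for p, u in zip(ticket, prefs))

for name, t in [("a", a), ("b", b), ("c", c)]:
    print(name, round(score(t, J0), 2))  # a 0.9, b 0.9, c 1.5
```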
So far, we should be in agreement, since we haven’t gotten to the issue that I think you’re discussing, which sounds like: this is all fine and dandy for a, b, and c, but:
1. What if we had some new set of tickets, d, e, and f? There’s no guarantee that we would still agree on the same v and w.
2. What if we had some new set of animals, Hippo, Ibis, and Jackal? There’s no guarantee that we would still agree on the same v and w.
I think that the ideal tickets suggest that concern 1 isn’t serious. We may not have measured v and w very carefully with the tickets we had before, since even a rough estimate is sufficient to pin down our ticket choice (unless we were close to the edge of a region where the choice flips), and we might be near such an edge now; but supposing that we measured v and w exactly, we should be able to apply J as before.
I think that concern 2 is slightly more serious, but I think it can be addressed.
First, we could have some constructive method of picking the weights. You and I, when deciding to pool our money to buy a lottery ticket, might have decided to normalize our preference functions some way and then combine them with weights relative to our financial contribution, or we might decide that your taste in animals is totally better than mine, and so v would be 0 and w 1, or we might decide that I’m better at arm wrestling, and v/w should be 5 after normalization. The outcomes don’t play into the weighting, and so we can be confident in the weights.
Second, we could find the weights with both lotteries in mind. Each lottery will give us an acceptable range for v/w; the two ranges should overlap, and so we can pick a value from the intersection that satisfies both. (Is the issue that you’re not sure they will overlap?)
Ok, I think what’s going on is that we have different ideas in mind about how two people make joint decisions. What I have in mind is something like the Nash bargaining solution (NBS) or the Kalai-Smorodinsky bargaining solution (KSBS) (both described in this post), for which the VNM-equivalent weights do change depending on the set of feasible outcomes. I have to read your comment more carefully and think over your suggestions, but I’m going to guess that there are situations where they do not work or do not make sense, otherwise the NBS and KSBS would not be “the two most popular ways of doing this”.
Note that, as expected, in all cases we only consider options on the Pareto frontier, and those bargaining solutions could be expressed as the choice made by a single agent with a normal utility function. You’re right that the weights which identify the chosen solution will vary based on the options available and the bargaining power of the individuals, and it’s worth reiterating that this theorem does not give you any guidance on how to pick the weights (besides saying they should be nonnegative). Think of it more as the argument that “If we needed to build an agent to select our joint choice for us and we can articulate our desires and settle on a mutually agreeable solution, then we can find weights for our utility functions such that the agent only needs to know a weighted sum of utility functions,” not the argument “If we needed to choose jointly and we can articulate our desires, then we can settle on a mutually agreeable solution.”
The NBS and KSBS are able to give some guidance on how to find a mutually agreeable solution because they have a disagreement point that they can use to get rid of the translational freedom, and thus they can get a theoretically neat result that does not depend on the relative scaling of the utility functions. Without that disagreement point (or something similar), there isn’t a theoretically neat way to do it.
In the example above, we could figure out each of our utilities for not paying our part for the ticket (and thus getting no chance to win), and decide what weights to put on based on that. But as the Pareto frontier shifts (as more tickets or more animals become available), our bargaining positions could easily shift. Suppose my utility for not buying in is .635, and your utility for not buying in is 1; I gain barely anything by buying a ticket (b gains me .025, c gains me .005), and you gain a lot (b gains you .24, and c gains you .86), and so I can use my indifference to making a deal to get my way (.025*.24>.005*.86).
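The comparison in that last parenthetical is a Nash-product calculation over the pure tickets; a sketch using the disagreement utilities supposed above (ticket a is omitted because my score for it, .6, is below my disagreement utility of .635):

```python
# Gains over the disagreement point (not buying in), for each pure ticket.
d_me, d_you = 0.635, 1.0
tickets = {"b": (0.66, 1.24), "c": (0.64, 1.86)}  # (my score, your score)

for name, (me, you) in tickets.items():
    product = (me - d_me) * (you - d_you)
    print(name, round(product, 4))  # b 0.006, c 0.0043 -- b maximizes the Nash product
```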
But then the Ibis becomes available, as well as a ticket that offers a decent chance to get it, and I desperately want to get an Ibis. My indifference evaporates, and with it my strong bargaining position.
In situations where social utility will be aggregated one way or another, we don’t really have a disagreement point d to get rid of our translational freedom. In cases where the disagreement point is something like “everybody dies”, it’s not clear we want our metaethics (i.e. how we choose the weights) to depend on how willing someone is to let everybody die rather than not get their way (the old utility monster complaint).
Think of it more as the argument that “If we needed to build an agent to select our joint choice for us and we can articulate our desires and settle on a mutually agreeable solution, then we can find weights for our utility functions such that the agent only needs to know a weighted sum of utility functions,”
I still disagree with this. I’ll restate/expand the argument that I made at the top of the previous thread. Suppose we want to use NBS or KSBS to make the joint choice. We could:
1. Compute the Pareto frontier, apply NBS/KSBS to find the mutually agreeable solution, use the slope of the tangent at that point to derive a set of weights, use those weights to form a linear aggregation of our utility functions, program the linear aggregation into a VNM AI, and have the VNM AI recompute the solution we already found and apply it; or
2. Input our utility functions into an AI separately, and program it to compute the Pareto frontier, apply NBS/KSBS to find the mutually agreeable solution, and directly apply that solution.
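A minimal sketch of option 2, using the numbers from the example above and allowing mixed tickets: grid-search the b–c segment of the Pareto frontier for the mixture maximizing the Nash product. The slope of that segment, (1.24 − 1.86)/(.66 − .64) = −31, is what option 1 would read off to hand weights v/w = 31 to a VNM agent. (The grid search stands in for a proper optimizer; everything else is from the example.)

```python
# Utility pairs (mine, yours) for pure tickets b and c, and the disagreement point.
b = (0.66, 1.24)
c = (0.64, 1.86)
d = (0.635, 1.0)

def nash_product(t):
    """Nash product of the mixture t*b + (1-t)*c, measured from the disagreement point."""
    me = t * b[0] + (1 - t) * c[0]
    you = t * b[1] + (1 - t) * c[1]
    return (me - d[0]) * (you - d[1])

# Option 2: search the frontier directly for the NBS.
best_t = max((i / 1000 for i in range(1001)), key=nash_product)
print(round(best_t, 2))  # roughly 0.57: with mixtures allowed, the NBS is between b and c

# Option 1 would instead derive weights from the frontier's slope at that point
# and have a VNM AI re-derive the same choice from the weighted-sum utility.
```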
It seems to me that in 1 you’re manually doing all of the work to make the actual decision outside of the VNM framework, and then tacking on a VNM AI at the end to do more redundant work. Why would you do that instead of 2?
You disagree with the statement that we can, or you disagree with the implication that we should?
Why would you do that instead of 2?
In practice, I don’t think you would need to. The point of the theorem is that you always can if you want to, and I’m not sure why this result is interesting to Nisan.
(Note also that this approach works for other metaethical approaches besides NBS/KSBS, and that you don’t always have access to NBS/KSBS.)
Yeah, I thought you meant to imply “should”. If we’re just talking about “can”, then I agree (with some caveats that aren’t very important at this point).