In your Romantic Dinner / Battlestar Galactica example, it’s not really clear what it means for one person to translate or scale their utility function. I’m going to take a newbie stab at it here, and please correct me if this is wrong:
Scaling: This is where the outcome matters more to one party than to the other party. For an extreme example, one person stands to gain between 0 and 10 minutes of extra life from the bargain, and the other person could gain between 0 and 500 years of extra life. And these are totally selfish people, who genuinely don’t care about each other’s utility; they just want to reach a bargain that optimizes their own utility. The total-utility-maximizing answer would favor the person who has the most to gain, and either person could turn this to their advantage by claiming to really, really care about the outcome. “I’ll literally die if we don’t watch Battlestar Galactica!”, or some such thing.
Translation: This just adds or subtracts a constant amount of utility to any person’s utility function. “I would love to watch BSG, but I’m so happy just being with you! ♥”. If one person will be disgruntled if they don’t (e.g.) have a romantic dinner, and the other person will be at least fairly cheerful either way, then this could influence the bargaining—and if so, it gives a selfish person a way to game the system, by acting unhappy when they don’t get their way.
The way to take advantage of people using the naive egalitarian or utility-sum-maximizing decision methods is to exaggerate how much you care about things, and sulk when you don’t get your way. Also known as “acting like a toddler”, which our society frowns on, probably for exactly this reason.
Is this reasonably accurate?
I just want to express my disagreement with the other two replies to this comment. Yes, in a vacuum, it’s true that scaling and translating utility functions doesn’t have any effect. But as soon as you start trying to compare them across individuals—which is exactly what we were doing in the relevant part of the post—it seems to me that scaling and translation behave just like this comment describes.
It doesn’t mean anything to translate or scale a utility function. A utility function is just a mathematical way to encode certain relative relationships between outcomes, and it turns out if you add 42 to every value of the function, it still encodes those relationships faithfully. Or if you multiply everything by 9000.
This is why comparing two individuals’ utility functions in any naive way makes no sense: their functions are encoding relative preference relationships, and those relationships are untouched if one of them multiplies everything by 9000. So any comparison that breaks when one of them multiplies everything by 9000, or adds 42 to everything, while the other doesn’t, isn’t actually making correct use of the underlying relative preference relationships.
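Here’s a quick sketch of what I mean, with made-up numbers and a naive “maximize the sum” rule standing in for any such comparison:

```python
# Made-up numbers: a naive "maximize the sum of utilities" rule changes its
# verdict when one person rescales their own utility function, even though
# neither person's preference ordering has changed.

outcomes = ["romantic dinner", "watch BSG"]

alice = {"romantic dinner": 3.0, "watch BSG": 1.0}   # Alice prefers dinner
bob   = {"romantic dinner": 1.0, "watch BSG": 2.0}   # Bob prefers BSG

def naive_sum_choice(u1, u2):
    return max(outcomes, key=lambda o: u1[o] + u2[o])

print(naive_sum_choice(alice, bob))        # -> romantic dinner

# Bob multiplies everything by 9000; his own preferences are untouched...
bob_scaled = {o: 9000 * u for o, u in bob.items()}

# ...but the naive comparison flips, so it was never really using the
# underlying preference relations correctly.
print(naive_sum_choice(alice, bob_scaled)) # -> watch BSG
```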
The scaling and translations don’t correspond to anything real. It’s simply that if I have a utility function that has value u1 in world w1 and u2 in world w2, and u1 > u2, then I prefer w1 to w2. However, if I add a constant c to my utility function, then the utility of world w1 becomes u1 + c and that of w2 becomes u2 + c: I still prefer world w1 to w2!
Similarly, if you generalise to all worlds, adding c doesn’t change my preferences at all: I will always have the same preferences as I did before. An agent with the same utility function, plus a constant, will always make the same decisions whatever the value of that constant. The same goes for multiplication by a (positive) scalar. Since these “affine transformations” don’t change your preferences, any reasonable system of bargaining should be indifferent to them.
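For concreteness, here’s a tiny sketch (world names and numbers are made up) of that invariance: an agent that picks the world with the highest utility picks the same world after any transformation a·u + b with a > 0.

```python
# Made-up worlds and numbers: choices are invariant under positive affine
# transformations of the agent's own utility function.

worlds = {"w1": 5.0, "w2": 2.0, "w3": 3.5}   # u(w) for each world

def best_world(utility):
    return max(utility, key=utility.get)

def affine(utility, a, b):
    assert a > 0, "only positive scaling preserves preferences"
    return {w: a * u + b for w, u in utility.items()}

print(best_world(worlds))                    # -> w1
print(best_world(affine(worlds, 9000, 42)))  # -> w1, the same decision
```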
The mathematical object in question is a pair of utility functions. We have to choose how we lift the concept of an affine transformation of a utility function to an affine transformation of a pair of utility functions.
One choice is to define an affine transformation of a pair of utility functions (u,v) as a pair of affine transformations (f,g) which take (u,v) to (fu, gv). With this choice we cannot compare the component utility functions within a pair.
Another choice is to define an affine transformation of a pair of utility functions (u,v) by applying a single transformation f to both components getting (fu, fv). This preserves comparisons within the pair.
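To make the difference concrete, here is a small sketch with toy numbers of my own (not from the post):

```python
# u and v are the two components of the pair.

u = {"A": 1.0, "B": 4.0}
v = {"A": 3.0, "B": 2.0}

def affine(utility, a, b):
    return {o: a * x + b for o, x in utility.items()}

# First lifting: an independent (f, g) for each component. Whether
# u(A) > v(A) now depends entirely on the arbitrary choice of (f, g).
u1, v1 = affine(u, 9000, 0), affine(v, 1, 42)
print(u["A"] > v["A"], u1["A"] > v1["A"])    # -> False True

# Second lifting: one transformation f applied to both components.
# Within-pair comparisons come out the same as before.
u2, v2 = affine(u, 9000, 42), affine(v, 9000, 42)
print(u["A"] > v["A"], u2["A"] > v2["A"])    # -> False False
```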
The key point is that our inability to do interpersonal comparisons of utility is a modeling assumption. It is something we put in to the analysis, not something that we get out of the analysis.
sketerpot is asking “why can’t we just compare the utilities?” and, in the same comment, noticing that there are problems with discovering utilities: what is to stop people exaggerating their utilities in order to game the bargaining system?
sketerpot’s comment pretty much nails the situation. Since permitting interpersonal comparisons of utility opens a huge can of worms, an important leg of the broader project is to say: let us assume that interpersonal comparison of utility is impossible, and press on with the analysis to find what solutions to the bargaining problem are available under this assumption.
Utility scaling/translation can mean something if you’re scaling them to normalize the average and standard deviation (or other spreading statistic) of reported marginal utilities in group decisions over time; see my comment above.
ETA: In case it’s not clear, I agree that a choice of scale for your utility function doesn’t mean anything by default, and you’re right to be pointing that out, because people make that mistaken assumption way too often. But if you scale it with a certain purpose in mind, like group decision making, utility can take on additional meaning.
An example of what I mean: if you and I have to make a series of binary decisions as a two-person team, we could each report, on each decision, what is the marginal utility of option 1 over option 2, using a scale of our own choosing. Reporting marginal utility eliminates the choice of translational constant, but we are still scaling our answers according to some arbitrary choice of unit. However, suppose we expect to make, say, 100 decisions per year. We can make a rule: the absolute values of the marginal utilities you report must add up to less than 1000. In other words, you should choose your units so the average absolute marginal utility you report is around 10, or slightly less. This will result in a certain balance in our decision-making procedure: you can’t claim to care more than me on every decision; we will end up having about the same amount of influence on the outcomes.
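A rough sketch of that rule follows. The 1000-point yearly budget and ~100 decisions are as described above; the concrete way of combining the two reports (option 1 wins if the pooled signed reports are positive) is my own guess at the simplest version.

```python
BUDGET = 1000      # total |marginal utility| each person may report per year
DECISIONS = 100    # expected decisions per year, so an average of ~10 each

class Teammate:
    def __init__(self, name):
        self.name = name
        self.spent = 0.0

    def report(self, marginal_utility):
        """Report the (signed) marginal utility of option 1 over option 2."""
        if self.spent + abs(marginal_utility) > BUDGET:
            raise ValueError(f"{self.name} has used up this year's budget")
        self.spent += abs(marginal_utility)
        return marginal_utility

def decide(report_a, report_b):
    # Option 1 wins if the pooled reports favour it; otherwise option 2.
    return "option 1" if report_a + report_b > 0 else "option 2"

you, me = Teammate("you"), Teammate("me")
print(decide(you.report(+15), me.report(-5)))   # -> option 1
```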
But again, this doesn’t mean numerical utilities are intrinsically comparable across individuals. The comparison depends on a choice of scale, a choice that can be tailored to differing purposes and hence give different meanings to the numerical utilities.
You got it: sharing decision utility is sharing power, not welfare.
A way to prevent people exaggerating how much they care about stuff is to mandate that, on average, people should care the same amount about everything. This is, very approximately, what I do with my close friends to make our mutual decisions quicker: so we don’t accidentally make large sacrifices for small benefits to the other, we each say, on a scale from 1-5, how much we care about each decision, and whoever cares more decides.
Example: “Where do you want to go for dinner? I only care 2.” “I care 3 (because I have a bad stomach). Let me decide.”
Over time, we’ve gotten faster, and just say the numbers unless an explanation is necessary. It’s a nice system :) I’d hate to think my friend would make large sacrifices for only small gains to me, and vice versa, so that’s what we do.
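In code the rule is about as simple as it sounds; here’s a minimal sketch (tie handling is my own assumption, in practice we just talk it out):

```python
def who_decides(my_care, friend_care):
    """Each of us says how much we care, 1-5; whoever cares more decides."""
    if not (1 <= my_care <= 5 and 1 <= friend_care <= 5):
        raise ValueError("care levels are on a 1-5 scale")
    if my_care > friend_care:
        return "me"
    if friend_care > my_care:
        return "friend"
    return "tie -- talk it over"

# "Where do you want to go for dinner? I only care 2." "I care 3."
print(who_decides(2, 3))   # -> friend
```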