This could (should?) also make you suspicious of talk of “average utilitarianism” and “total utilitarianism”. However, beware: only one kind of “utilitarianism” holds that the term “utility” in decision theory means the same thing as “utility” in ethics: namely, preference utilitarianism.
Ok, I’m suspicious of preference utilitarianism which requires aggregation across entities. And suspicious of other kinds because they mean something else by “utility”. Then you show that there are aggregate functions that have some convenient properties. But why does that resolve my suspicion?
What makes any of these social choice functions any more valid than any other assumption about other people’s utility transformations? The Pareto-optimal part is fine, as they are compatible with all transformations—they work for ordinal incommensurate preferences. So they’re trivial and boring. But once you talk about bargaining and “the relative value of one person’s suffering vs another person’s convenience”, you’re back on shaky ground.
The incomparability of utility functions doesn’t mean we can’t trade off between the utilities of different people.
We can prefer whatever we want; we can make all sorts of unjustified comparisons. But it DOES MEAN that we can’t claim to be justified in violating someone’s preferences just because we picked an aggregation function that says so.
We just need more information. … we need more assumptions …
I think it’s _far_ more the second than the first. There is no available information that makes these comparisons/aggregations possible. We can make assumptions and do it, but I wish you’d be more explicit about what the minimal assumption set required is, and provide some justification for the assumptions (other than “it enables us to aggregate in ways that I like”).
Ok, I’m suspicious of preference utilitarianism which requires aggregation across entities. And suspicious of other kinds because they mean something else by “utility”. Then you show that there are aggregate functions that have some convenient properties. But why does that resolve my suspicion?
I think the Pareto-optimality part is really where this gets off the ground.
Let’s say,
You’re altruistic enough to prefer Pareto improvements with respect to everyone’s preferences.
You want to make choices in a way that respects the VNM axioms.
Then we must be able to interpret your decisions as those of a preference utilitarian who has chosen some specific way to add up everyone’s utility functions (IE, has determined multiplicative constants whereby to trade off between people).
After that, it’s “just” a question of setting the constants. (And breaking ties, as in the dollar-splitting example, which illustrated how the Harsanyi perspective isn’t very useful.)
So once you’re on board with the Pareto-improvement part, you have to start rejecting axioms of individual rationality in order to avoid becoming a preference-utilitarian.
What makes any of these social choice functions any more valid than any other assumption about other people’s utility transformations? The Pareto-optimal part is fine, as they are compatible with all transformations—they work for ordinal incommensurate preferences. So they’re trivial and boring. But once you talk about bargaining and “the relative value of one person’s suffering vs another person’s convenience”, you’re back on shaky ground.
For example, if you refuse to trade off between people’s ordinal incommensurate preferences, then you just end up refusing to have an opinion when you try to choose between charity A which saves a few lives in Argentina vs charity B which saves many lives in Brazil. (You can’t calculate an expected utility, since you can’t compare the lives of different people.) So you can end up in a situation where you do nothing, even though you strictly prefer to put your money in either one charity or the other, because your principles refuse to make a comparison, so you can’t choose between the two.
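As a toy sketch of that charity example (my own illustration; the per-life utilities and the trade-off constants below are made up, not anything from the post), once you commit to some constants the comparison becomes an ordinary calculation, and without any constants the two scores are simply incomparable:

```python
# Toy sketch: with chosen multiplicative constants, the charity comparison
# is a straightforward expected-utility calculation. All numbers are invented.

lives_saved_A = 3    # charity A: a few lives in Argentina
lives_saved_B = 10   # charity B: many lives in Brazil

utility_per_life_A = 1.0   # each person's utility for being saved, on their own scale
utility_per_life_B = 1.0

weight_A = 1.0   # the "constants": how much weight the donor's aggregation gives each group
weight_B = 1.0

score_A = weight_A * utility_per_life_A * lives_saved_A   # 3.0
score_B = weight_B * utility_per_life_B * lives_saved_B   # 10.0

print("A:", score_A, "B:", score_B)
# With *some* choice of constants you can choose between A and B; with no
# constants at all, score_A and score_B are incomparable and you stay stuck.
```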
altruistic enough to prefer Pareto improvements with respect to everyone’s preferences.
Wait, what? Altruism has nothing to do with it. Everyone is supportive of (or indifferent to) any given Pareto improvement because it increases (or at least does not reduce) their utility. Pareto improvements provide no help in comparing utility because they are cases where there is no conflict among utility functions. Every multiplicative or additive transform across utility functions remains valid for Pareto improvements.
For example, if you refuse to trade off between people’s ordinal incommensurate preferences, then you just end up refusing to have an opinion when you try to choose between charity A which saves a few lives in Argentina vs charity B which saves many lives in Brazil.
I don’t refuse to have an opinion, I only refuse to claim that it’s anything but my preferences which form that opinion. My opinion is about my (projected) utility from the saved or unsaved lives. That _may_ include my perception of their satisfaction (or whatever observable property I choose), but it does not have any access to their actual preference or utility.
Wait, what? Altruism has nothing to do with it. Everyone is supportive of (or indifferent to) any given Pareto improvement because it increases (or at least does not reduce) their utility.
I grant that this is not very altruistic at all, but it is possible to be even less altruistic: I could only support Pareto improvements which I benefit from. This is sorta the default.
The Pareto-optimality assumption isn’t that you’re “just OK” with Pareto-improvements, in a ≥ sense. The assumption is that you prefer them, ie, >.
I don’t refuse to have an opinion, I only refuse to claim that my opinion is about their preferences/utility.
If you accept the Pareto-optimality assumption, and you accept the rationality assumptions with respect to your choices, then by Harsanyi’s theorem you’ve gotta make an implicit trade-off between other people’s preferences.
My opinion is about my (projected) utility from the saved or unsaved lives.
So you’ve got some way to trade off between saving different lives.
That _may_ include my perception of their satisfaction (or whatever observable property I choose), but it does not have any access to their actual preference or utility.
It sounds like your objection here is “I don’t have any access to their actual preferences”.
I agree that the formal model assumes access to the preferences. But I don’t think a preference utilitarian needs access. You can be coherently trying to respect other people’s preferences without knowing exactly what they are. You can assent to the concept of Pareto improvements as an idealized decision theory which you aspire to approximate. I think this can be a very fruitful way of thinking, even though it’s good to also track reality as distinct from the idealization. (We already have to make such idealizations to think “utility” is relevant to our decision-making at all.)
The point of the Harsanyi argument is that if you assent to Pareto improvements as something to aspire to, and also assent to VNM as something to aspire to, then you must assent to a version of preference utilitarianism as something to aspire to.
The Pareto-optimality assumption isn’t that you’re “just OK” with Pareto-improvements, in a ≥ sense. The assumption is that you prefer them, ie, >.
That’s not what Pareto-optimality asserts. It only talks about >= for all participants individually. If you’re making assumptions about altruism, you should be clearer that it’s an arbitrary aggregation function that is being increased.
And then, Pareto-optimality is a red herring. I don’t know of any aggregation functions that would change a 0 to a + for a Pareto-optimal change, and would not give a + to some non-Pareto-optimal changes, which violate other agents’ preferences.
My primary objection is that any given aggregation function is itself merely a preference held by the evaluator. There is no reason to believe that there is a justifiable-to-assume-in-others or automatically-agreeable aggregation function.
if you assent to Pareto improvements as something to aspire to
This may be the crux. I do not assent to that. I don’t even think it’s common. Pareto improvements are fine, and some of them actually improve my situation, so go for it! But in the wider sense, there are lots of non-Pareto changes that I’d pick over a Pareto subset of those changes. Pareto is a min-bar for agreement, not an optimum for any actual aggregation function.
I should probably state what function I actually use (as far as I can tell). I do not claim universality, and in fact, it’s indexed based on non-replicable factors like my level of empathy for someone. I do not include their preferences (because I have no access). I don’t even include my prediction of their preferences. I DO include my preferences for what (according to my beliefs) they SHOULD prefer, which in a lot of cases correlates closely enough with their actual preferences that I can pass as an altruist. I then weight my evaluation of those imputed preferences by something like an inverse-square relationship of “empathetic distance”. People closer to me (including depth and concreteness of my model for them, how much I like them, and likely many other factors I can’t articulate), including imaginary and future people whom I feel close to, get weighted much, much higher than more distant or statistical people.
I repeat—this is not normative. I deny that there exists a function which everyone “should” use. This is merely a description of what I seem to do.
From Wikipedia: “Given an initial situation, a Pareto improvement is a new situation where some agents will gain, and no agents will lose.”
So a pareto improvement is a move that is > for at least one agent, and >= for the rest.
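In code, that definition is just a sign check on each agent’s own utility change, so it needs no interpersonal comparison at all (a minimal sketch; representing outcomes as per-agent utility vectors is my own framing):

```python
def is_pareto_improvement(before, after):
    """A move is a Pareto improvement if it is >= for every agent
    (no one loses) and > for at least one agent (someone gains)."""
    assert len(before) == len(after)
    no_one_loses  = all(a >= b for b, a in zip(before, after))
    someone_gains = any(a > b for b, a in zip(before, after))
    return no_one_loses and someone_gains

print(is_pareto_improvement([0, 0], [1, 0]))   # True: one gains, none lose
print(is_pareto_improvement([0, 0], [2, -1]))  # False: a trade-off, not Pareto
```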
If you’re making assumptions about altruism, you should be clearer that it’s an arbitrary aggregation function that is being increased.
I stated that the setup is to consider a social choice function (a way of making decisions which would “respect everyone’s preferences” in the sense of regarding pareto improvements as strict preferences, ie, >-type preferences).
Perhaps I didn’t make clear that the social choice function should regard Pareto improvements as strict preferences. But this is the only way to ensure that you prefer the Pareto improvement and not the opposite change (which only makes things worse).
And then, Pareto-optimality is a red herring. I don’t know of any aggregation functions that would change a 0 to a + for a Pareto-optimal change, and would not give a + to some non-Pareto-optimal changes, which violate other agents’ preferences.
Exactly. That’s, like, basically the point of the Harsanyi theorem right there. If your social choice function respects Pareto optimality and rationality, then it’s forced to also make some trade-offs—IE, give a + to some non-Pareto changes.
(Unless you’re in a degenerate case, EG, everyone already has the same preferences.)
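A toy numerical sketch of that forced trade-off (my illustration; the weights are arbitrary): any strictly-positive weighted sum gives a + to every Pareto improvement, and it also gives a + to some changes that make one person worse off.

```python
weights = (1.0, 1.0)   # arbitrary strictly-positive trade-off constants

def social_gain(delta_utilities):
    """Change in the weighted-sum social score for per-agent utility changes."""
    return sum(w * d for w, d in zip(weights, delta_utilities))

print(social_gain((+1.0, 0.0)))    # 1.0 -> Pareto improvement gets a strict +
print(social_gain((+2.0, -1.0)))   # 1.0 -> non-Pareto trade-off also gets a +
```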
I feel as if you’re denying my argument by… making my argument.
My primary objection is that any given aggregation function is itself merely a preference held by the evaluator. There is no reason to believe that there is a justifiable-to-assume-in-others or automatically-agreeable aggregation function.
I don’t believe I ever said anything about justifying it to others.
I think one possible view is that every altruist could have their own personal aggregation function.
There’s still a question of which aggregation function to choose, what properties you might want it to have, etc.
But then, many people might find the same considerations persuasive. So I see nothing against people working together to figure out what “the right aggregation function” is, either.
This may be the crux. I do not assent to that. I don’t even think it’s common.
OK! So that’s just saying that you’re not interested in the whole setup. That’s not contrary to what I’m trying to say here—I’m just trying to say that if an agent satisfies the minimal altruism assumption of preferring Pareto improvements, then all the rest follows.
If you’re not at all interested in the utilitarian project, that’s fine, other people can be interested.
Pareto improvements are fine, and some of them actually improve my situation, so go for it! But in the wider sense, there are lots of non-Pareto changes that I’d pick over a Pareto subset of those changes.
Again, though, now it just seems like you’re stating my argument.
Weren’t you just criticizing the kind of aggregation I discussed for assenting to Pareto improvements but inevitably assenting to non-Pareto-improvements as well?
Pareto is a min-bar for agreement, not an optimum for any actual aggregation function.
My section on Pareto is literally titled “Pareto-Optimality: The Minimal Standard”
I’m feeling a bit of “are you trolling me” here.
You’ve both denied and asserted both the premises and the conclusion of the argument. All in the same single comment.
Me too! I’m having trouble seeing how that version of the pareto-preference assumption isn’t already assuming what you’re trying to show, that there is a universally-usable social aggregation function. Or maybe I misunderstand what you’re trying to show—are you claiming that there is an aggregation function (or a family of them) that is privileged and should be used for Utilitarian/Altruistic purposes?
So a pareto improvement is a move that is > for at least one agent, and >= for the rest.
Agreed so far. And now we have to specify which agent’s preferences we’re talking about when we say “support”. If it’s > for the agent in question, they clearly support it. If it’s =, they don’t oppose it, but don’t necessarily support it.
The assumption I missed was that there are people who claim that a change is = for them, but also they support it. I think that’s a confusing use of “preferences”. If it’s =, that strongly implies neutrality (really, by definition of preference utility), and “active support” strongly implies > (again, that’s the definition of preference). I still think I’m missing an important assumption here, and that’s causing us to talk past each other.
When I say “Pareto optimality is min-bar for agreement”, I’m making a distinction between literal consensus, where all agents actually agree to a change, and assumed improvement, where an agent makes a unilateral (or population-subset) decision, and justifies it based on their preferred aggregation function. Pareto optimality tells us something about agreement. It tells us nothing about applicability of any possible aggregation function.
In my mind, we hit the same comparability problem for Pareto vs non-Pareto changes. Pareto-optimal improvements, which require zero interpersonal utility comparisons (only the sign matters, not the magnitude, of each affected entity’s preference), teach us nothing about actual tradeoffs, where a function must weigh the magnitudes of multiple entities’ preferences against each other.
Me too! I’m having trouble seeing how that version of the pareto-preference assumption isn’t already assuming what you’re trying to show, that there is a universally-usable social aggregation function.
I’m not sure what you meant by “universally usable”, but I don’t really argue anything about existence, only what it has to look like if it exists. It’s easy enough to show existence, though; just take some arbitrary sum over utility functions.
Or maybe I misunderstand what you’re trying to show—are you claiming that there is an aggregation function (or a family of them) that is privileged and should be used for Utilitarian/Altruistic purposes?
Yep, at least in some sense. (Not sure how “privileged” they are in your eyes!) What the Harsanyi Utilitarianism Theorem shows is that linear aggregations are just such a distinguished class.
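To put that in symbols (my notation, stating the standard form of the conclusion rather than anything new), the claim is that the social ranking must be representable as a weighted sum of the individual utility functions:

V(x) = w_1 * U_1(x) + ... + w_n * U_n(x) + c, with each w_i > 0,

where the U_i are the individuals’ VNM utility functions, the w_i are the trade-off constants, and V is only pinned down up to the usual positive affine rescaling. Nothing in the theorem singles out one particular choice of the w_i; the “distinguished class” is the family of such weighted sums.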
And now we have to specify which agent’s preferences we’re talking about when we say “support”.
[...]
The assumption I missed was that there are people who claim that a change is = for them, but also they support it. I think that’s a confusing use of “preferences”.
That’s why, in the post, I moved to talking about “a social choice function”—to avert that confusion.
So we have people, who are what we define Pareto-improvement over, and then we have the social choice function, which we suppose must strictly prefer (>) every Pareto improvement.
Then we prove that the social choice function must act like it prefers some weighted sum of the people’s utility functions.
But this really is just to avert a confusion. If we get someone to assent to both VNM and strict preference of Pareto improvements, then we can go back and say “by the way, the social choice function was secretly you” because that person meets the conditions of the argument.
There’s no contradiction because we’re not secretly trying to sneak in a =/> shift; the person has to already prefer for Pareto improvements to happen.
If it’s > for the agent in question, they clearly support it. If it’s =, they don’t oppose it, but don’t necessarily support it.
Right, so, if we’re applying this argument to a person rather than just some social choice function, then it has to be > in all cases.
If you imagine that you’re trying to use this argument to convince someone to be utilitarian, this is the step where you’re like “if it doesn’t make any difference to you, but it’s better for them, then wouldn’t you prefer it to happen?”
Yes, it’s trivially true that if it’s = for them then it must not be >. But humans aren’t perfectly reflectively consistent. So, what this argument step is trying to do is engage with the person’s intuitions about their preferences. Do they prefer to make a move that’s (at worst) costless to them and which is beneficial to someone else? If yes, then they can be engaged with the rest of the argument.
To put it a different way: yes, we can’t just assume that an agent strictly prefers for all Pareto-improvements to happen. But, we also can’t just assume that they don’t, and dismiss the argument on those grounds. That agent should figure out for itself whether it has a strict preference in favor of Pareto improvements.
When I say “Pareto optimality is min-bar for agreement”, I’m making a distinction between literal consensus, where all agents actually agree to a change, and assumed improvement, where an agent makes a unilateral (or population-subset) decision, and justifies it based on their preferred aggregation function. Pareto optimality tells us something about agreement. It tells us nothing about applicability of any possible aggregation function.
Ah, ok. I mean, that makes perfect sense to me and I agree. In this language, the idea of the Pareto assumption is that an aggregation function should at least prefer things which everyone agrees about, whatever else it may do.
In my mind, we hit the same comparability problem for Pareto vs non-Pareto changes. Pareto-optimal improvements, which require zero interpersonal utility comparisons (only the sign matters, not the magnitude, of each affected entity’s preference), teach us nothing about actual tradeoffs, where a function must weigh the magnitudes of multiple entities’ preferences against each other.
The point of the Harsanyi theorem is sort of that they say surprisingly much, particularly when coupled with a VNM rationality assumption.
I follow a bit more, but I still feel we’ve missed a step in stating whether it’s “a social choice function, which each agent has as part of its preference set”, or “the social choice function, shared across agents somehow”. I think we’re agreed that there are tons of rational social choice functions, and perhaps we’re agreed that there’s no reason to expect different individuals to have the same weights for the same not-me actors.
I’m not sure I follow that it has to be linear—I suspect higher-order polynomials will work just as well. Even if linear, there are a very wide range of transformation matrices that can be reasonably chosen, all of which are compatible with not blocking Pareto improvements and still not agreeing on most tradeoffs.
If you imagine that you’re trying to use this argument to convince someone to be utilitarian, this is the step where you’re like “if it doesn’t make any difference to you, but it’s better for them, then wouldn’t you prefer it to happen?”
Now I’m lost again. “you should have a preference over something where you have no preference” is nonsense, isn’t it? Either the someone in question has a utility function which includes terms for (their beliefs about) other agents’ preferences (that is, they have a social choice function as part of their preferences), in which case the change will ALREADY BE positive for their utility, or that’s already factored in and that’s why it nets to neutral for the agent, and the argument is moot. In either case, the fact that it’s a Pareto improvement is irrelevant—they will ALSO be positive about some tradeoff cases, where their chosen aggregation function ends up positive. There is no social aggregation function that turns a neutral into a positive for Pareto choices, and fails to turn a non-Pareto case into a positive.
To me, the premise seems off—I suspect the target of the argument doesn’t understand what “neutral” means in this discussion, and isn’t correctly identifying a preference for Pareto options. Or perhaps they prefer them for their beauty and simplicity, and that doesn’t extend to other decisions.
If you’re just saying “people don’t understand their own utility functions very well, and this is an intuition pump to help them see this aspect”, that’s fine, but “theorem” implies something deeper than that.
I’m not sure I follow that it has to be linear—I suspect higher-order polynomials will work just as well. Even if linear, there are a very wide range of transformation matrices that can be reasonably chosen, all of which are compatible with not blocking Pareto improvements and still not agreeing on most tradeoffs.
Well, I haven’t actually given the argument that it has to be linear. I’ve just asserted that there is one, referencing Harsanyi and complete class arguments. There are a variety of related arguments. And these arguments have some assumptions which I haven’t been emphasizing in our discussion.
Here’s a pretty strong argument (with correspondingly strong assumptions).
Suppose each individual is VNM-rational.
Suppose the social choice function is VNM-rational.
Suppose that we also can use mixed actions, randomizing in a way which is independent of everything else.
Suppose that the social choice function has a strict preference for every Pareto improvement.
Also suppose that the social choice function is indifferent between two different actions if every single individual is indifferent.
Also suppose the situation gives a nontrivial choice with respect to every individual; that is, no one is indifferent between all the options.
By VNM, each individual’s preferences can be represented by a utility function, as can the preferences of the social choice function.
Imagine actions as points in preference-space, an n-dimensional space where n is the number of individuals.
By assumption #5, actions which map to the same point in preference-space must be treated the same by the social choice function. So we can now imagine the social choice function as a map from R^n to R.
VNM on individuals implies that the mixed action p * a1 + (1-p) * a2 maps to the point that lies p of the way along the line from a2’s point to a1’s point in preference-space.
VNM implies that the value the social choice function places on mixed actions is just a linear mixture of the values of pure actions. But this means the social choice function can be seen as an affine function from R^n to R. Of course since utility functions don’t mind additive constants, we can subtract the value at the origin to get a linear function.
But remember that points in this space are just vectors of individuals’ utilities for an action. So that means the social choice function can be represented as a linear function of individuals’ utilities.
So now we’ve got a linear function. But I haven’t used the Pareto assumption yet! That assumption (#4), together with #6, implies that the linear function has to be increasing in every individual’s utility function.
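A small numerical check of the affine step, and of why “higher-order polynomials” can’t play the same role (my own sketch; the specific weights and tolerance are arbitrary): VNM on the social choice function requires the value of a mixture to equal the mixture of the values, which a weighted sum satisfies exactly and a quadratic aggregation does not.

```python
import random

def mix(p, a1, a2):
    """The mixed action p*a1 + (1-p)*a2, viewed as a point in preference-space
    (a vector of the individuals' utilities for the action)."""
    return [p * x + (1 - p) * y for x, y in zip(a1, a2)]

def linear_V(u):      # an arbitrary strictly-positive weighted sum
    return 2.0 * u[0] + 3.0 * u[1]

def quadratic_V(u):   # a "higher-order polynomial" aggregation
    return u[0] ** 2 + u[1] ** 2

random.seed(0)
for V, name in [(linear_V, "weighted sum"), (quadratic_V, "quadratic")]:
    ok = True
    for _ in range(1000):
        a1 = [random.uniform(-1, 1), random.uniform(-1, 1)]
        a2 = [random.uniform(-1, 1), random.uniform(-1, 1)]
        p = random.random()
        # VNM on the social choice function: V(mixture) must equal
        # the probability-weighted mixture of V on the pure actions.
        if abs(V(mix(p, a1, a2)) - (p * V(a1) + (1 - p) * V(a2))) > 1e-9:
            ok = False
            break
    print(name, "respects the mixture condition:", ok)
# Expected: "weighted sum respects the mixture condition: True"
#           "quadratic respects the mixture condition: False"
```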
Now I’m lost again. “you should have a preference over something where you have no preference” is nonsense, isn’t it? Either the someone in question has a utility function which includes terms for (their beliefs about) other agents’ preferences (that is, they have a social choice function as part of their preferences), in which case the change will ALREADY BE positive for their utility, or that’s already factored in and that’s why it nets to neutral for the agent, and the argument is moot.
[...]
If you’re just saying “people don’t understand their own utility functions very well, and this is an intuition pump to help them see this aspect”, that’s fine, but “theorem” implies something deeper than that.
Indeed, that’s what I’m saying. I’m trying to separately explain the formal argument, which assumes the social choice function (or individual) is already on board with Pareto improvements, and the informal argument to try to get someone to accept some form of preference utilitarianism, in which you might point out that Pareto improvements benefit others at no cost (a contradictory and pointless argument if the person already has fully consistent preferences, but an argument which might realistically sway somebody from believing that they can be indifferent about a Pareto improvement to believing that they have a strict preference in favor of them).
But the informal argument relies on the formal argument.
Ah, I think I understand better—I was assuming a much stronger statement of which social choice function is rational for everyone to have, rather than just that there exists a (very large) set of social choice functions, and it is rational for an agent to have any of them, even if it massively differs from other agents’ functions.
Thanks for taking the time down this rabbit hole to clarify it for me.