Yeah, I like your “consensus spoiler”. Maybe needs a better name, though… “Contrarian Monster”?
having a cofrence of −1 for everyone.
This way of defining the Consensus Spoiler seems needlessly assumption-heavy, since it assumes not only that we can already compare utilities in order to define this perfect antagonism, but furthermore that we’ve decided how to deal with cofrences.
A similar option with a little less baggage is to define it as having the opposite of the preferences of our social choice function. They just hate whatever we end up choosing to represent the group’s preferences.
A simpler option is just to define the Contrarian Monster as having opposite preferences from one particular member of the collective. (Any member will do.) This ensures that there can be no Pareto improvements.
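For concreteness, here is a small check (just a sketch; the options and utility numbers are made up for illustration) that once one agent’s utilities are the exact negation of some member’s, no option is a Pareto improvement over any other:

```python
from itertools import permutations

# Illustrative utilities over three options. The Monster's utilities are
# the exact negation of member_0's.
options = ["x", "y", "z"]
member_0 = {"x": 3.0, "y": 1.0, "z": 2.0}
member_1 = {"x": 0.0, "y": 5.0, "z": 2.0}          # some other member of the collective
monster  = {o: -u for o, u in member_0.items()}    # opposite preferences to member_0

agents = [member_0, member_1, monster]

def pareto_improvement(a, b):
    """Moving from a to b is a Pareto improvement: nobody worse off, somebody better off."""
    return (all(agent[b] >= agent[a] for agent in agents)
            and any(agent[b] > agent[a] for agent in agents))

# Any move that helps member_0 hurts the Monster, and vice versa,
# so no pair of options admits a Pareto improvement.
print([(a, b) for a, b in permutations(options, 2) if pareto_improvement(a, b)])  # -> []
```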
If you have a community of 100 agents that would agree to pick some states over others, and you construct a new community of 101 by adding the Consensus Spoiler, then they can’t form any choice function.
Actually, the conclusion is that you can form any social choice function. Everything is “Pareto optimal”.
The question of whether it is warranted, allowed, or forbidden for the coalition of 100 to just proceed with the policy choice that screws the Spoiler over doesn’t seem to be a mathematical kind of claim.
If we think of it as bargaining to form a coalition, then there’s never any reason to include the Spoiler in a coalition (especially if you use the “opposite of whatever the coalition wants” version). In fact, there is a version of Harsanyi’s theorem which allows for negative weights, to allow for this—giving an ingroup/outgroup sort of thing. Usually this isn’t considered very seriously for definitions of utilitarianism. But it could be necessary in extreme cases.
(Although putting zero weight on it seems sufficient, really.)
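For concreteness, the weighted-sum picture being gestured at here is roughly the following (a sketch, assuming Harsanyi-style linear aggregation; the particular weights are only illustrative):

$$U_{\text{social}}(x) \;=\; \sum_i w_i \, u_i(x),$$

where the standard version requires $w_i \ge 0$ for every agent $i$. The negative-weight variant drops that constraint, so the Spoiler could get, say, $w_{\text{Spoiler}} = -1$, while “putting zero weight on it” is just $w_{\text{Spoiler}} = 0$.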
And even to a less extreme degree, I don’t get how you could use this setup to judge values that are in conflict. And if you encounter an unknown agent, it seems ambiguous whether you should take heed of its values in a compromise or just treat it as a possible enemy and stick to your personal choices.
Pareto-optimality doesn’t really give you the tools to mediate conflicts; it’s just an extremely weak condition on how you do so, which says essentially that we shouldn’t put negative weight on anyone.
Granted, the Consensus Spoiler is an argument that Pareto-optimality may not be weak enough, in extreme situations.
“Contrarian” is a good adjective for it. I don’t think it makes anyone suffer, so “monster” is only a reference to the utility monster, and calling the general class of conceptual “tripstones” monsters doesn’t seem the handiest.
If the particular member is ambivalent about something, then there might still be room to weakly Pareto-improve along that axis. The exact opposite of ambivalence is still ambivalence.
There is a slight circularity in that if the definition of what the agent wants rests on what the social choice is going to be, it can seem a bit unfair. If it can be “fixed in advance”, then the attempt to form a social choice function seems fairer. It also seems that if we can make a preference, then the preference in the other direction should be able to exist as well.

If there are more state pairs to have preferences over than there are agents, then a “Diagonal Opposer” could be constructed by pairing each agent with a state pair and taking the opposite preference on that pair. One conception would be the Public Enemy: no matter who else you are, you are enemies with this agent, in that you have at least one preference pointing in the opposite direction. There are many ways to construct a public enemy. And there might be public enemies that, one on one, are only slight enemies of each agent, but that conflict on more points with the social choice the other agents would have formed.

Say there are yes/no questions on A, B and C, and each of the other agents answers yes to two of them and no to one. Then a compromise of all-yes leaves every agent in 2⁄3 agreement. But a stance of all-no is in 3⁄3 disagreement with that compromise, despite being in only 2⁄3 disagreement with each individual agent.
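To check the counting in that example, here is a throwaway script (the particular agent profiles are just one way to instantiate “yes to two, no to one”):

```python
# Yes/no questions A, B, C. Each ordinary agent answers yes to two and no to one.
agents = [
    {"A": True,  "B": True,  "C": False},
    {"A": True,  "B": False, "C": True},
    {"A": False, "B": True,  "C": True},
]
compromise = {"A": True,  "B": True,  "C": True}    # all-yes
contrarian = {"A": False, "B": False, "C": False}   # all-no

def agreement(p, q):
    """Fraction of questions on which two stances agree."""
    return sum(p[k] == q[k] for k in p) / len(p)

# Each agent agrees with the all-yes compromise on 2/3 of the questions...
print([agreement(a, compromise) for a in agents])   # -> [0.666..., 0.666..., 0.666...]
# ...and the all-no contrarian agrees with each individual agent on 1/3...
print([agreement(a, contrarian) for a in agents])   # -> [0.333..., 0.333..., 0.333...]
# ...but agrees with the compromise itself on 0/3.
print(agreement(compromise, contrarian))            # -> 0.0
```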
I thought that the end result is that, since any change would fail to be a Pareto improvement, the function can’t recommend any change, so it must be completely ambivalent about everything, i.e. the constant function assigning every option utility 0.
Pareto-optimality says that if there is a mass murderer who wants to kill as many people as possible, then you should not make a choice that lessens the number of people killed, i.e. you should not oppose the mass murderer.
Ah, I should have made it clearer that it’s a one-way implication: if something is a Pareto improvement, then the social choice function is supposed to prefer it. Not the other way around.
A social choice function meeting that minimal requirement can still do lots of other things. So it could still oppose a mass murderer, so long as mass-murder is not itself a Pareto improvement.
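As a sanity check on that (a toy sketch; the agents and utility numbers are invented), the move the murderer wants fails the Pareto test, so the one-way condition imposes nothing here and the social choice function remains free to oppose it:

```python
# Two outcomes. The murderer prefers more people killed; everyone else prefers fewer.
murderer = {"fewer_killed": 0.0, "more_killed": 10.0}
victim_1 = {"fewer_killed": 5.0, "more_killed": -100.0}
victim_2 = {"fewer_killed": 5.0, "more_killed": -100.0}
agents = [murderer, victim_1, victim_2]

def pareto_improvement(a, b):
    """Moving from a to b is a Pareto improvement: nobody worse off, somebody better off."""
    return (all(agent[b] >= agent[a] for agent in agents)
            and any(agent[b] > agent[a] for agent in agents))

# "more_killed" is not a Pareto improvement over "fewer_killed" (the victims are worse off),
# so a Pareto-respecting social choice function is not required to prefer it.
print(pareto_improvement("fewer_killed", "more_killed"))  # -> False
```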