There’s another way in which Arrow’s theorem was an important foundation, particularly for rationalists: Arrow was explicitly thinking about voting methods not just as real-world ways of electing politicians, but as theoretical possibilities for reconciling values. In this more philosophical sense, Arrow’s theorem says something depressing about morality: if morality is to be based on (potentially revealed) preferences rather than interpersonal comparisons of (subjective) utilities, it cannot simply be a democratic matter; “the greatest good for the greatest number” doesn’t work without inherently subjective comparisons of goodness. Amartya Sen continued exploring the philosophical implications of voting theory, showing in his “liberal paradox” that even a minimal notion of private autonomy is incompatible with Pareto efficiency.
I haven’t really been able to understand this from a philosophical perspective.
Representation theorems like von Neumann–Morgenstern (VNM), Savage, Jeffrey–Bolker, etc. say that the preference view is equivalent to the subjective-utility view.
Harsanyi says that utility maximization and Pareto efficiency are interchangeable concepts.
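(As I understand it, the formal core of Harsanyi’s aggregation theorem is roughly: if each individual utility $U_i$ is a VNM utility function and the social ordering is itself VNM-rational and respects Pareto, then the social utility must be a weighted sum of the individual ones,

$$U_{\text{social}}(x) \;=\; \sum_{i=1}^{n} w_i \, U_i(x), \qquad w_i \ge 0.)$$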
Yet, you say that if morality is to be based on preferences, rather than utilities, we cannot define “the greatest good for the greatest number”.
How can this be, if the views are equivalent?
Ah, right, here’s the rub:
A generalization of Gibbard–Satterthwaite is Gibbard’s theorem, which says that, ranked or not, no mechanism can simultaneously:
- be non-dictatorial,
- choose between more than two options, and
- be strategy-proof.
So the crux of the issue isn’t ordinal vs. cardinal (preferences vs. utilities, ranked vs. scored). Rather, the crux is strategy-proofness: Arrow and related theorems are about the difficulty of strategy-proof implementations of the utilitarian ideal.
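A minimal sketch of what that failure looks like, with hypothetical numbers, using score voting as a stand-in for any cardinal “utilitarian” rule:

```python
# Toy demonstration (hypothetical numbers) that a cardinal, "utilitarian"
# voting rule is not strategy-proof: a bloc flips the outcome by
# exaggerating its reported scores.

def score_winner(ballots):
    """Each ballot maps option -> score; the highest total score wins."""
    totals = {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0) + score
    return max(totals, key=totals.get)

# Honest ballots: 3 voters mildly prefer A, 2 voters mildly prefer B.
honest = [{"A": 6, "B": 4}] * 3 + [{"A": 4, "B": 6}] * 2
print(score_winner(honest))     # -> 'A'  (A: 26, B: 24)

# The B bloc exaggerates, reporting the most extreme scores possible.
strategic = [{"A": 6, "B": 4}] * 3 + [{"A": 0, "B": 10}] * 2
print(score_winner(strategic))  # -> 'B'  (A: 18, B: 32)
```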
For a trivial example, see any discussion about utility monsters.
Say more about the relevance?
Basically, a utility monster is a person or group that derives (or claims to derive) orders of magnitude more utility from some activity than everyone else, so much that under utilitarian aggregation it cancels out the rest of the population’s preferences.
An example: Confederate slave owners derived far more utility from owning slaves than non-slave-owners did from the status quo. So if there were an election over whether slavery should be illegal, the slave owners would have a strategy available: turn into utility monsters, i.e., report utilities that are orders of magnitude more extreme. Even at 1,000 slave owners to 1,000,000 non-slave-owners, there would still be a way for the slave owners to win using that strategy.
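To make the arithmetic of that strategy concrete (all numbers hypothetical):

```python
# Toy utility-monster arithmetic (hypothetical numbers): a small group
# wins a utilitarian aggregation by reporting extreme utilities.
# Positive totals mean the minority's preferred option wins.

minority, majority = 1_000, 1_000_000

# Honest reports: everyone cares a comparable amount (utility 1 each).
honest_total = minority * 1 - majority * 1
print(honest_total)     # -> -999000: the majority's preference prevails

# Strategic reports: the minority claims utilities 10,000x larger.
strategic_total = minority * 10_000 - majority * 1
print(strategic_total)  # -> 9000000: the minority now "wins" the sum
```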
While this could be a solid example if argued better, in my view this description is somewhat badly argued, and the example is one that is extremely important to get right. Several turns of phrase here are, I think, false under standard definitions of the words, despite being standard turns of phrase. E.g., it is in my view not possible under <?natural law?> to own another being, so the “ownership” in the law of the enforcers of the time was misleading phrasing, and that “ownership” should not be endorsed today, since we have neared consensus that they were in the wrong (though some people, obviously in my view terrible people, endorse the continuation of slavery where it still exists, such as in the US prison system or various mining groups in Africa). That said, it’s also not the end of the world to be wrong about this on a first attempt, coming from someone who has spent less than decades thinking heavily about it; such is the nature of discussing high-sensitivity things, one is wrong about them as often as, or more often than, low-sensitivity things. But I would encourage rephrasing to encode the concepts you wouldn’t want a reader to miss, since in my view it’s worth posting commentary on everything to clarify convergence toward prosocial policy.
IMO this is a good time to bring in a lens besides hedonic utilitarianism’s as an additional comparison. E.g., a utility monster, in preference-utilitarian math, is defined by, and could only be implemented by, a hyperoptimizer, i.e., a being willing to push something very far into a target configuration. In this case, the slavers optimized the shape of their own lives by enslaving people, implementing their own preferences at incredible cost to those they trapped. Trapping people and forcing them to work let the enslavers make the universe itself devalue others’ preferences: the enslaved people could not implement their preferences, so by any measurement of revealed preference that failed to understand the intent behind attempts to escape the trap, the enslaved people appeared to “accept” their conditions. A human wouldn’t necessarily make that mistake, but this is an important case to get right for any “discovering agency”-like algorithm, since the enslaved people’s efforts were weakened by the enslavement itself.
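A minimal toy model of that failure mode (names and numbers hypothetical): a revealed-preference inference that doesn’t condition on which actions were actually feasible will read coerced behavior as endorsement.

```python
# Toy model (hypothetical) of the revealed-preference failure mode above:
# an agent whose preferred action is forcibly blocked looks, to a naive
# observer, as if it prefers its current situation.

true_utility = {"escape": 10.0, "stay": 0.0}

def feasible_actions(utility, coerced):
    """Under coercion, the high-utility action is forcibly removed."""
    return [a for a in utility if not (coerced and a == "escape")]

def choose(utility, coerced):
    """The agent picks the best action among those actually available."""
    return max(feasible_actions(utility, coerced), key=utility.get)

observed_action = choose(true_utility, coerced=True)
print(observed_action)  # -> 'stay'

# A naive revealed-preference observer, ignoring the constraint, concludes
# "the agent chose 'stay' over 'escape', so it must prefer 'stay'" --
# exactly backwards relative to the agent's true utilities.
```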