In case you hadn’t seen it, there’s a post on the EA forum which argues that if you both accept utilitarianism and try to resist scope insensitivity, there’s no way to escape stuff like the Repugnant Conclusion.
I hope the reasoning is clear enough from this sketch. If you are committed to the scope of utility mattering, such that you cannot just declare additional utility de facto irrelevant past a certain point, then there is no way for you to formulate a moral theory that can avoid being swamped by utility comparisons. Once the utility stakes get large enough—and, when considering the scale of human or animal suffering or the size of the future, the utility stakes really are quite large—all other factors become essentially irrelevant, supplying no relevant information for our evaluation of actions or outcomes.
The post even includes a recipe for how to construct new paradoxes:
Indeed, in section five Cowen comes close to suggesting a quasi-algorithmic procedure for generating challenges to utilitarianism.[9] You just need a sum over a large number of individually-imperceptible epsilons somewhere in your example, and everything else falls into place. The epsilons can represent tiny amounts of pleasure, or pain, or probability, or something else; the large number can be extended in time, or space, or state-space, or across possible worlds; it can be a one-shot or repeated game. It doesn’t matter. You just need some Σ ε and you can generate a new absurdity: you start with an obvious choice between two options, then keep adding additional epsilons to the worse option until either utility vanishes in importance or utility dominates everything else.
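To spell out the arithmetic behind the recipe, here is a minimal gloss of the swamping step (the notation V, Δ, ε is mine, not the post’s):

```latex
% A minimal gloss on the epsilon recipe (notation mine, not the post's).
% Suppose option A beats option B by some fixed margin of value Delta > 0,
% and each added epsilon is worth some eps > 0, however imperceptible.
V(A) - V(B) = \Delta > 0, \qquad \varepsilon > 0
% For any N > Delta / eps, the summed epsilons reverse the ranking:
V(B) + \sum_{i=1}^{N} \varepsilon = V(B) + N\varepsilon > V(A)
% On a single commensurable scale, no fixed consideration survives enough epsilons:
% either the N*eps term dominates, or its importance gets capped by fiat.
```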
In other words, Cowen can just keep generating more and more absurd examples, and there is no principled way for you to say ‘this far but no further’. As Cowen puts it:
Once values are treated as commensurable, one value may swamp all others in importance and trump their effects… The possibility of value dictatorship, when we must weigh conflicting ends, stands as a fundamental difficulty.
One way out of this predicament, though it comes with different problems of its own, is to accept incommensurable values:
If we accept a certain amount of incommensurability between our values, and thus a certain amount of non-systematicity in our ethics, we can avoid the absurdities directly. Different values are just valuable in different ways, and they are not systematically comparable: while sometimes the choices between different values are obvious, often we just have to respond to trade-offs between values with context-specific judgment. On these views, as we add more and more utility to option B, eventually we reach a point where the different goods in A and B are incommensurable and the trade-off is systematically undecidable; as such, we can avoid the problem of utility swallowing all other considerations without arbitrarily declaring it unimportant past a certain point.
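For contrast, the incommensurability move admits an equally minimal gloss (again my notation, not the post’s): instead of a complete ranking of outcomes, betterness is a relation that can simply fail to compare.

```latex
% Incommensurability as incompleteness (my gloss, not the post's).
% Let \succeq be a reflexive, transitive betterness relation that need not be complete:
\exists A, B : \neg (A \succeq B) \wedge \neg (B \succeq A)
% With no common scale there is no Delta/eps threshold to cross, so once A and
% B + N*eps are incomparable, piling on further epsilons need not tip the verdict.
```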