> Among other things, if you try to violate “utilitarianism”, you run into paradoxes, contradictions, circular preferences, and other things that aren’t symptoms of moral wrongness so much as moral incoherence.
Nobody seems to have problems with circular preferences in practice, probably because people’s preferences aren’t precise enough. So we don’t have to adopt utilitarianism to fix this non-problem.
> But you don’t conclude that there are actually two tiers of utility with lexical ordering. You don’t conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity. You don’t conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.
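The quoted “vanishing” is easy to check concretely. Below is a minimal sketch, with options and utilities invented purely for illustration, of a two-tier lexical ordering: no value in the lower tier, however large, ever competes with a single unit of the upper tier.

```python
# Python tuples compare lexicographically, which is exactly the quoted
# two-tier scheme: the second element is consulted only when the first
# is an exact tie. All options and numbers here are made up.
options = {
    "avert 10^9 dust specks": (0, 10**9),  # lower tier only, however large
    "save a life":            (1, 0),      # one unit of the upper tier
    "save a life + $5":       (1, 5),      # same upper tier, tiny lower bonus
}

best = max(options, key=options.get)
print(best)  # "save a life + $5": the lower tier only ever breaks exact ties
```

And since exact ties in the upper tier essentially never occur in a continuous world, the lower tier would never be worth the effort of computing.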
People aren’t going to be doing ethical calculations using hyperreal numbers, and they aren’t going to be doing them with real numbers either—both are beyond our cognitive limitations. Mathematically perfect but cognitively intractable ethics is angels-on-pinheads stuff.
Cognitive limitations mean that ethics has to be based on rough heuristics. What would those heuristics look like? They would look like sacred values, taboos, and rules—like ethics as it actually exists, not like utilitarianism.
> And part of it has to do with preferring unconditional social rules to conditional social rules. Conditional rules seem weaker, seem more subject to manipulation. If there’s any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.
>
> So it seems like there should be an unconditional social injunction against preferring money to life, and no “but” following it. Not even “but a thousand dollars isn’t worth a 0.0000000001% probability of saving a life”. Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.
>
> The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise. So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.
>
> On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.
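It is worth spelling out the arithmetic hiding in the quoted sneeze example, taking the quoted figures literally (the inference to an implied value of life is mine):

```python
# What refusing the quoted trade would imply about the value of a life.
# 0.0000000001% = 1e-10 percent = 1e-12 as a probability.
cost_of_refusal = 1_000   # dollars forgone to honor the injunction
p_life_saved    = 1e-12   # probability that the $1000 saves a life
implied_value_of_life = cost_of_refusal / p_life_saved
print(f"${implied_value_of_life:,.0f}")  # $1,000,000,000,000,000
# A quadrillion dollars per statistical life. Sneezing without calling
# a doctor reveals that nobody actually prices life that way.
```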
It is not difficult to steelman the usefulness of absolute prohibitions, e.g. against torture: they are a Schelling fence that prevents society from sliding into a dystopia. So there is some amount X of good consequences that stems from having taboos.
And there is some amount Y of value that is lost by having them. Maybe you could torture the terrorist and find out where the bomb is. (A much better example than the dust-specks one, since it doesn’t depend on the fantasy of pains aggregating.)
So if you are a consequentialist—there are excellent reasons for sticking with consequentialism even if you reject utilitarianism—the crux is whether X > Y or Y > X. Saying nothing about X, as the OP does, doesn’t even address the argument.
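For concreteness, the crux has this shape, where every number below is a placeholder of my own rather than anything argued here or in the OP:

```python
# The consequentialist crux, with avowedly made-up placeholder numbers.
# X: expected good consequences of the taboo (the Schelling fence holding).
p_slide    = 0.10   # chance society slides toward dystopia without the fence
v_dystopia = 1e6    # harm of that slide, in arbitrary utility units
X = p_slide * v_dystopia

# Y: expected value lost by honoring the taboo in the hard cases.
p_bomb = 1e-4       # chance a genuine ticking-bomb case arises
v_bomb = 1e4        # harm averted if it does and torture works
Y = p_bomb * v_bomb

print("keep the taboo" if X > Y else "allow the exception")
# With these placeholders X = 100000.0 and Y = 1.0, but the point is
# structural: an argument that says nothing about X cannot settle X vs Y.
```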