An argument I have occasionally encountered is that while other ethical theories, such as average utilitarianism, birth-death asymmetry, path dependence, and preferences against the loss of culture, may have some validity, total utilitarianism wins as the population increases, because the others don’t scale in the same way. By the time we reach the trillion-trillion-trillion mark, total utilitarianism will completely dominate, even if we gave it little weight at the beginning.
I’ll admit I haven’t encountered this argument before, but to me it looks like a type error. As you note, average utilitarianism counts something quite different from total utilitarianism; observers might (correctly) note that the latter can spit out much larger numbers than the former under some circumstances, but those values are unrelated abstractions, not commensurable with each other or with the outputs of other ethical theories, absent a quantifying theory of metaethics that we don’t have. It’s like dividing seven by cucumber. I’d argue that the normalization process you suggest doesn’t make much sense either, though: many utilitarianisms don’t have well-defined upper bounds (why stop at a quadrillion?), and some don’t have well-defined lower bounds (a life not worth living might be counted as a negative contribution).
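To put the scaling point in symbols (my notation, not anything from the original argument): with $N$ lives at welfare levels $u_1, \dots, u_N$, the two theories score a world as

$$U_{\text{total}} = \sum_{i=1}^{N} u_i, \qquad U_{\text{average}} = \frac{1}{N} \sum_{i=1}^{N} u_i.$$

Hold per-capita welfare fixed at $\bar{u}$ and the former is $N\bar{u}$ while the latter stays at $\bar{u}$; their ratio is just $N$, a head count. That the total-utilitarian number grows without bound tells you about the units it is denominated in, not that the theory deserves $N$ times the weight of the others.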
Insofar as ethical theories are models of our ethical intuitions, I can see an argument for normalizing against people’s subjective satisfaction with a world-state, which almost certainly has a finite range and therefore implies some kind of diminishing returns, or a dynamic rather than static evaluation of state changes. But I can see arguments against this, too; in particular, it doesn’t make any sense if you’re trying to build a universalizable theory of ethics (which has its own problems, but it has been tried). The hedonic treadmill also raises issues.
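As a toy illustration of why a bounded satisfaction scale forces diminishing returns (the functional form here is mine, chosen purely for illustration): suppose satisfaction with resources $x \geq 0$ follows

$$s(x) = 1 - e^{-\lambda x}, \qquad \lambda > 0.$$

Then $s$ is confined to $[0, 1)$ however large $x$ grows, and the marginal gain $s'(x) = \lambda e^{-\lambda x}$ shrinks toward zero, so identical improvements count for less the better off the subject already is. And if the baseline itself adapts over time, the evaluation becomes dynamic rather than static, which is exactly where the hedonic treadmill bites.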