in the variant where you sum up utilities, an outcome where enough people live lives just barely worth living comes out better than an outcome where fewer people live amazingly good lives, even though we actually prefer the latter (the “repugnant conclusion”);
Are you sure of this? It sounds a lot like scope insensitivity. Remember, lives barely worth living are still worth living.
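A minimal numeric sketch of the arithmetic behind this exchange, with made-up numbers purely for illustration: under sum-aggregation a large enough population of barely-worth-living lives outranks a small, thriving one, while averaging ranks them the other way.

```python
# Illustrative numbers only: a vast population just above the "worth living"
# line versus a small population of extremely good lives.
vast_population = 1_000_000_000   # many lives, each just barely worth living
tiny_utility = 0.01               # just above the zero threshold

small_population = 10_000         # far fewer lives, each extremely good
high_utility = 100.0

sum_vast = vast_population * tiny_utility     # 10,000,000
sum_small = small_population * high_utility   # 1,000,000

print(sum_vast > sum_small)         # True: sum-aggregation prefers the vast, barely-happy world
print(tiny_utility > high_utility)  # False: average-aggregation prefers the small, thriving world
```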
if there’s a scenario where one person’s utility can be made arbitrarily large, then both sum- and average-utility aggregation say it’s worth sacrificing everyone else to bring that scenario about (the “utility monster” problem).
Again, this seems like scope insensitivity.
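And a similar sketch of the utility-monster arithmetic, again with invented numbers: once one agent’s utility is large enough, both the total and the average of the “sacrifice everyone else” world exceed those of the baseline world.

```python
# Illustrative only: compare a baseline world with a world where everyone else
# is reduced to zero utility so that one agent (the "monster") can be fed.
n_others = 1_000_000
utility_per_person = 50.0

baseline = [utility_per_person] * n_others

def total(us): return sum(us)
def average(us): return sum(us) / len(us)

for monster_utility in (1e6, 1e9, 1e12):
    sacrificed = [0.0] * (n_others - 1) + [monster_utility]
    print(monster_utility,
          total(sacrificed) > total(baseline),
          average(sacrificed) > average(baseline))
# For large enough monster_utility, both comparisons come out True:
# sum- and average-aggregation alike endorse the sacrifice.
```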