Then suppose everyone gets +1 utility. You’d think that giving an infinity of agents one extra utility each would be fabulous—but the distribution of utilities is exactly the same as before. The current −1 utility belongs to the person who had −2 before, but there’s still someone at −1 now, just as there was someone at −1 before the change. And this holds for every utility value: an infinity of improvements has accomplished… nothing. As soon as you relabel who is who, you’re in exactly the same position as before.
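The relabeling trick can be made concrete with a minimal sketch. Assume (this model is illustrative, not from the original) one agent per integer n, each with utility equal to their label. After everyone gains +1, the shift relabeling σ(n) = n − 1 reproduces the original assignment of utilities to labels exactly:

```python
# Illustrative model: agent n (for every integer n) has utility n.
def u_before(n):
    return n

# After the change, each agent's utility rises by 1.
def u_after(n):
    return n + 1

# Relabel with sigma(n) = n - 1: the agent now wearing label n is the
# one formerly labeled n - 1, whose post-change utility is (n - 1) + 1 = n.
# So label n carries utility n both before and after -- check this on a
# finite window of the (conceptually infinite) index set.
for n in range(-1000, 1000):
    assert u_after(n - 1) == u_before(n)

print("after relabeling, every label carries its original utility")
```

Every individual is strictly better off, yet the labeled distribution is untouched; the puzzle is whether that relabeling is a legitimate move.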
Something seems wrong with this argument. The relabeling, in particular, jars my intuition. You have an infinity of, presumably, conscious beings with subjective experience, each of whom would tell you that they are better off than they were before. Are you sure that “relabel” is a sensible operation in this context? You do not seem to be changing anything external to your own mind, and that seems like it ought not to affect your judgement of whether a thing is good or not.