Except, you are anyway? After all, the utilities can grow as fast as, or faster than, the discounts shrink. Thus, if the pattern of utilities is just 2^(number of bits for the door number + 1), the discounted total is infinite (1 + 1 + 1 + 1…); and so, too, is it infinite in worlds where everyone has a million times the utility (1M + 1M + 1M…). Yet the second world seems better, even though the discounted totals can’t tell the two apart. So we’ve lost Pareto (over whatever sort of location you like), and we’re back to obsessing about infinite worlds anyway, despite our discounts.
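To make the arithmetic concrete, here is a minimal sketch in Python. It assumes, purely as a stand-in for whatever simplicity weighting is in play, that door d gets discount 2^-(bits(d)+1) and utility 2^(bits(d)+1); the function names are mine, not part of any formal setup in the text. Every discounted term equals 1, so both worlds’ partial sums grow without bound, and the discounted totals never register that the million-fold-better world is better.

```python
# Toy illustration (my own stand-in for the discounting scheme discussed above):
# discount each door d by 2^-(bits(d)+1), and give it utility 2^(bits(d)+1), so
# every discounted term is exactly 1 and the total diverges. Multiplying every
# utility by a million makes each term 1,000,000, which still diverges.

def bits(d: int) -> int:
    """Number of bits needed to write down the door number d (for d >= 1)."""
    return d.bit_length()

def discounted_partial_sum(n_doors: int, scale: float = 1.0) -> float:
    """Sum of discount(d) * utility(d) over the first n_doors doors."""
    total = 0.0
    for d in range(1, n_doors + 1):
        discount = 2.0 ** -(bits(d) + 1)
        utility = scale * 2.0 ** (bits(d) + 1)
        total += discount * utility
    return total

# Both partial sums grow without bound, so neither world gets a finite
# discounted total, and the million-times-better world cannot be ranked
# above the original: the Pareto verdict is lost.
for n in (10, 1_000, 100_000):
    print(n, discounted_partial_sum(n), discounted_partial_sum(n, scale=1e6))
```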
Maybe one wants to say: the utility at a given location isn’t allowed to take on just any finite value (thanks to Paul Christiano for discussion). Sure, maybe agents can live for any finite length of time. But our UTM should be trying to specify momentary experiences (“observer-moments”) rather than e.g. lives, and experiences can’t get pleasurable (or whatever you care about experiences being) to just any finite degree – or perhaps, to the extent they can, they get correspondingly harder to specify.
Naively, though, this strikes me as a dodge (and one that the rest of the philosophical literature, which talks about worlds like <1, 2, 3…> all the time, doesn’t allow itself). It feels like denying the hypothetical, rather than handling it. And are we really so confident about how much of what can be fit inside an “experience”?
I think these are real problems, but it’s worth pointing out that they occur even if you are certain that the world contains only one observer living for a bounded amount of time, whose welfare is merely uncertain (e.g. in a St. Petersburg case). So I don’t think it’s fair to characterize these as problems with infinite worlds, rather than more fundamental problems with common intuitions about unbounded utilities.
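For concreteness, here is a minimal sketch of the standard St. Petersburg gamble being referred to (my own illustration, not anything specific from this exchange): a single observer, a finite payout in every outcome, and yet the expected value diverges.

```python
# Standard St. Petersburg setup: flip a fair coin until it lands heads; if the
# first heads comes on flip k (probability 2^-k), the single observer gets a
# payout of 2^k. Every outcome is finite, but each term of the expectation is
# exactly 1, so the partial expectations grow without bound.

def partial_expected_value(max_flips: int) -> float:
    """Sum of probability * payout over games resolved within max_flips flips."""
    return sum((0.5 ** k) * (2.0 ** k) for k in range(1, max_flips + 1))

for k in (10, 100, 1_000):
    print(k, partial_expected_value(k))  # the partial expectation just equals k
```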
The St. Petersburg case does involve infinitely many possible worlds, in the sense that its probability distribution is not finitely supported, or in the Kripke-semantics sense. But I agree that infinitely many possible worlds is an extremely common and everyday assumption (say when modeling variables with Gaussians), and that similar issues come up that perhaps could be handled by similar solutions.
I do think it’s reasonable to call those cases “infinite ethics” given that they involve infinitely many possible worlds. But I definitely think it’s a distraction to frame them as being about infinite populations, and a mistake to expect them to be handled by ideas about aggregation across people.
(The main counterargument I can imagine is that you might think of probability distributions as a special case of aggregation across people, in which case you might think of “infinite populations” as a simpler case than “infinitely many possible worlds.” But this is still a bit funky given that infinitely many possible worlds is kind of the everyday default whereas infinite populations feel exotic.)