Rawls’s Wager: the least well-off person lives in a different part of the multiverse than we do, so we should spend all our resources researching trans-multiverse travel in a hopeless attempt to rescue that person. Nobody else matters anyway.
If this is a problem for Rawls, then Bentham has exactly the same problem: you can hypothesise a gizmo, hidden in a different part of the multiverse, that creates 3^^^3 units of positive utility. Or, for that matter, a gizmo that will inflict 3^^^3 dust specks on the eyes of the multiverse if we don't find it and stop it. Tell me you think that's an unlikely hypothesis and I'll just raise the relevant utility or disutility to the power of 3^^^3 again, as often as it takes to overcome whatever degree of improbability you place on the hypothesis.
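To make that dominance move explicit (a minimal sketch, assuming a simple expected-utility framing; the symbols $p$, $U$ and $V$ are mine, not anything Bentham is committed to): suppose the sceptic assigns some fixed probability $p > 0$ to the gizmo hypothesis, and everything else we could do with our resources is worth at most some finite value $V$. Then

$$p \cdot U > V \quad \text{whenever} \quad U > V / p,$$

so I only need to stipulate a payoff $U$ large enough (3^^^3, or 3^^^3 raised to that power again, and so on) for the expected value of hunting the gizmo to swamp every alternative.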
However, I think it takes a mischievous reading of Rawls to make this a problem. Given that there is, as you stipulate, a substantial risk that the trans-multiverse travel project is hopeless, and that these hypothetical choosers are meant to be risk-averse, not altruistic, you could consistently argue that the genuinely risk-averse choice is not to pursue the project: the choosers don't know that this worse-off person exists, nor that they could do anything about it if that person did.
That said, diachronic (cross-time) moral obligations are a very deep philosophical problem. The number of potential future people is unboundedly large, and those people are at least potentially very badly off. If you take moral philosophies developed to handle present-day problems and apply them to far-future diachronic problems, it is very hard to avoid the conclusion that we should dedicate 100% of the world's surplus resources and all our free time to doing all sorts of strange and potentially contradictory things to benefit far-future people or protect them from possible harms.
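In sketch form (again assuming an aggregative expected-value framing, with symbols of my own choosing rather than anyone's official formalism): if each of $N$ potential future people faces some probability $q > 0$ of a harm of size $h$, the total expected harm $N \cdot q \cdot h$ grows without bound as $N$ does, and so eventually exceeds any finite present-day cost $C$ we could pay to avert it. That is what generates the "spend everything on the far future" conclusion.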
This isn't a problem that Bentham's hedonistic utilitarianism, or Eliezer's gloss on it, handles any more satisfactorily than any other theory, as far as I can tell.