If Omega is really offering unbounded amounts of utility, then the exact same argument as supports TORTURE over DUST SPECKS applies here. Thus:
Would you (should you) trade 0.01 seconds of Frank’s life (no matter how much of it he has left) for 1000 years of capital-F Fun for you? And then, given that that trade has already happened, another 0.01 seconds of Frank’s life for another 1000 years of Fun? Etc. I’m pretty sure the answer to the first question is yes for almost everyone (even the exceptionally altruistic; even those who would be reluctant to admit it), and it seems to me that any given 0.01s of Frank’s life is of about the same value in this respect. In which case, you can get from wherever you are to begin with, to trading off all of Frank’s remaining life for a huge number of years of Fun, by a (long) sequence of stepwise improvements to the world that you’re probably willing to make individually. In which case, if Fun is really additive, it doesn’t make any sense to prefer the status quo to trillions of years of Fun and no Frank.
(Assuming, once again, that we have the prospect of an unlimitedly long life full of Fun, whereas Frank has only an ordinary human lifespan ahead of him.)
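For concreteness, here is the rough arithmetic behind “a huge number of years of Fun”, as a sketch only; the figure of 60 remaining years for Frank is a hypothetical assumption, not something stated above:

```python
# Hypothetical: Frank has ~60 years left; each 0.01 s traded buys 1000 years of Fun.
seconds_remaining = 60 * 365.25 * 24 * 60 * 60   # ~1.9e9 seconds of remaining life
trades = seconds_remaining / 0.01                # ~1.9e11 individual 0.01 s trades
fun_years = trades * 1000                        # ~1.9e14 years of Fun in total
print(f"{trades:.2e} trades -> {fun_years:.2e} years of Fun")
```

On that (assumed) lifespan, the stepwise trades add up to on the order of a hundred trillion years of Fun, which is why the conclusion of the sequence really is “trillions of years of Fun and no Frank”.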
Which feels like an appalling thing to say, of course, but I think that’s largely because in the real world we are never presented with any choice at all like that one (because real fun isn’t additive like that, and because we don’t have the option of trillions of years of it) and so, quite reasonably, our intuitions about what choices it’s decent to make implicitly assume that this sort of choice never really occurs.
As with TORTURE v DUST SPECKS, I am not claiming that the (selfish) choice of trillions of years of Fun at the expense of Frank’s life is in fact the right choice (according to my values, or yours, or those of society at large, or the Objective Truth About Morality if there is one). Maybe it is, maybe not. But I don’t think it can reasonably be said to be obviously wrong, especially if you’re willing to grant Eliezer’s point in TORTURE v DUST SPECKS, and therefore I don’t see that this can be a conclusive or near-conclusive argument for incommensurable tiers of value.