If your utility function depends on the size of the universe—for example, if you are an “average utilitarian” who averages across all of the possible holes in reality where there might be an observer moment—then you may not run into these problems. When you learn that the universe supports TREE(100) computational steps, you downgrade the value of the life you lived so far to something negligible. But the value of the universe that lies ahead of you is still 1, just the same as ever.
(The instantiation I have in mind is UDASSA; I was prompted to post this by the recent discussion of Pascal’s mugging.)
I could totally imagine this having some other wacky behavior though...
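For concreteness, here is a toy numerical sketch of the rescaling described above. It is my own illustration, not part of the original comment: the welfare model (welfare per observer moment in [0, 1], averaged over every possible slot, occupied or not) and the slot counts, including the stand-in for TREE(100), are all made up for the example.

```python
# Toy sketch: how an "average utilitarian" valuation behaves when the number
# of possible observer slots grows. Empty slots contribute 0 welfare.

def average_value(total_welfare, num_slots):
    """Average welfare per possible observer slot."""
    return total_welfare / num_slots

# Suppose you have lived ~10^9 observer moments of welfare ~1.
past_welfare = 1e9

# Before the update you thought the universe had ~10^12 slots.
small = 1e12
# After learning it supports TREE(100)-ish computation, the slot count is
# astronomically larger (1e100 is only a stand-in; TREE(100) is far too
# large to represent).
huge = 1e100

print(average_value(past_welfare, small))  # 1e-03: past life is a visible fraction
print(average_value(past_welfare, huge))   # 1e-91: past life is now negligible

# But the best the *whole* universe can score is still bounded by 1
# (every slot filled with welfare-1 moments), so the value of what lies
# ahead stays normalized instead of blowing up the way a total-utilitarian
# valuation would.
print(average_value(1.0 * huge, huge))     # 1.0
```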
True. Of course, average utilitarianism seems wacky to me as such, in particular if physics allows you to influence the size of the universe; it might lead you to choose a small universe if you have a better chance of filling it with people having fun.
Just to confirm… I take it from this that you consider a large universe intrinsically more valuable than a small one, and therefore any value system that leads one to choose a small universe doesn’t capture your values. Have I understood you?
Not sure, so let me phrase it differently: I consider a universe with a larger number of people all leading interesting, fulfilling, diverse lives as intrinsically more valuable than a universe with a smaller number of such people; e.g., I consider a universe of size googolplex containing 10^15 such people to be more valuable than a universe of size googol containing 10^12 such people. I don't at all feel like an FAI should choose the second option just because it has a much larger fraction of fulfilling lives per slot in the universe where a fulfilling life could be.
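To make the trade-off explicit, here is a small sketch using the numbers in the comment above; the single-number "utility" model (total = count of fulfilling lives, average = lives per slot) is my own simplification for illustration.

```python
# Compare how a total and an average valuation rank the two universes.
# Work in log10, since a googolplex (10**10**100) cannot be held in a float.

options = {
    # name: (log10 of slots in the universe, log10 of fulfilling lives)
    "googolplex universe, 10^15 lives": (1e100, 15),
    "googol universe, 10^12 lives": (100, 12),
}

for name, (log_slots, log_lives) in options.items():
    log_total = log_lives                # total view: just count the lives
    log_average = log_lives - log_slots  # average view: lives per slot
    print(f"{name}: log10(total) = {log_total}, log10(average) = {log_average:.3g}")

# Total utilitarianism prefers the googolplex universe (10^15 > 10^12 lives).
# Average utilitarianism prefers the googol universe (10^-88 is vastly larger
# than 10^(15 - 10^100)), which is exactly the choice the comment objects to.
```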
OK, thanks.