Wow, someone who’s read my paper! :-) It is because of considerations like the ones you mention that I’m tempted to require bounded utilities, or unbounded utilities but only finitely many choices to be faced (which is equivalent to a bounded utility). It’s the combination of unbounded utility and unboundedly many options that is the problem.
I’m interested in how you’d apply the bound.
One approach is just to impose an arbitrary cut-off on all worlds above a certain large size (ignore everything bigger than, say, 3^^^3 galaxies), and then scale utility with population all the way up to the cut-off. That would give a bounded utility function, and an effect very like SIA. Most of your decisions would be weighted towards the assumption that you are living in one of the largest worlds, with size just below the cut-off. If you’d cut off at 4^^^^4 galaxies instead, you’d assume you were in one of those worlds. However, since there don’t seem to be many decisions that are critically affected by whether we are one of 3^^^3 or one of 4^^^^4 galaxies, this probably works.
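To make the effect concrete, here is a rough toy calculation (in Python, with invented priors and a tiny stand-in for the 3^^^3-galaxy cut-off, since nothing can literally represent numbers that large). It just shows that when utility scales with population up to a hard cap, the decision weight of each world-size hypothesis ends up dominated by the worlds at or just below the cap:

```python
# Toy numbers only: CUTOFF is a tiny stand-in for "3^^^3 galaxies",
# and the worlds/priors below are invented for illustration.
CUTOFF = 10**12

def bounded_utility(population):
    """Utility scales linearly with population, but is capped at the cut-off."""
    return min(population, CUTOFF)

# Hypothetical world sizes with hypothetical prior probabilities.
worlds = [
    (10**3,  0.50),   # small world, quite likely a priori
    (10**9,  0.40),   # medium world
    (10**12, 0.09),   # large world, right at the cut-off
    (10**15, 0.01),   # enormous world, beyond the cut-off
]

# Suppose an action improves each person's welfare by a tiny amount eps, so its
# payoff under a hypothesis is eps * bounded_utility(population).  The decision
# weight of each hypothesis is then prior * utility-at-stake.
eps = 1e-6
weights = [(pop, prior * eps * bounded_utility(pop)) for pop, prior in worlds]
total = sum(w for _, w in weights)

for pop, w in weights:
    print(f"population {pop:.0e}: share of decision weight = {w / total:.3f}")

# Nearly all the weight lands on the worlds at (or capped at) the cut-off,
# even though their combined prior is only 10%; this is the SIA-like effect.
```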
Another approach is to use a bounded utility function of a more self-centered construction. Let’s suppose you care a lot about yourself and your family, a fair amount about your friends and colleagues, a little bit (rather dilutely) about anyone else on Earth now, and rather vaguely about future generations of people, but not much at all about alien civilizations, future AIs, etc. In that case your utility for a world of 3^^^3 alien civilizations is clearly not going to be much bigger than your utility for a world containing only the Earth, Sun and nearby planets (plus maybe a few nearby stars to admire at night), and so your decisions won’t be heavily weighted towards such big worlds. A betting coupon which cost a cent and paid off a million dollars if your planet were the only inhabited one in the universe would look like a very good deal. This then looks more like SSA reasoning than SIA.
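And a similarly rough sketch of the betting-coupon arithmetic, again with invented numbers (a hypothetical 1% prior on being alone, and a saturating alien-civilizations term to stand in for the bounded caring described above). The point is just that once the huge-world branch can add at most a small fixed amount of utility, the coupon’s value is governed by the ordinary prior probability of being alone:

```python
import math

# Toy numbers only: HUGE stands in for "3^^^3 alien civilizations", the 1% prior
# on being alone is invented, and dollars are treated as (locally linear) utility.
HUGE = 1e30
ALIEN_CAP = 10.0   # at most this much utility from alien civilizations,
                   # however many there are (this is the boundedness)

def self_centered_utility(dollars_to_you, alien_civilizations):
    """Your own payoff matters roughly linearly over this range; the alien term
    saturates at ALIEN_CAP, so huge worlds add almost nothing extra."""
    return dollars_to_you + ALIEN_CAP * (1.0 - math.exp(-alien_civilizations))

p_alone = 0.01           # hypothetical prior that Earth is the only inhabited planet
cost    = 0.01           # one cent
payout  = 1_000_000.0    # a million dollars

# Expected utility of buying the coupon vs. not buying it.
eu_buy  = (p_alone       * self_centered_utility(payout - cost, 0.0)
           + (1 - p_alone) * self_centered_utility(-cost, HUGE))
eu_skip = (p_alone       * self_centered_utility(0.0, 0.0)
           + (1 - p_alone) * self_centered_utility(0.0, HUGE))

print(f"buy the coupon:  {eu_buy:,.2f}")
print(f"skip the coupon: {eu_skip:,.2f}")

# Buying wins by roughly p_alone * payout, because the huge-world branch cannot
# swamp the calculation; the bet is judged by the ordinary prior, SSA-style.
```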
This last approach looks more consistent to me, and more in line with the utility functions humans actually have, rather than the ones we might wish them to have.