That should definitely be part of the solution. In fact, I would say that utility functions defined over individual world-states, rather than over entire future-histories, should never have been considered in the first place. The effects of your actions are not restricted to a single time-slice of the universe, so you cannot maximize expected utility if your utility function takes only a single time-slice as input. (Also because special relativity makes “a single time-slice of the universe” frame-dependent to begin with.)
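To make the distinction concrete, here is a minimal sketch in Python; the types and the particular aggregation are mine, purely illustrative, not anyone’s actual proposal.

```python
from typing import Dict, Sequence

# Illustrative types: a world-state maps each person to their well-being,
# and a future-history is a sequence of such time-slices.
WorldState = Dict[str, float]
History = Sequence[WorldState]

def state_utility(state: WorldState) -> float:
    # A state-based utility function can only reward features of one time-slice.
    return sum(state.values())

def history_utility(history: History) -> float:
    # A history-based utility function can also care about what happened along
    # the way (births, deaths, declines), not just where things ended up.
    # Here it just sums well-being over time, as a placeholder aggregation.
    return sum(state_utility(state) for state in history)
```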
Suppose that a person X is born at time T: we enter the fact “X was born” into the utility function’s memory. From then on, for every future state the UF checks whether X is still alive. If yes, good; if not, that state loses one point of utility.
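A sketch of that rule as I read it, with made-up types (the per-time-slice penalty is explicit in the description above; everything else is my guess):

```python
from dataclasses import dataclass, field
from typing import Sequence, Set

@dataclass
class WorldState:
    born: Set[str] = field(default_factory=set)   # people born in this time-slice
    alive: Set[str] = field(default_factory=set)  # people alive in this time-slice

def death_penalty(history: Sequence[WorldState]) -> float:
    # The utility function's "memory": everyone it has ever seen born.
    ever_born: Set[str] = set()
    penalty = 0.0
    for state in history:
        ever_born |= state.born
        # Every state in which a remembered person is no longer alive
        # loses one point per such person.
        penalty += len(ever_born - state.alive)
    return -penalty
```

Note that under this reading, every time-slice after X’s death contributes −1, so dying earlier is penalized more heavily than dying later; that may or may not be the intended behavior.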
Similarly: maintain a memory of the peak well-being that anyone has ever had, and if someone falls below that past peak, apply the difference as a penalty. So if X used to have 50 points of well-being but now has only 25, we apply an extra −25 to the utility of that scenario.
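Sketched the same way (illustrative types again; it isn’t specified whether the penalty applies once per scenario or at every time-slice below the peak, so this version applies it at every time-slice):

```python
from typing import Dict, Sequence

WorldState = Dict[str, float]   # person -> well-being in this time-slice

def peak_drop_penalty(history: Sequence[WorldState]) -> float:
    # The memory: the highest well-being each person has had so far.
    peaks: Dict[str, float] = {}
    penalty = 0.0
    for state in history:
        for person, well_being in state.items():
            peaks[person] = max(peaks.get(person, well_being), well_being)
            # Penalize the shortfall below the remembered peak,
            # e.g. a drop from 50 to 25 costs an extra 25.
            penalty += peaks[person] - well_being
    return -penalty
```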
These are kludge-y answers to special cases of a more general issue: we care about the preferences that existing people have for the future. Presumably X himself would prefer a future in which he keeps his 50 points of well-being over a future in which he has 25 and Y pops into existence with 25 as well, whereas Y is not yet around to have a preference. I don’t see what the peak well-being that X has ever experienced has to do with it. If we were considering whether to give X an additional 50 units of well-being (for a total of 100), or to bring into existence Y with 50 units of well-being, it seems to me that exactly the same considerations would come into play.