So utilitarianism has known paradoxes if you allow infinite positive/negative utilities (basically because infinite sums don’t always behave well). On the other hand, if you restrict yourself, say, to situations that only last finitely long, all these paradoxes go away. If both devices last for the same amount of subjective time, this holds true in all reference frames, and thus in all reference frames you can say that the situations are equally good.
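A minimal sketch of the "infinite sums don’t behave well" point (the series and numbers are my illustration, not anything from the thread): by Riemann’s rearrangement theorem, a conditionally convergent series can be reordered to sum to a different value. If each term is one person’s utility, the "total utility" of the same infinite population depends on the order in which you count people.

```python
import math

n = 100_000

# Natural order: 1 - 1/2 + 1/3 - 1/4 + ...  converges to ln 2.
natural = [(-1) ** (k + 1) / k for k in range(1, n + 1)]

# Exactly the same terms, reordered: two positive terms, then one
# negative term. This rearrangement converges to (3/2) ln 2 instead.
rearranged = []
pos, neg = 1, 2  # next odd (positive) and even (negative) denominator
while len(rearranged) < n:
    rearranged += [1.0 / pos, 1.0 / (pos + 2), -1.0 / neg]
    pos += 4
    neg += 2
rearranged = rearranged[:n]

print(sum(natural))     # ~0.69315  (ln 2)
print(sum(rearranged))  # ~1.03972  ((3/2) ln 2)
print(math.log(2), 1.5 * math.log(2))
```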
If you restrict to finitely long situations, you wind up with weird edge effects at the cutoff.
This isn’t a problem if you believe that there will only ever be finitely many people, or if you exponentially discount (in some relativistically consistent manner) at an appropriate rate.
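A quick sketch of why exponential discounting blocks the paradox (my numbers; `discounted_total` is a hypothetical helper): a constant utility stream of u per unit time, discounted by e^(-rt), integrates to the finite value u/r, so even an eternal stream contributes only a bounded total.

```python
import math

def discounted_total(u, r, horizon=1_000.0, dt=0.001):
    """Riemann-sum approximation of the integral of u * exp(-r*t) from 0 to horizon."""
    total, t = 0.0, 0.0
    while t < horizon:
        total += u * math.exp(-r * t) * dt
        t += dt
    return total

u, r = 1.0, 0.05               # constant utility rate, discount rate
print(discounted_total(u, r))  # ~20.0 (numerical)
print(u / r)                   # 20.0  (exact closed form)
```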
Caring about times within some time limit in a single reference frame is sufficient.
The problem with a time limit is that it encourages you to not care what happens afterwards.
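A toy illustration of that failure mode (all numbers hypothetical): an agent with a hard cutoff at time T assigns zero value to a huge payoff arriving just after T, while a smooth discounter still counts it, just at reduced weight.

```python
import math

T = 100.0                      # hard time limit
payoffs = {
    "act_A": (99.0, 10.0),     # (arrival time, utility): just inside the window
    "act_B": (101.0, 1000.0),  # just outside it
}

def cutoff_value(time, utility, limit=T):
    """Hard cutoff: anything after `limit` counts for nothing."""
    return utility if time <= limit else 0.0

def discounted_value(time, utility, r=0.01):
    """Smooth exponential discount: late payoffs shrink but never vanish."""
    return utility * math.exp(-r * time)

for act, (t, u) in payoffs.items():
    print(act, cutoff_value(t, u), round(discounted_value(t, u), 2))
# act_A -> cutoff: 10.0, discounted: ~3.72
# act_B -> cutoff:  0.0, discounted: ~364.22  (the cutoff agent ignores this)
```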
Hm, I think any integrable time-discounting function would also work. And the trouble with an AI that doesn’t time-discount is that it gets Pascal’s mugged by literally any chance of eternity.
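A sketch of the integrability point (my formalization; `total_discounted` and the particular discount function are assumptions for illustration): if the discount function d(t) has finite integral C and the instantaneous utility rate is bounded by U, the discounted total is at most U·C no matter how long the situation lasts. Drop the discounting and the same integral diverges, which is exactly the "any chance of eternity" mugging.

```python
import numpy as np
from scipy.integrate import quad

def discount(t):
    """An integrable discount function: its integral over [0, inf) is 1."""
    return 1.0 / (1.0 + t) ** 2

def total_discounted(utility_rate, d):
    """Integral of d(t) * utility_rate(t) over [0, inf)."""
    value, _err = quad(lambda t: d(t) * utility_rate(t), 0, np.inf)
    return value

U = 10.0  # bound on the instantaneous utility rate

weight, _ = quad(discount, 0, np.inf)
print(U * weight)                                          # 10.0: hard upper bound
print(total_discounted(lambda t: U, discount))             # hits the bound
print(total_discounted(lambda t: U * t / (1.0 + t), discount))  # stays below it

# With no discounting (d(t) = 1) the integral diverges for any constant
# positive utility rate, so an undiscounted expected-utility maximizer
# is swamped by any nonzero probability of an eternal payoff.
```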