The way I think about it, people change over time, so if you value your near-term happiness because your near-future self is similar to your current self, you may value your long-term happiness somewhat less for the same reason.
If this is the main reason for time discounting, it doesn’t seem appropriate to extend it into the indefinite future, especially when thinking about AGI. For example, once we create superintelligence, it probably wouldn’t be very difficult to stop the kinds of changes that would cause you to value your future self less.
Good point.