I think the interesting question is why we care for our future selves at all.
As kids, we tend not to. It’s almost a given that if a child has a holiday, with a bit of homework to do during it, they will decide not to do the work at the beginning of the break. The reason is that they care about their current self, not about their future self. Of course, in due time the future becomes the present, and that same child has to spend the end of their holiday working furiously on everything that’s been left to the last minute. At that point, they wish their past self had chosen a different plan. This is still not really wisdom, as they don’t much care about their past self either: they care about their present self, who now has to do the homework.
Summarising: if your utility function changes over time, then you will, as you mentioned, have conflict between your current and future selves. This prevents your plans for the future from being stable: a plan that maximises utility when considered at one point no longer maximises it when considered again later. You cannot plan properly, and this undermines the very point of planning. (You may plan to diet tomorrow, but when tomorrow comes, dieting no longer seems the right answer...)
I think this is why the long view becomes the rational view—if you weight future benefits equally to your present ones, assuming (as you should) that your reward function is stable, then a plan you make now will still be valid in the future.
In fact, the mathematical form that works is any kind of exponential: it’s OK to have the past be more important than the future, or the future more important than the past, as long as the weighting is an exponential function of time. Then, as you pass through time, the absolute sizes of the allocated rewards change, but their relative sizes stay the same, and planning remains stable. In practice, an exponential rise pushes all the importance of reward far out into the indefinite future, and is useless for planning. An exponential decay pushes all the important rewards into your past, but since you can’t actually change the past, it’s almost workable. The effect, though, is that you plan to maximise your immediate reward at the expense of the future, and since, when you reach that future, you don’t actually think it was worthwhile that your past self enjoyed those benefits at your present self’s expense, this doesn’t really work either as a means of having coherent plans.
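A small sketch may make the stability point concrete. This is my own illustration, with made-up reward sizes, times, and discount parameters: under exponential weighting, the relative worth of two dated rewards never changes as time passes, whereas under a non-exponential curve (hyperbolic, here) the preference can reverse as the nearer reward approaches.

```python
# Toy comparison of exponential vs hyperbolic discounting (illustrative
# values only): does the ranking of two future rewards flip as "now"
# advances toward them?

def exp_weight(delay, gamma=0.9):
    # exponential: value falls by a constant factor per unit of delay
    return gamma ** delay

def hyp_weight(delay, k=1.0):
    # hyperbolic: value falls off as 1 / (1 + k * delay)
    return 1.0 / (1.0 + k * delay)

def prefers_small_now(weight, now):
    # a small reward (3) at t=10 versus a large reward (10) at t=15,
    # judged from the vantage point of time `now`
    small = weight(10 - now) * 3
    large = weight(15 - now) * 10
    return small > large

# Under exponential weighting the choice never flips as `now` advances...
exp_choices = {prefers_small_now(exp_weight, now) for now in range(10)}
# ...but under hyperbolic weighting it reverses as t=10 draws near.
hyp_choices = {prefers_small_now(hyp_weight, now) for now in range(10)}
```

The hyperbolic case is the classic “plan to diet tomorrow” failure: evaluated from a distance the larger, later reward wins, but close up the smaller, sooner one takes over.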
That leaves the flat case. But this is a learned fact, not an instinctive one.
If you actually implemented an exponential decay, you would think it was worthwhile that your past self enjoyed those benefits at the expense of your present self. The inconsistency arises when you apply an exponential decay to the future but a flat weighting to the past.
It’s almost a standard that a child has a holiday, and a bit of homework to do during that holiday, then they will decide not to do the work at the beginning of the break.
You don’t need to go that far for an example. When a child is assigned homework for tomorrow, they often won’t do it (unless forced to by their parents), because they care more about not doing it now than they do about not having done it tomorrow.
It seems to me that you said something close to: “Assume planning is good or desirable”, therefore I can show that long-term → rational.
To which I say: True.
But planning is only good or desirable if you need to be an agent of commitment, long-term trustworthiness, etc., which a more short-term-oriented person might not be. Say a post-“illumination” Buddhist monk, for instance.
On the question of discounting the past: the worse I remember something, the less I care about it, so my weighting of the past is something like a reversed exponential curve too.