I have always had an uncomfortable feeling whenever I have been asked to include distant-future generations in my utilitarian moral considerations. Intuitively, I draw on my background in economics, and tell myself that the far-distant future should be discounted toward zero weight. But how do I justify the discounting morally? Let me try to sketch an argument.
I will claim that my primary moral responsibility is to the people around me. I also have a lesser responsibility to the next generation, and a responsibility lesser yet to the generation after that, and so on. A steep discount rate, 30% per generation or so. I will do my duty to the next generation, but in turn I expect the next generation to do its duty to the generation after that. After all, the next generation is in a far better position than I am to foresee what problems the generation after that really faces. Their efforts will be much less likely than mine to be counterproductive.
If I were to spread my concern over too many generations, I would be shortchanging the next generation of their fair share of my concern. Far-future generations have plenty of predecessor generations to worry about their welfare. The next generation has only us. We mustn’t shortchange them!
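For concreteness, here is a rough numerical sketch of what a 30% per-generation discount implies. The rate and the ten-generation cut-off are just illustrative choices, not part of the argument itself:

```python
# A rough illustration (not a claim about the "right" rate): how much weight
# each future generation gets under a 30% per-generation discount.
DISCOUNT_PER_GENERATION = 0.30  # illustrative assumption

def generation_weight(n: int, discount: float = DISCOUNT_PER_GENERATION) -> float:
    """Weight given to generation n, with generation 0 (the people around me) at 1.0."""
    return (1.0 - discount) ** n

if __name__ == "__main__":
    for n in range(11):
        print(f"generation {n:2d}: weight {generation_weight(n):.3f}")
    # Total concern spread over all future generations is finite:
    # sum over n >= 1 of 0.7^n = 0.7 / 0.3 = 2.33... times the concern
    # given to the people around me.
```

Under this weighting, generation 10 already counts for only about 3% as much as the people around me, which is where the steepness shows.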
This argument is just a sketch, of course. I just invented it today. Feedback is welcome.
In nature, the best way you can help your great-grandkids is to help your children. If there were a way to help your grandchildren at the expense of your children that ultimately benefited the grandchildren, nature might favour it, but usually there is simply no easy way to do that.
Grandparents do sometimes favour more distant offspring in their wills—if they think the direct offspring are compromised or irresponsible, for example. Such behaviour is right and natural.
Temporal discounting is a reflection of your ignorance and impotence when it comes to the distant future. It is not really that you fundamentally care less about the far future—it is more that you don’t know and can’t help—so investing mental resources would be rather pointless.
According to Robin Hanson, our behavior proves that we don’t care about the far future.
Robin argues that few are prepared to invest now to prevent future destruction of the planet. The conclusion there seems to be that humans are not utilitarian agents.
Robin seems to claim that humans do not invest in order to pass things on to future generations—whereas in fact they do just that whenever they invest in their own offspring.
Obviously you don’t invest in your great-grandchildren directly. You invest in your offspring; they can manage your funds better than you can from your wheelchair or grave.
Temporal discounting makes sense. Organisms do it because they can’t see or control the far future as well as their direct descendants can. In those rare cases where that is not true, direct descendants can sometimes be bypassed.
However, you wouldn’t want to build temporal discounting into the utility function of a machine intelligence. It knows its own prediction capabilities better than you do, and can figure out such things for itself.
Since that exact point was made in the Eliezer essay that Robin’s post was a reply to, it isn’t clear that Robin understands it.
I don’t think you need any discounting. Your effect on the year 2012 is somewhat predictable. It is possible to choose a course of action based on known effects on the year 2012.
Your effect on the year 3000 is unpredictable. You can’t even begin to predict what effect your actions will have on the human race in the year 3000.
Thus, there is an automatic discounting effect. An act is only as valuable as its expected outcome. The expected outcome for the year 1,000,000 is almost always ~zero, unless there is some near-future extinction possibility, because the probability of you having a desired impact is essentially zero.
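Here is a minimal sketch of that automatic-discounting point in code. The probabilities are purely hypothetical, chosen only to show the shape of the effect:

```python
# Sketch: even with NO explicit time preference, expected value falls toward zero
# as the probability of actually producing the intended effect shrinks with horizon.
# All numbers below are hypothetical, chosen only to illustrate the shape of the argument.

benefit_if_it_works = 100.0  # value of the intended effect, in arbitrary units

p_intended_effect = {        # hypothetical probability the action still has its intended effect
    2012: 0.5,
    3000: 0.001,
    1_000_000: 1e-9,
}

for year, p in p_intended_effect.items():
    expected_value = p * benefit_if_it_works
    print(f"year {year}: expected value ~ {expected_value:.6f}")
```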
I tend to agree, in that I also have a steep discount across time and distance (though I tend to think of it as “empathetic distance”, more about perceived self-similarity than measurable time or distance, and I tend to think of weightings in my utility function rather than using the term “moral responsibility”).
That said, it’s worth asking just how steep a discount is justifiable—WHY do you think you’re more responsible to a neighbor than to four of her great-grandchildren, and do you think this is the correct discount to apply?
And even if you do think it’s correct, remember to shut up and multiply. It’s quite possible for there to be more than 35x as much sentience in 10 generations as there is today.
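To unpack the 35x figure: with a 30% per-generation discount, generation 10 is weighted at 0.7^10 ≈ 0.028, so roughly 1 / 0.028 ≈ 35 times as much sentience ten generations out would be needed to carry the same weight as the present generation. A quick check, assuming that reading of the numbers:

```python
# Quick check of the 35x figure under a 30% per-generation discount.
discount = 0.30
weight_gen_10 = (1 - discount) ** 10       # ~0.028
break_even_multiplier = 1 / weight_gen_10  # ~35.4

print(f"weight on generation 10: {weight_gen_10:.4f}")
print(f"sentience multiplier needed to break even: {break_even_multiplier:.1f}x")
```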
Thanks for posting. Upvoted.