Basically, temporal discounting can often usefully be regarded as instrumental.
Quite true. I’m glad you included that word “often”. Now we can discuss the real issue: whether that word “often” should be changed to “always” as EY and yourself seem to claim. Or whether utility functions can and should incorporate the discounting of the value of temporally distant outcomes and pleasure-flows for reasons over and above considerations of instrumentality.
Temporal discounting contrasts with risk aversion in this respect.
A useful contrast/analogy. You seem to be claiming that risk aversion is not purely instrumental; that it can be fundamental; that we need to ask agents about their preferences among risky alternatives, rather than simply axiomatizing that a rational agent will be risk neutral.
But I disagree that this is in contrast to the situation with temporal discounting. We need to allow that rational and moral agents may discount the value of future outcomes and flows for fundamental, non-instrumental reasons. We need to ask them. This is particularly the case when we consider questions like the moral value of a human life.
The question before us is whether I should place the same moral value now on a human life next year and a human life 101 years from now. I say ‘no’; EY (and you?) say ‘yes’. What is EY’s justification for his position? Well, he might invent a moral principle, call it “time invariance of moral value”, and assert that this principle absolutely forces me to accept the equality:
value@t(life@t+1) = value@t(life@t+101).
I would counter that EY is using the invalid “strong principle of time invariance”. If one uses the valid “weak principle of time invariance” then all that we can prove is that:
value@t(life@t+1) = value@t+100(life@t+101).
So, we need another moral principle to get to where EY wants to go. EY postulates that the moral discount rate must be zero. I simply reject this postulate (as would the bulk of mankind, if asked). EY and I can both agree to a weaker postulate, “time invariance of moral preference”. But this only shows that the discounting must be exponential in time; it doesn’t show that the rate must be zero.
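To make that step explicit (a standard stationarity argument; D(s) is my own shorthand, not notation used above, for the weight placed now on an outcome s years away): time invariance of moral preference requires the weighting scheme to look the same from every vantage point, which forces

$$D(s+u) = D(s)\,D(u), \qquad D(0) = 1 \;\Longrightarrow\; D(s) = e^{-\rho s}$$

for some constant rate ρ (non-negative, if the future is never weighted above the present). The weak principle fixes the exponential form but leaves ρ free; ρ = 0 is the extra postulate that EY needs and that I reject.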
Neither EY nor you have provided any reason (beyond bare assertion) why the moral discount rate should be set to zero. Admittedly, I have yet to give any reason why it should be set elsewhere. This is not the place to do that. But I will point out that a finite discount rate permits us to avoid the mathematical absurdities arising from undiscounted utilities with an unbounded time horizon. EY says “So come up with better math!”, and that is a response worth taking seriously. But until we have that better math in hand, I am pretty sure EY is wearing the crackpot hat here, not me.
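One toy illustration of the kind of absurdity at issue (assuming, purely for concreteness, a constant utility flow u > 0 over an unbounded horizon):

$$\sum_{t=0}^{\infty} e^{-\rho t}\, u \;=\; \frac{u}{1-e^{-\rho}} \;<\; \infty \quad (\rho > 0), \qquad \sum_{t=0}^{\infty} u \;=\; \infty \quad (\rho = 0).$$

With any positive rate the total is finite and different policies can still be ranked; with a zero rate every policy that secures a perpetual positive flow gets the same infinite score, and the comparison collapses.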
Now we can discuss the real issue: whether that word “often” should be changed to “always” as EY and yourself seem to claim.
You can specify a method of temporal discounting if you really want to, just as you can specify a value for collecting gold atoms if you really want to. However, there are side effects and problems associated with introducing unnecessary constraints.
We need to allow that rational and moral agents may discount the value of future outcomes and flows for fundamental, non-instrumental reasons. We need to ask them.
If we think that such creatures are common and if we are trying to faithfully mirror and perpetuate their limitations, you mean.
Neither EY nor you have provided any reason (beyond bare assertion) why the moral discount rate should be set to zero.
I don’t really see this as a “should” question. However, there are consequences to wiring in instrumental values. You typically wind up with a handicapped superintelligence. I thought I already gave this as my reasoning, with comments such as “unless you think you know their practical value better than a future superintelligent agent, perhaps you are better off leaving such issues to it.”
I will point out that a finite discount rate permits us to avoid the mathematical absurdities arising from undiscounted utilities with an unbounded time horizon.
Not a practical issue, IMO. We are resource-limited creatures who can barely see ten years into the future. Instrumental temporal discounting protects us from infinite maths with great effectiveness.
This is the same as in biology. Organisms act as though they want to become ancestors—not just parents or grandparents. That is the optimisation target, anyway. However, instrumental temporal discounting protects them from far-future considerations with great effectiveness.
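One way to see how the purely instrumental version arises (a toy model, not anything argued above): suppose each period carries a constant probability h of the plan being derailed by death, error, or changed circumstances. The expected contribution of a reward t periods out is then weighted by

$$(1-h)^{t} = e^{-\rho t}, \qquad \rho = -\ln(1-h),$$

so an agent with no fundamental time preference at all still behaves like an exponential discounter, and the infinite sums never get a chance to bite.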
there are consequences to wiring in instrumental values. You typically wind up with a handicapped superintelligence. I thought I already gave this as my reasoning …
You did indeed. I noticed it, and meant to clarify that I am not advocating any kind of “wiring in”. Unfortunately, I failed to do so.
My position would be that human beings often have discount factors “wired in” by evolution. It is true, of course, that like every other moral instinct analyzed by EvoPsych, the ultimate adaptationist evolutionary explanation of this moral instinct is somewhat instrumental, but this doesn’t make it any less fundamental from the standpoint of the person born with this instinct.
As for moral values that we insert into AIs, these too are instrumental in terms of their final cause: we want the AIs to have particular values for our own instrumental reasons. But, for the AI, they are fundamental, and not necessarily ‘wired in’. If we give the AI, as I believe we should, a fundamental meta-value directing it to construct its own fundamental values by empirically constructing some kind of CEV of mankind, then the AI will end up with a discount factor, because its human models have discount factors. But it won’t be a wired-in or constant discount factor, because the discount factors of mankind may well change over time: as the expected lifespan of humans changes, as people upload and choose to run at various rates, and as people are born or die.
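As a concrete sketch of the sort of thing I mean, here is a toy model, in Python, of a discount factor that is derived rather than wired in. Everything in it is my own illustrative assumption (the Person class, the guess that an individual’s rate scales with remaining lifespan and subjective speed); it is not a proposal for how a real CEV calculation would go:

```python
import math
from dataclasses import dataclass

@dataclass
class Person:
    expected_remaining_years: float   # changes as expected lifespans change
    subjective_speedup: float = 1.0   # uploads running faster experience more per wall-clock year

    def discount_rate(self) -> float:
        # Crude illustrative assumption: people discount on roughly the scale
        # of their remaining lifespan, rescaled by how fast they run.
        return self.subjective_speedup / max(self.expected_remaining_years, 1.0)

def aggregate_discount_weight(population: list[Person], years_ahead: float) -> float:
    """Average weight the modelled population places on an outcome `years_ahead` out."""
    weights = [math.exp(-p.discount_rate() * years_ahead) for p in population]
    return sum(weights) / len(weights)

# The weight is re-derived whenever the modelled population changes,
# so nothing here is constant or wired in.
population_now = [Person(40.0), Person(60.0), Person(1000.0, subjective_speedup=10.0)]
print(aggregate_discount_weight(population_now, years_ahead=100.0))
```

The point is purely structural: the aggregate weight is recomputed from the modelled population, so it drifts as lifespans, upload speeds, births and deaths change, and nothing forces it to be constant, let alone zero.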
I’m saying that we need to allow for an AI discount factor or factors which are not strictly instrumental, but which are not ‘wired in’ either. And especially not a wired-in discount factor of exactly zero!
I think we want a minimally myopic superintelligence, and fairly quickly. We should not aspire to program human limitations into machines in a foolish attempt to mirror their values. If the Met Office computer is handling orders asking it to look three months out, and an ethics graduate says that it is too future-oriented for a typical human and should be made to look less far out so that it better reflects human values, he should be told what an idiot he is being.
We use machines to complement human capabilities, not just to copy them. When it comes to discounting the future, machines will be able to see and influence further, and we would be well advised to let them.
Much harm is done today due to temporal discounting. Governments look no further than election day. Machines can help put a stop to such stupidity and negligence—but we have to know enough to let them.
As Eliezer says, he doesn’t propose doing much temporal discounting, except instrumentally. That kind of thing can be expected to go up against the wall as part of the “smarter, faster, wiser, better” part of his CEV.
And so we are in disagreement. But I hope you now understand that the disagreement is because our values are different rather than because I don’t understand the concept of values. Ironically our values differ in that I prefer to preserve my values and those of my conspecifics beyond the Singularity, whereas you distrust those values and the flawed cognition behind them, and you wish to have those imperfect human things replaced by something less messy.
I don’t see myself as doing any non-instrumental temporal discounting in the first place. So, for me personally, losing my non-instrumental temporal discounting doesn’t seem like much of a loss.
However, I do think that our temporal myopia is going to fall by the wayside. We will stop screwing over the immediate future merely because we don’t care about it enough. Myopic temporal discounting represents a primitive form of value, one destined to go the way of cannibalism and slavery.