Now we can discuss the real issue: whether that word “often” should be changed to “always”, as EY and you seem to claim.
You can specify a method of temporal discounting if you really want to. Just as you can specify a value for collecting gold atoms if you really want to. However, there are side effects and problems associated with introducing unnecessary constraints.
We need to allow that rational and moral agents may discount the value of future outcomes and flows for fundamental, non-instrumental reasons. We need to ask them.
If we think that such creatures are common and if we are trying to faithfully mirror and perpetuate their limitations, you mean.
Neither EY nor you has provided any reason (beyond bare assertion) why the moral discount rate should be set to zero.
I don’t really see this as a “should” question. However, there are consequences to wiring in instrumental values. You typically wind up with a handicapped superintelligence. I thought I already gave this as my reasoning, with comments such as “unless you think you know their practical value better than a future superintelligent agent, perhaps you are better off leaving such issues to it.”
I will point out that a finite discount rate permits us to avoid the mathematical absurdities arising from undiscounted utilities with an unbounded time horizon.
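To make the “mathematical absurdities” concrete, here is a minimal sketch (the constant per-step utility bound and the constant discount factor are illustrative assumptions, not anything stated above). If utilities u_t stay above some ε > 0 forever, the undiscounted total \sum_{t=0}^{\infty} u_t diverges, so two policies can both come out “infinitely good” and become incomparable. With a discount factor 0 ≤ γ < 1 and |u_t| ≤ U_max, the discounted total is bounded:

\sum_{t=0}^{\infty} \gamma^t u_t \;\le\; \frac{U_{\max}}{1-\gamma} \;<\; \infty,

so comparisons between policies stay well-defined even over an unbounded horizon.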
Not a practical issue—IMO. We are resource-limited creatures, who can barely see 10 years into the future. Instrumental temporal discounting protects us from infinite maths with great effectiveness.
This is the same as in biology. Organisms act as though they want to become ancestors—not just parents or grandparents. That is the optimisation target, anyway. However, instrumental temporal discounting protects them from far-future considerations with great effectiveness.
there are consequences to wiring in instrumental values. You typically wind up with a handicapped superintelligence. I thought I already gave this as my reasoning …
You did indeed. I noticed it, and meant to clarify that I am not advocating any kind of “wiring in”. Unfortunately, I failed to do so.
My position would be that human beings often have discount factors “wired in” by evolution. It is true, of course, that like every other moral instinct analyzed by EvoPsych, the ultimate adaptationist evolutionary explanation of this moral instinct is somewhat instrumental, but this doesn’t make it any less fundamental from the standpoint of the person born with this instinct.
As for moral values that we insert into AIs, these too are instrumental in terms of their final cause—we want the AIs to have particular values for our own instrumental reasons. But, for the AI, they are fundamental. And they are not necessarily ‘wired in’. If we, as I believe we should, give the AI a fundamental meta-value that it should construct its own fundamental values by empirically constructing some kind of CEV of mankind—if we do this then the AI will end up with a discount factor, because its human models have discount factors. But it won’t be a wired-in or constant discount factor, because the discount factors of mankind may well change over time as the expected lifespan of humans changes, as people upload and choose to run at various rates, and as people are born or die. (A toy sketch of this kind of drifting, model-derived factor follows below.)
I’m saying that we need to allow for an AI discount factor or factors which are not strictly instrumental, but which are not ‘wired in’ either. And especially not a wired-in discount factor of exactly zero!
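Here is a minimal sketch of that drifting, model-derived discount factor (everything in it, including the summary of each person by a single per-year factor, the plain-mean aggregation rule, and the example numbers, is a hypothetical illustration rather than a proposal from either side of this exchange):

```python
# Toy illustration: the AI's working discount factor is re-derived from its
# current models of humans, so it drifts as the modelled population changes
# (births, deaths, uploads running at different subjective speeds) rather
# than being wired in as a constant.
from statistics import mean

def aggregate_discount_factor(modeled_humans):
    """Recompute a working per-year discount factor from the current human models.

    `modeled_humans` maps a model id to that model's per-year discount factor.
    The aggregation rule (an unweighted mean) is a placeholder assumption.
    """
    return mean(modeled_humans.values())

# Hypothetical snapshots of the modelled population at two times.
models_now = {"alice": 0.95, "bob": 0.97}
models_later = {"alice_upload": 0.999, "carol": 0.99}  # longer lifespans, faster substrates

print(aggregate_discount_factor(models_now))    # 0.96
print(aggregate_discount_factor(models_later))  # 0.9945
```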
I think we want a minimally myopic superintelligence—and fairly quickly. We should not aspire to program human limitations into machines in a foolish attempt to mirror their values. If the Met. Office computer is handling orders asking it to look three months out—and an ethics graduate says that it is too future-oriented for a typical human, and that it should be made to look less far out so it better reflects human values—he should be told what an idiot he is being.
We use machines to complement human capabilities, not just to copy them. When it comes to discounting the future, machines will be able to see and influence further—and we would be well-advised to let them.
Much harm is done today due to temporal discounting. Governments look no further than election day. Machines can help put a stop to such stupidity and negligence—but we have to know enough to let them.
As Eliezer says, he doesn’t propose doing much temporal discounting—except instrumentally. That kind of thing can be expected to go up against the wall as part of the “smarter, faster, wiser, better” part of his CEV.
And so we are in disagreement. But I hope you now understand that the disagreement is because our values are different rather than because I don’t understand the concept of values. Ironically our values differ in that I prefer to preserve my values and those of my conspecifics beyond the Singularity, whereas you distrust those values and the flawed cognition behind them, and you wish to have those imperfect human things replaced by something less messy.
I don’t see myself as doing any non-instrumental temporal discounting in the first place. So, for me personally, losing my non-instrumental temporal discounting doesn’t seem like much of a loss.
However, I do think that our temporal myopia is going to fall by the wayside. We will stop screwing over the immediate future because we don’t care about it enough. Myopic temporal discounting represents a primitive form of value—which is destined to go the way of cannibalism and slavery.