FWIW, I genuinely don’t understand your perspective. The extent to which you discount the future depends on your chances of enjoying it, but also on factors like your ability to predict it and your ability to influence it. The latter are functions of your abilities, of what you are trying to predict, and of the current circumstances.
You really, really do not normally want to put those sorts of things into an agent’s utility function. You really, really do want to calculate them dynamically, depending on the agent’s current circumstances, prediction ability levels, actuator power levels, previous experience, etc.
Attempts to put that sort of thing into the utility function would normally tend to produce an inflexible agent, one that has more difficulty adapting and improving. Trying to incorporate all the dynamic learning needed to deal with the issue into the utility function might be possible in principle, but it is a really bad idea.
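To make the contrast concrete, here is a minimal sketch in Python (my own toy example, with invented function names and numbers): the utility function itself carries no time preference, and the planner weights future outcomes using whatever survival and predictability estimates the agent currently holds, so the effective discounting shifts as its circumstances do.

    def utility(outcome_value):
        """Terminal values only; no time preference wired in."""
        return outcome_value

    def instrumental_weight(years_ahead, annual_survival_p, prediction_decay):
        """Computed dynamically from the agent's current estimates, not stored
        in the utility function: chance of still being around to enjoy the
        outcome, times a crude proxy for how reliable forecasts that far out are."""
        return (annual_survival_p ** years_ahead) * (prediction_decay ** years_ahead)

    def plan_value(outcomes, annual_survival_p=0.999, prediction_decay=0.97):
        """outcomes: list of (years_ahead, outcome_value) pairs for one plan."""
        return sum(
            instrumental_weight(t, annual_survival_p, prediction_decay) * utility(v)
            for t, v in outcomes
        )

    # Two toy plans: a small payoff soon vs. a large payoff much later.
    print(plan_value([(1, 10.0)]))    # roughly 9.7
    print(plan_value([(50, 100.0)]))  # roughly 20.7: weighted down instrumentally

The point is that nothing about the weighting lives in utility(); change the agent's estimates and the weighting changes with them.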
Hopefully you can see my reasoning on this issue. I can’t see your reasoning, though. I can barely even imagine what it might possibly be.
Maybe you are thinking that all future events have roughly the same level of unpredictability, and roughly the same level of difficulty in influencing them, so the whole issue can be dealt with by one (or a small number of) temporal discounting “fudge factors”, and that evolution built us that way because it was too stupid to do any better.
You apparently denied that resource limitation results in temporal discounting. Maybe that is the problem (if so, see my other reply here). However, now you seem to have acknowledged that an extra year of time in which to worry helps with developing plans. What I can see doesn’t seem to make very much sense.
You really, really do not normally want to put those sorts of things into an agent’s utility function.
I really, really am not advocating that we put instrumental considerations into our utility functions. The reason you think I am advocating this is that you have this fixed idea that the only justification for discounting is instrumental. So every time I offer a heuristic analogy explaining the motivation for fundamental discounting, you interpret it as a flawed argument for using discounting as a heuristic for instrumental reasons.
Since it appears that this will go on forever, and I don’t discount the future enough to make the sum of this projected infinite stream of disutility seem small, I really ought to give up. But somehow, my residual uncertainty about the future makes me think that you may eventually take Cromwell’s advice.
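For concreteness, here is the arithmetic behind that quip, with an invented per-exchange disutility d and discount factor gamma, neither of which appears anywhere above:

    % Illustrative only: constant per-exchange disutility d > 0, discount factor 0 < \gamma < 1.
    \sum_{t=0}^{\infty} \gamma^{t} d \;=\; \frac{d}{1-\gamma}

For gamma near 1 (little discounting) the total stays large; gamma = 0.99 gives 100d, for instance. Only heavy discounting would make the projected stream look small, hence the temptation to give up.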
You really, really do not normally want to put those sorts of things into an agent’s utility function.
I really, really am not advocating that we put instrumental considerations into our utility functions. The reason you think I am advocating this is that you have this fixed idea that the only justification for discounting is instrumental.
To clarify: I do not think the only justification for discounting is instrumental. My position is more like: agents can have whatever utility functions they like (including ones with temporal discounting) without having to justify them to anyone.
However, I do think there are some problems associated with temporal discounting. Temporal discounting sacrifices the future for the sake of the present. Sometimes the future can look after itself—but sacrificing the future is also something which can be taken too far.
Axelrod suggested that when the shadow of the future grows too short, more defections happen. If people don’t sufficiently value the future, reciprocal altruism breaks down. Things get especially bad when politicians fail to value the future. We should strive to arrange things so that the future doesn’t get discounted too much.
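To put a number on the “shadow of the future” point, here is a standard back-of-the-envelope version (my choice of payoffs and strategies, not a quotation from Axelrod): in the iterated prisoner’s dilemma with payoffs T > R > P > S and continuation weight w, cooperating forever against a partner who punishes defection with permanent defection is worth R/(1-w), while defecting once and taking the punishment is worth T + wP/(1-w), so cooperation only survives when

    \frac{R}{1-w} \;\ge\; T + \frac{wP}{1-w}
    \quad\Longleftrightarrow\quad
    w \;\ge\; \frac{T-R}{T-P}.

With the usual illustrative payoffs T = 5, R = 3, P = 1, S = 0 that threshold is w >= 1/2: let the shadow of the future shrink below it and defection pays, which is exactly the breakdown of reciprocal altruism described above.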
Instrumental temporal discounting doesn’t belong in ultimate utility functions. So, we should figure out which parts of our temporal discounting are instrumental and exclude them.
If we are building a potentially-immortal machine intelligence that has a low chance of dying and doesn’t age, then mortality and ageing are further causes of temporal discounting that could be discarded as well.
What does that leave? Not very much, IMO. The machine will still have some finite chance of being hit by a large celestial body, at least for a while. It might die, but its chances of dying vary over time, and its degree of temporal discounting should vary in response. Once again, you don’t wire this in; you let the agent figure it out dynamically.
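As a sketch of what “figure it out dynamically” could look like in practice (my own toy example, with invented hazard numbers): the agent keeps an estimate of its per-period chance of destruction, and the weight it gives a payoff t periods away is simply its estimated probability of surviving that long, recomputed whenever the hazard estimate changes.

    def survival_weight(hazards):
        """Probability of surviving every period in `hazards`, a list of
        per-period probabilities of destruction."""
        weight = 1.0
        for h in hazards:
            weight *= (1.0 - h)
        return weight

    # Scenario A: a constant small impact risk per period, for 100 periods.
    constant_risk = [0.001] * 100
    # Scenario B: the same risk now, but the agent expects to deflect the
    # threat after period 20, so later hazards drop sharply.
    mitigated_risk = [0.001] * 20 + [0.00001] * 80

    print(survival_weight(constant_risk))   # about 0.905: noticeable discounting
    print(survival_weight(mitigated_risk))  # about 0.979: discounting shrinks as the risk does

Nothing here is wired into the utility function; revise the hazard estimates and the effective discounting revises itself.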
Given that you also believe that distributing your charitable giving over many charities is ‘risk management’, I suppose that should not surprise me.