Time travel, the past “still existing”, and utilitarianism? I don’t buy any of that either. In the context of artificial intelligence, though, I do agree that building discounting functions into the agent’s ultimate values looks like bad news.
Discounting functions arise because agents don’t know much about the future, and can’t predict or control it very well. But the extent to which they can’t is a function of the circumstances and of their own capabilities. If you wire temporal discounting into the ultimate preferences of a super-Deep Blue, then it can never self-improve its way to a longer prediction horizon as it gains computing power: you have built an unnecessary limitation into it. Better to wire in no temporal discounting at all, and let the machine itself work out how far it can predict and control the future, and from that the relative value of the present.
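To make the distinction concrete, here is a minimal sketch of the two designs. Everything in it (the function names, the discount factor, the confidence numbers) is hypothetical and illustrative, not anything from the argument above; it just assumes future rewards get weighted either by a fixed built-in factor or by the agent’s own forecast confidence.

```python
def value_hardwired(rewards, discount=0.9):
    """Terminal discounting: every future reward is devalued by a
    fixed per-step factor, no matter how predictable it is. Nothing
    the agent learns can change this weighting."""
    return sum(r * discount**t for t, r in enumerate(rewards))

def value_instrumental(rewards, prediction_confidence):
    """No terminal discounting: future rewards are weighted only by
    the agent's confidence in its own forecasts. As the agent
    self-improves and its long-horizon confidence rises, distant
    rewards automatically count for more."""
    return sum(r * c for r, c in zip(rewards, prediction_confidence))

rewards = [1.0, 1.0, 1.0, 1.0]

# A weak predictor: forecast confidence decays quickly with horizon.
weak = [0.9**t for t in range(4)]
# After self-improvement: confidence decays far more slowly.
strong = [0.99**t for t in range(4)]

print(value_hardwired(rewards))             # ~3.44, fixed forever
print(value_instrumental(rewards, weak))    # ~3.44 today...
print(value_instrumental(rewards, strong))  # ~3.94 after improving
```

The point of the sketch: the first agent is stuck at ~3.44 no matter how good it gets at prediction, while the second agent’s effective horizon lengthens on its own as its forecasting improves.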