You recently mentioned the possibility of dying in the interim. There's also the possibility of aging over the same period. Such factors can affect utility calculations.
For example: I would much rather have my grandmother’s inheritance now than years down the line, when she finally falls over one last time—because I am younger and fitter now.
Significant temporal discounting makes sense sometimes—for example, if there is a substantial chance of extinction per unit time. I do think a lot of discounting is instrumental, though—rather than being a reflection of ultimate values—due to things like the future being expensive to predict and hard to influence.
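To make the extinction example concrete, here is a minimal sketch in Python, with illustrative numbers of my own invention: if an agent has a constant probability of surviving each year, the expected value of a delayed reward shrinks geometrically with the delay, which is exactly an exponential discount factor.

# A reward only pays off if you survive to collect it. With a
# constant per-year survival probability, the expected value of a
# future reward declines geometrically: exponential discounting.

def discounted_value(reward, years_away, annual_survival_prob):
    """Expected value of a delayed reward, discounted only for the
    chance of not being around to receive it."""
    return reward * annual_survival_prob ** years_away

# Illustrative numbers (an assumption, not an estimate of real risk):
# a 99% chance of surviving each year.
print(discounted_value(100.0, 1, 0.99))    # ~99.0
print(discounted_value(100.0, 50, 0.99))   # ~60.5
print(discounted_value(100.0, 500, 0.99))  # ~0.66

The point of the sketch is only that a hazard rate per unit time is equivalent to exponential discounting; it says nothing about discounting as an ultimate value.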
My brain spends more time thinking about tomorrow than about this time next year, because I am more confident about what will be going on tomorrow, and am better placed to influence it by developing cached actions, etc. Next year will be important too, but there will be days before it in which to prepare for it closer to the time, when I am better placed to do so. The difference is not because I will be older then, or because I might die in the meantime. It is due to instrumental factors.
Of course, one reason this is of interest is that we want to know what values to program into a superintelligence. That superintelligence will probably not age, and will stand a relatively low chance of extinction per unit time. I figure its ultimate utility function should have very little temporal discounting.
The problem with wiring discount functions into the agent's ultimate utility function is that the utility function is exactly what you want the agent to preserve as it self-improves. Much discounting is actually due to resource limitations. It makes sense for such discounting to be dynamically reduced as more resources become cheaply available. It doesn't make much sense to wire in short-sightedness.
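As a hedged illustration of that last point (the function, its half-life parameterisation, and the numbers are all hypothetical), contrast a wired-in discount rate with one derived from the agent's current predictive horizon: when self-improvement extends the horizon, the derived discount fades away on its own, with no change to the utility function.

# An instrumental discount tied to how far ahead the agent can
# usefully predict. Nothing here is wired into the utility function;
# the discount weakens automatically as resources improve.

def instrumental_discount(years_away, prediction_horizon):
    """Weight on a reward `years_away` in the future, halving once
    per prediction horizon (a hypothetical parameterisation)."""
    return 0.5 ** (years_away / prediction_horizon)

# With modest resources, the agent is effectively short-sighted...
print(instrumental_discount(10, prediction_horizon=5))    # 0.25
# ...but once self-improvement extends its horizon, the same
# ten-year reward is barely discounted at all.
print(instrumental_discount(10, prediction_horizon=200))  # ~0.97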