If an agent explicitly says, “My values are such that I care more about the state of the universe a thousand years from now than the state of the universe tomorrow”, I have no firm basis for saying that’s not rational. So, yes, I can construct a “rational” agent for which the concern in this post does not apply.
That is, if I am determined simply to be perverse, rather than to be concerned with preventing the destruction of the universe by the sort of agents anyone is actually likely to construct.
An agent like that doesn't have a time-discounting function. It only makes sense to talk about a time-discounting function when your agent (like every rational expectation-maximizing agent ever discussed, AFAIK, anywhere, ever, except in the comment above) has a utility function that evaluates the state of the world at a given moment, and whose utility over possible timelines weights those momentary evaluations by some function (possibly a constant one) describing its level of concern for the world state as a function of time.
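To spell that framework out (my notation, not anything from the original post): write $u(s_t)$ for the momentary utility of the world state at time $t$, and $d(t)$ for the weight placed on that moment. The utility of a timeline is then

$$U(s_0, s_1, s_2, \ldots) \;=\; \sum_{t=0}^{\infty} d(t)\, u(s_t),$$

where exponential discounting takes $d(t) = \gamma^t$ for some $0 < \gamma < 1$, "no discounting" takes $d(t) = 1$, and the agent described in the comment above would need a $d(t)$ that increases with $t$.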
When your agent is like that, it runs into the problem described in this post. And, if you are staying within the framework of temporal discounting, you have only a few choices:
Don’t care about the future. Eventually, accidentally destroy all life, or fail to preserve it from black swans.
Use hyperbolic discounting, or some other irrational discounting scheme, even though this may be like adding a contradiction into a system that uses resolution; a sketch after this list shows the kind of preference reversal involved. (I think the problems with hyperbolic discounting may go beyond its irrationality, but that would take another post.)
Use a constant function weighting points in time (don’t use temporal discounting). Probably end up killing lots of humans.
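To make the hyperbolic-discounting problem concrete, here is a toy sketch (my own numbers and function names, nothing from the post): two rewards, a smaller one arriving sooner and a larger one arriving later, evaluated from two different vantage points in time.

```python
# Toy illustration: hyperbolic discounting reverses its preference between the
# same two payoffs as they draw nearer, while exponential discounting ranks
# them the same way from every vantage point.

def hyperbolic(delay, k=1.0):
    return 1.0 / (1.0 + k * delay)

def exponential(delay, gamma=0.9):
    return gamma ** delay

def present_value(reward, arrives_at, now, discount):
    return reward * discount(arrives_at - now)

small_soon = (10.0, 5)   # 10 utils arriving at t=5
large_late = (20.0, 8)   # 20 utils arriving at t=8

for now in (0, 5):
    for name, d in (("hyperbolic", hyperbolic), ("exponential", exponential)):
        a = present_value(*small_soon, now, d)
        b = present_value(*large_late, now, d)
        pick = "small-soon" if a > b else "large-late"
        print(f"t={now}, {name:11}: small-soon={a:5.2f}, large-late={b:5.2f} -> {pick}")

# Hyperbolic flips from large-late (viewed from t=0) to small-soon (at t=5);
# exponential picks the same option from both vantage points.
```

That flip at t=5 is the dynamic inconsistency people mean when they call hyperbolic discounting irrational; exponential discounting is the discount curve that avoids it.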
If you downvoted the topic as unimportant because rational expectation-maximizers can take any attitude towards time-discounting they want, why did you write a post about how they should do time-discounting?
BTW, genes are an example of an agent that arguably has a reversed time-discounting function. Genes "care" about their eventual, "equilibrium" level in the population. This is a tricky example, though, because genes only "care" about the future retrospectively; the more-numerous genes that "didn't care" disappeared. But the body as a whole can be seen as maximizing the proportion of the population that will contain its genes in the distant future. (I believe this is relevant to theories of aging that attempt to explain the Gompertz curve.)
Kinda, but genes are not in practice capable of looking a million years ahead; they are lucky if they can see or influence two generations' worth ahead. So instrumental discounting applies here too.