I have a question about this conclusion:

When 0<γ<1, you’re strictly more likely to navigate to parts of the future which give you strictly more options (in a graph-theoretic sense). Plus, these parts of the future give you strictly more power.
What about the case where agents have different time horizons? My question is inspired by one of the details of an alternative theory of markets, the Fractal Market Hypothesis. The relevant detail is the investment horizon: how long an investor holds an asset. To oversimplify, the theory argues that markets work normally when there are many investors with different investment horizons; when uncertainty increases, investors shorten their horizons, and when everyone’s horizons get very short, we have a panic.
I thought this might be represented by a step function in the discount rate, but reviewing the paper, it looks like γ is treated as continuous. It also occurs to me that a short horizon should be computationally similar to setting γ=1 and running for fewer turns, but that doesn’t seem like it would work as well for modelling different discount rates on the same MDP.
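For concreteness, here is a minimal sketch of that intuition in Python. Everything in it (the constant reward stream, the function names, the 1/(1−γ) “effective horizon” heuristic) is my own illustrative assumption, not something from the post or the paper:

```python
# Toy comparison: discounting vs. truncating the horizon.
# A constant reward stream makes the correspondence easy to see.

def discounted_return(rewards, gamma):
    """Sum of gamma**t * r_t over the whole stream."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

def truncated_return(rewards, horizon):
    """Undiscounted (gamma = 1) sum over the first `horizon` steps."""
    return sum(rewards[:horizon])

rewards = [1.0] * 50                  # constant reward stream
gamma = 0.8
horizon = round(1 / (1 - gamma))      # heuristic effective horizon: ~5 steps

print(discounted_return(rewards, gamma))   # ~5.0 (geometric series)
print(truncated_return(rewards, horizon))  # 5.0
```

On this toy stream the two quantities match, which fits the “γ=1 over fewer turns” intuition; it is less obvious how a single truncation could stand in for several different discount rates on the same MDP.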
What do you mean by “agents have different time horizons”?
To answer my best guess of what you meant: this post used “most agents do X” as shorthand for “action X is optimal with respect to a large-measure set of reward functions”, but the analysis only considers the single-agent MDP setting, and how, for a fixed reward function or reward function distribution, the optimal action tends to vary with the discount rate. There aren’t multiple formal agents acting in the same environment.
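To illustrate that last point, here is a hedged toy example (the two-action MDP and its payoffs are my own construction, not from the post): with a single fixed reward function, the optimal action flips as γ crosses a threshold.

```python
# Two-action toy MDP: action A pays 1 immediately and nothing after;
# action B pays 0 now and 2 on every step thereafter.
# Which action is optimal depends only on the discount rate gamma.

def value_A(gamma):
    return 1.0  # immediate reward, then zeros forever

def value_B(gamma):
    # 2*gamma + 2*gamma**2 + ... = 2*gamma / (1 - gamma)
    return 2.0 * gamma / (1.0 - gamma)

for gamma in (0.1, 0.3, 0.5, 0.9):
    best = "A" if value_A(gamma) > value_B(gamma) else "B"
    print(f"gamma={gamma}: optimal action is {best}")

# B overtakes A once 2*gamma/(1-gamma) > 1, i.e. gamma > 1/3:
# a myopic agent grabs the immediate reward, a farsighted one waits.
```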
The single-agent MDP setting resolves my confusion; now it’s just a curiosity about directions future work might go. The result that the optimal action varies with the discount rate is essentially what interests me, so refocusing on the single-agent case: what do you think of the discount rate being discontinuous?
To be clear, there isn’t an obvious motivation for this, so my guess for the answer is something like: “Don’t know and didn’t check, because it can’t change the underlying intuition.”
Discontinuous with respect to what? The discount rate just is, and there just is an optimal policy set for each reward function at a given discount rate, so it doesn’t make sense to talk about discontinuity without specifying what it’s discontinuous with respect to. For example, teleportation would be positionally discontinuous with respect to time.
You can, however, talk about other quantities being continuous with respect to change in the discount rate, and the paper proves the continuity of e.g. POWER and optimality probability with respect to γ∈[0,1].
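To make that concrete, here is a small Monte Carlo sketch. The four-state MDP and the IID-uniform rewards are my own stand-ins for the paper’s distribution over reward functions; the point is just that the estimated optimality probability moves smoothly as γ sweeps over [0,1):

```python
import random

# Toy deterministic MDP (my own construction): from the start state,
# action A yields reward r_a once and then nothing; action B yields
# nothing for one step and then reward r_c on every step forever.
# r_a and r_c are drawn IID uniform on [0, 1].

def p_A_optimal(gamma, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_samples):
        r_a = rng.random()
        r_c = rng.random()
        v_A = gamma * r_a                     # one-shot payoff
        v_B = gamma**2 * r_c / (1.0 - gamma)  # delayed reward stream
        wins += v_A > v_B
    return wins / n_samples

for gamma in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"gamma={gamma}: P(A optimal) ~ {p_A_optimal(gamma):.3f}")

# The estimates fall smoothly from about 0.94 toward 0.06 as gamma
# grows, with no jumps, consistent with the continuity results.
```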