Zvi mentioned hyperbolic discounting. What if an agent's preferences really are described by hyperbolic discounting? Then the agent's different temporal versions have different preferences, so they are essentially different agents. Consider just two such agent-moments. Each agent-moment prefers that both abstain from the garlic bread over both eating it, but prefers most of all that it eats while the other abstains.
Since they have different preferences, and the earlier agent-moment can't physically force the later one to make a particular choice, the analogy with the Prisoner's Dilemma (PD) holds up well, and Timeless Decision Theory (TDT) does seem relevant here.
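To make the PD structure concrete, here's a minimal sketch in Python; the numeric payoffs are illustrative assumptions, not anything from the post. "Eat" plays the role of defection and "abstain" of cooperation:

```python
# Hypothetical payoffs for the garlic-bread dilemma between two
# agent-moments (the numbers are illustrative assumptions).
PAYOFFS = {
    # (my_action, other_action): my_payoff
    ("eat", "abstain"): 3,      # best: I indulge while the other abstains
    ("abstain", "abstain"): 2,  # good: neither of us eats
    ("eat", "eat"): 1,          # bad: we both eat
    ("abstain", "eat"): 0,      # worst: I abstain while the other indulges
}

# Whatever the other agent-moment does, "eat" yields the higher payoff,
# so a straightforwardly causal reasoner eats -- yet (eat, eat) is worse
# for both than (abstain, abstain). That is the PD payoff structure.
for other in ("eat", "abstain"):
    best = max(("eat", "abstain"), key=lambda me: PAYOFFS[(me, other)])
    print(f"if the other agent-moment plays {other!r}, my best reply is {best!r}")
```

Both loop iterations print `'eat'` as the best reply, which is exactly the dominance argument that makes defection the causal-decision-theoretic answer.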
Indeed, there is nothing epistemically irrational about having a hyperbolic time preference. However, it does mean that a classical decision algorithm is not conducive to achieving long-term goals: hyperbolic discounting produces preference reversals, so plans made by earlier agent-moments get overturned by later ones as the tempting option draws near.
One way around this problem is to use TDT; another is to modify your preferences to be geometric (i.e., exponential discounting, which is time-consistent).
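Here's a minimal sketch of the preference reversal; the discount rates and reward values are illustrative assumptions. An agent choosing between a smaller-sooner reward (100 at t = 10) and a larger-later one (110 at t = 11) flips its ranking as the earlier date approaches under hyperbolic discounting, but never under geometric discounting:

```python
def hyperbolic(amount, delay, k=1.0):
    """Hyperbolic discounting: value falls off as 1 / (1 + k * delay)."""
    return amount / (1.0 + k * delay)

def geometric(amount, delay, delta=0.9):
    """Geometric (exponential) discounting: value falls off as delta ** delay."""
    return amount * delta ** delay

# Smaller-sooner reward S at t = 10, larger-later reward L at t = 11.
S, T_S = 100.0, 10
L, T_L = 110.0, 11

for now in (0, 10):
    hs, hl = hyperbolic(S, T_S - now), hyperbolic(L, T_L - now)
    gs, gl = geometric(S, T_S - now), geometric(L, T_L - now)
    print(f"t={now}: hyperbolic prefers {'L' if hl > hs else 'S'}, "
          f"geometric prefers {'L' if gl > gs else 'S'}")
# t=0:  hyperbolic prefers L, geometric prefers S
# t=10: hyperbolic prefers S, geometric prefers S
```

The hyperbolic agent plans at t = 0 to wait for the larger reward, then defects against its own plan at t = 10. The geometric agent's ranking can't flip, because the ratio of the two discounted values, (L/S) * delta ** (T_L - T_S), doesn't depend on the evaluation time; exponential discounting is the only discounting schedule with this time-consistency property.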
A geometric time preference is a bit like a moral preference… it's a para-preference: not something you want in the first place, but something you benefit from wanting when interacting with other agents (including your future self).