With respect, I’ve always found the dynamic inconsistency explanation silly. Such an analysis feels like forcing oneself, in the face of contradictory evidence, to model human beings as rational agents. In other words, you look at a person’s behavior, realize that it doesn’t follow a time-invariant utility function, and say “Aha! Their utility function just varies with time, in a manner leading to a temporal conflict of interests!” But given sufficient flexibility in the utility function, you can model any behavior as that of a utility-maximizing agent. (“Under environmental condition #1, he assigns 1 million utility to taking action A1 at time T_A1, action B1 at time T_B1, etc., and zero utility to all other strategies. Under environmental condition #2...”)
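To make that parenthetical rationalization trick concrete, here is a minimal sketch (the action names and two-action toy setup are mine, purely for illustration): given any observed behavior, a utility function defined after the fact to reward exactly that behavior makes the agent a “utility maximizer” by construction, which is why an unconstrained utility function has no predictive content.

```python
# My own toy illustration of post-hoc rationalization, not anything from the comment above.
# Whatever was actually done at each time gets utility 1; everything else gets 0.
observed = {("Monday", "clean room"), ("Tuesday", "skip cleaning")}

def rationalizing_utility(time, action):
    """Post-hoc utility: 1 for whatever was actually done at that time, 0 otherwise."""
    return 1 if (time, action) in observed else 0

# Every observed (time, action) pair is then a utility maximizer by construction.
alternatives = ["clean room", "skip cleaning"]
for time, action in observed:
    best = max(alternatives, key=lambda a: rationalizing_utility(time, a))
    assert rationalizing_utility(time, action) == rationalizing_utility(time, best)
```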
On the other hand, my personal experience is that whether I complete some beneficial task is largely determined by the mental pain associated with it. This mental pain, which is not directly measurable, depends strongly on the time of day, my caffeine intake, my level of fear, etc. If you can’t measure it and you just look at my actions, this is what you’d say: “Look, some days he cleans his room and some days he doesn’t even though the benefit—a room clean for about 1 day—is the same. When he doesn’t clean his room, and you ask him why, he says he just really didn’t feel like it even though he now wishes he had. Therefore, the utility he assigns to a clean room is varying with time. Dynamic inconsistency, QED!” But the real reason is not that my utility function is varying. It’s that I find cleaning my room soothing on some days, whereas on other days it’s torture.
Such an analysis feels like forcing oneself, in the face of contradictory evidence, to model human beings as rational agents.
Utility theory is a normative theory of rationality; it’s not taken seriously as a descriptive theory anymore. Rationality is about how we should behave, not how we do.
Look, some days he cleans his room and some days he doesn’t even though the benefit—a room clean for about 1 day—is the same.
This is a common confusion about what dynamic inconsistency really means, though I now notice that Wikipedia doesn’t explain it very clearly, so I should give an example:
Monday-self says: I should clean my room on Thursday, even if it will be extremely annoying to do so (within the usual range of how annoying the task can be), because of the real-world benefits of being able to have guests over on the weekend.
Thursday-self says: Oh, but now that it’s Thursday and I’m annoyed, I don’t think it’s worth it anymore.
This is a disagreement between what your Monday-self and your Thursday-self think you should do on Thursday. It’s a straight-up contradiction of preferences among outcomes. There’s no need to think about utility theory at all, although preferences among outcomes (not among items) are exactly what it’s designed to normatively govern.
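To make the reversal concrete, here is a toy numerical sketch. The hyperbolic-discounting model and all the numbers below are my own illustrative assumptions, not something the Monday/Thursday example commits to; they just show how the same costs and benefits can be ranked differently depending on when you evaluate them.

```python
# Toy illustration of a Monday/Thursday preference reversal under hyperbolic
# discounting (model and numbers are assumptions for illustration only).

def hyperbolic_discount(delay_days, k=1.0):
    """Value multiplier for a payoff that is delay_days in the future."""
    return 1.0 / (1.0 + k * delay_days)

BENEFIT = 10.0   # value of being able to host guests on Saturday
COST = 6.0       # annoyance of cleaning, paid on Thursday

def net_value_of_cleaning(days_to_thursday, days_to_saturday):
    """Net value, judged 'today', of the plan 'clean on Thursday'."""
    return (BENEFIT * hyperbolic_discount(days_to_saturday)
            - COST * hyperbolic_discount(days_to_thursday))

# Judged on Monday (cost 3 days away, benefit 5 days away): positive, so
# Monday-self endorses the plan.
print(net_value_of_cleaning(3, 5))   # ~ +0.17
# Judged on Thursday (cost immediate, benefit 2 days away): negative, so
# Thursday-self rejects the very same plan.
print(net_value_of_cleaning(0, 2))   # ~ -2.67
```

With a constant exponential discount factor the ranking of the two plans would not flip as the dates approach; the reversal is exactly the preference contradiction described above.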
ETA: The OP now links to a lesswrongwiki article on dynamic inconsistency.