As a personal anecdote, I have never felt anything that I was inclined to call “willpower depletion”. As a teenager, I decided that “willpower” was just a loaded term/metaphor for dynamic consistency, and that calling it “willpower” was harmful to the way people thought about themselves as agents. I decided that other people’s feeling of “willpower depletion” was nothing more than sensing oneself in transition from one value system to another.
But claims that the theorized “executive system”, a cognitive system whose function is almost by definition to maintain dynamic consistency, was seated in the prefrontal cortex and needed more glucose than other brain functions, made me consider that maybe “willpower” is in fact an appropriate term… but I still never actually felt anything like a “depleting resource”, which I found confusing.
So I’ll be less confused again if the belief dependency you mention is correct, and causal. In any case, I hope it is, so that people can achieve better dynamic consistency by not thinking of it as “expendable”. I’m at least one example consistent with that theory.
With respect, I’ve always found the dynamic inconsistency explanation silly. Such an analysis feels like forcing oneself, in the face of contradictory evidence, to model human beings as rational agents. In other words, you look at a person’s behavior, realize that it doesn’t follow a time-invariant utility function, and say “Aha! Their utility function just varies with time, in a manner leading to a temporal conflict of interests!” But given sufficient flexibility in the utility function, you can model any behavior as that of a utility-maximizing agent. (“Under environmental condition #1, he assigns 1 million utility to taking action A1 at time T_A1, action B1 at time T_B1, etc., and zero utility to all other strategies. Under environmental condition #2...”)
On the other hand, my personal experience is that my decision whether to complete some beneficial goal is largely determined by the mental pain associated with it. This mental pain, which is not directly measurable, depends strongly on the time of day, my caffeine intake, my level of fear, etc. Since you can’t measure it, if you were to just look at my actions, this is what you’d say: “Look, some days he cleans his room and some days he doesn’t, even though the benefit—a room clean for about 1 day—is the same. When he doesn’t clean his room, and you ask him why, he says he just really didn’t feel like it, even though he now wishes he had. Therefore, the utility he assigns to a clean room is varying with time. Dynamic inconsistency, QED!” But the real reason is not that my utility function is varying. It’s that I find cleaning my room soothing on some days, whereas other days it’s torture.
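The “sufficient flexibility” objection above can be made concrete: for any fixed sequence of observed actions, you can construct a utility function under which exactly that behavior is optimal, so “he maximizes some utility function” rules nothing out by itself. A toy sketch (the action names are made up for illustration):

```python
def rationalizing_utility(observed_actions):
    """Build a degenerate 'utility function' under which a fixed,
    arbitrary behavior sequence comes out as utility-maximizing."""
    def utility(actions):
        # Assign maximal utility to exactly the observed behavior,
        # and zero to every alternative.
        return 1_000_000 if list(actions) == list(observed_actions) else 0
    return utility

# Whatever someone actually did...
observed = ["snooze", "skip_gym", "leave_room_messy"]
u = rationalizing_utility(observed)

print(u(observed))                             # 1000000: the behavior is "optimal"
print(u(["wake_early", "gym", "clean_room"]))  # 0
```

Because this construction works for any behavior whatsoever, a utility-based story only gains empirical content when the utility function is constrained in advance.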
Utility theory is a normative theory of rationality; it’s not taken seriously as a descriptive theory anymore. Rationality is about how we should behave, not how we do.
This is a common confusion about what dynamic inconsistency really means, although I’m now noticing that Wikipedia doesn’t explain it very clearly, so I should give an example:
Monday-self says: I should clean my room on Thursday, even if it will be extremely annoying to do so (within the usual range of how annoying the task can be), because of the real-world benefits of being able to have guests over on the weekend.
Thursday-self says: Oh, but now that it’s Thursday and I’m annoyed, I don’t think it’s worth it anymore.
This is a disagreement between what your Monday-self and your Thursday-self think you should do on Thursday. It’s a straight-up contradiction of preferences among outcomes. There’s no need to think about utility theory at all, although preferences among outcomes, not items, are exactly what it’s designed to normatively govern.
ETA: The OP now links to a lesswrongwiki article on dynamic inconsistency.
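The Monday-self/Thursday-self reversal has a standard formal model: hyperbolic discounting, where both selves apply the same discount curve v = r/(1 + k·d) yet still disagree about the Thursday action, because the cost looms much larger once its delay hits zero. A minimal sketch with made-up numbers (the costs, benefit, and k are illustrative, not measured):

```python
def present_value(reward, delay_days, k=1.0):
    """Hyperbolically discounted value of a reward `delay_days` away."""
    return reward / (1 + k * delay_days)

def net_value_of_cleaning(days_until_thursday):
    # Illustrative numbers: cleaning on Thursday costs 6 "utils" of
    # annoyance; a clean room for weekend guests is worth 10 utils,
    # enjoyed about 2 days after the cleaning happens.
    cost = present_value(6, days_until_thursday)
    benefit = present_value(10, days_until_thursday + 2)
    return benefit - cost

monday_view = net_value_of_cleaning(3)    # Monday-self, 3 days out
thursday_view = net_value_of_cleaning(0)  # Thursday-self, day of

print(monday_view > 0)    # True: Monday-self endorses cleaning on Thursday
print(thursday_view > 0)  # False: Thursday-self no longer thinks it's worth it
```

Under exponential discounting the sign of this comparison never flips as the date approaches; the reversal is specific to hyperbolic-style curves, which is one precise sense of “dynamic inconsistency”.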
Thank you for introducing me to dynamic (in)consistency; it is extremely helpful for resolving my understanding of willpower. I have similar experiences to those you describe: except in occasions of grave illness, I experience my will as infinite, and any appearance of depletion/giving up seems to arise from a divided will (e.g. wanting to keep riding until 2 hours is up vs. wanting to be able to move without pain tomorrow).
Dynamic consistency seems like an incredibly worthwhile area to self-improve in.
How would one go about improving in that area? I can’t see a straightforward way to do it.
I’ve never heard of willpower depletion. I’ve heard people say that they don’t have enough willpower, but not that they’re out of willpower. Surely willpower is a long-term stat like CON, not a diminishable resource like HP.
I’ve never thought that I’ve had much willpower (possibly a nocebo effect originally generalised from a few early cases?). But on those occasions where I have used my willpower, this has always made subsequent uses easier. I can’t imagine using it up.
Maybe you leveled up.
In fact, previous research has shown that it is a lot like HP in many situations. See the citations near the beginning of the article.
Yeah, I see that now, but it’s still very weird to me. And the new article seems to explain why: I think of willpower as like CON, so for me it’s like CON. Others think of it as HP, so for them it’s like HP. I just didn’t realise that there was anybody like those others before!
For me it’s more like the limit break meter. When things get bad enough, it comes.
On the other hand, sometimes it’s like a combo meter—the more I do, the more I keep doing.
Maybe it’s more that I have a constant amount, and the challenge I’m facing varies in time.