I’m curious about whether you still believe the model in this post. At the time it seemed plausible to me but now I don’t buy it.
It seems most likely that procrastination is not aimed at avoiding pain at all. A priori we might have thought that evolutionary optimization influences our decisions only by picking what we consciously want and by picking what gets classified as “painful” or “pleasurable.” But that doesn’t seem to fit the evidence very well: we seem to optimize for many things other than what we consciously believe we want, and in ways that we consciously believe aren’t reasonable. Attempts to mash the simple theory onto the evidence result in escalating craziness about the emotional valence of thoughts themselves, etc.
The whole thing reminds me of Nate’s post on stamp collectors. There may be some way to cash everything out in terms of the pain/micro-stampyness of individual thoughts, but that is probably not a good model.
Relatedly, it seems to me like you are underestimating the quality of the RL algorithms used in our brains. For tasks that get repeated over and over again, I think that most animals can easily learn to handle the 5 minute delay without conscious reasoning.
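To make the RL point concrete, here is a toy sketch (my own illustration, not from the original post): tabular TD(λ) on a short chain of states where reward arrives only at the very end, standing in for a delay between action and payoff. Eligibility traces let the delayed reward update every earlier state in a single pass, so repeated trials are enough for the value of the delayed reward to propagate all the way back to the start, with no explicit reasoning about the delay. All names and parameters here are illustrative choices.

```python
# Toy illustration: tabular TD(lambda) with eligibility traces on a chain
# where reward is delivered only at the end. Repetition alone is enough
# for the delayed reward's value to propagate back to the earliest state.

N_STATES = 6    # states 0..5; reward is delivered on leaving state 5
GAMMA = 0.95    # discount factor
ALPHA = 0.1     # learning rate
LAMBDA = 0.9    # trace decay

def train(episodes=500):
    # v[N_STATES] is the terminal state and keeps value 0.
    v = [0.0] * (N_STATES + 1)
    for _ in range(episodes):
        traces = [0.0] * (N_STATES + 1)
        for s in range(N_STATES):
            s_next = s + 1
            reward = 1.0 if s_next == N_STATES else 0.0
            # TD error for this transition.
            delta = reward + GAMMA * v[s_next] - v[s]
            traces[s] += 1.0
            # The single end-of-chain reward updates every state
            # visited earlier in the episode, via its trace.
            for i in range(N_STATES):
                v[i] += ALPHA * delta * traces[i]
                traces[i] *= GAMMA * LAMBDA
    return v

values = train()
# After repeated trials, values increase monotonically toward the reward,
# and even state 0 (furthest from the payoff) has clearly positive value.
```

The point of the sketch is just that a very dumb, model-free learner bridges the delay through repetition alone, which is the behavior being attributed to animals above.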