Visualizing that accomplishment, and its positive rewarding consequences, until you have an urge for it to happen
I so have to try this hack. No agency without urgency?
This fits in reasonably well with an anti-akrasia framework I’ve been thinking over: Rephrase goal X as “I honestly believe that I will achieve X”, and then carry on thinking until you actually have a reasonably solid case for believing that. This particular trick translates into breaking the statement down as “I will force myself to develop an urge to do X, and once I have an urge to do X I will surely do X, because I tend to follow my urges”.
“Status” / “prestige” / “signaling” / “people don’t really care about it” is way overused to explain goal-urge delinkages that can be more simply explained by “humans are not agents”.
I think those two are sort of describing the same thing, but at different levels of abstraction. Thinking in terms of evolutionary adaptations, our professed goals and actual behavior may differ for status/signaling reasons. But thinking about cognition, we just come to the conclusion that we’re not very agenty, and aren’t so bothered about the evolutionary reason why.
Rephrase goal X as “I honestly believe that I will achieve X”, and then carry on thinking until you actually have a reasonably solid case for believing that.
As a rationalist, you can frame that as “I prefer to reward future versions of me that have achieved this by having correctly predicted their behavior.”