Think about your beliefs’ consequences a lot and in detail. Personalize your imagination of the outcomes, picturing the consequences for you and for others as vividly as possible.
I’m pretty sure that’s the mechanism (after working on the neuroscience of motivation for a long time), but I’m not sure you should actually do that for existential risk from AGI. I work full-time on AGI risk, but I really enjoy not fully feeling the consequences of my beliefs with respect to doom (I put it at roughly 50%, since so much of the logic is still poorly worked through). Would I work a little harder if I were more terrified? Probably for a while, but I might well burn out.
One possible solution is to use the opposite type of motivation: think about the consequences of succeeding at aligning AGI (or avoiding other dangers). Think about them in as much detail and as often as you can. The imagery has to be vivid enough to evoke emotions; tying those emotional experiences to the concepts is what makes you feel your beliefs.
Imagining a vast number of humans, transhumans, and sentient AIs living their lives to the fullest and enjoying activities we can barely imagine is a much more fun way to motivate yourself.
Challenge yourself to imagine how much incredible fun people might have if we get aligned superintelligence. (I like to assume near-perfect simulations, so that people can have lots of challenges and adventures without getting in each other’s way, but there are more mundane ways to have immense fun, too.)
I’m not sure if it’s as effective, but for your own sake I’d recommend that over imagining how bad failure would be.