“If you have three years of runway saved up, quit your job and use the money to fund yourself. Study the AI landscape full-time. Figure out what to do. Do it.”
In an important way, saying this is more honest than asking for funding: it’s harder for the incentives of someone giving this advice to line up perversely. I’d basically say the same thing, but add “noticing all the incentives you have to believe certain things” and “engineered pandemics” alongside “the AI landscape”, because that’s just my take.
The one thing I have to wonder about is whether doing this on your own helps you get it right. Like, there’s a cadre of depressed rationalists in Berkeley who are trying, in theory, to do this for themselves. It can’t help that there’s social approval to be had for the attempt; that’s a recipe, incentive-wise, for people to incorporate “I care about AI risk and am doing relevant things” into their narrative and self-image. If your self-esteem is tied to doing things that help with AI risk, then I empathize with you pretty hard, because everything that feels like a failure is going to hurt you, both emotionally and productivity-wise.
Grad students have a similar thing, where narrative investment in being good at research burns people out, even productive grad students who have had a few bad months. And if your social group is mostly other grad students who also think that being good at research is what makes someone good and praiseworthy, then of course part of your self-image ends up invested in being good at research.
It’d be hard for a group of grad students to all simultaneously switch to not being emotionally invested in how good everyone else is at research. I’d say the same is true for AI-risk-oriented groups of rationalists who live near each other.
That’s why I say it’s best to study the landscape on your own. With geographic distance from others, even. Keep track of the work everyone else is doing, but keep your social group separate; if you choose your friends well, your self-esteem can be grounded in something more durable than your performance.