I want to sidestep critique of “more exploratory AI safety PhDs” for a moment and ask: why doesn’t MIRI sponsor high-calibre young researchers with a 1-3 year basic stipend and mentorship? And why did MIRI let Vivek’s team go?
I don’t speak for MIRI, but broadly I think MIRI thinks that roughly no existing research is hopeworthy, and that this isn’t likely to change soon. I think that, anyway.
In discussions like this one, I’m conditioning on something like “it’s worth it, these days, to directly try to solve AGI alignment”. That seems assumed in the post, seems assumed in lots of these discussions, seems assumed by lots of funders, and it’s why above I wrote “the main direct help we can give to AGI alignment” rather than something stronger like “the main help (simpliciter) we can give to AGI alignment” or “the main way we can decrease X-risk”.
I’m reading this as you saying something like “I’m trying to build a practical org that successfully onramps people into doing useful work. I can’t actually do that for arbitrary domains that people aren’t providing funding for. I’m trying to solve one particular part of the problem and that’s hard enough as it is.”
Is that roughly right?
Fwiw I appreciate your Manifund regrantor Request for Proposals announcement.
I’ll probably have more thoughts later.
Yes to all this, but also I’ll go one level deeper. Even if I had tons more Manifund money to give out (and assuming all the talent needs discussed in the report are saturated with funding), it’s not immediately clear to me that “giving 1-3 year stipends to high-calibre young researchers, no questions asked” is the right play if they don’t have adequate mentorship, the ability to generate useful feedback loops, researcher support systems, access to frontier models if necessary, etc.