I decided to apply, and now I’m wondering what the best schools are for AI safety.
After some preliminary research, I’m thinking these are the most likely schools to be worth applying to, in approximate order of priority:
UC Berkeley (top choice)
CMU
Georgia Tech
University of Washington
University of Toronto
Cornell
University of Illinois Urbana-Champaign
University of Oxford
University of Cambridge
Imperial College London
UT Austin
UC San Diego
I’ll probably cut this list down significantly after researching the schools’ faculty and their relevance to AI safety, especially for schools lower on this list.
I might also consider the CDTs in the UK mentioned in Stephen McAleese’s comment. But I live in the U.S. and am hesitant about moving abroad—maybe this would involve some big logistical tradeoffs even if the school itself is good.
Anything big I missed? (Unfortunately, the Stanford deadline is tomorrow and the MIT deadline was yesterday, so those aren’t gonna happen.) Or, any schools that seem obviously worse than continuing to work as a SWE at a Big Tech company in the Bay Area? (I think the fact that I live near Berkeley is a nontrivial advantage for me, career-wise.)
UC Berkeley has historically had the largest concentration of people thinking about AI existential safety. It's also closely coupled to the Bay Area safety community. I think you're possibly underrating Boston universities (e.g. Harvard and Northeastern, since you say the MIT deadline has passed). There is a decent safety community there, in part due to excellent safety-focused student groups. Toronto is also especially strong on safety imo.
Generally, I would advise weighting advisors with aligned interests over universities (this relates to Neel's comment about interests), though intellectual environment does of course matter. When you apply, you'll want to name, in your statement of purpose, some advisors you might want to work with.
Do you know what topics within AI safety you're interested in? Or are you unsure, and so looking for something that lets you keep your options open?
Yeah, I’m particularly interested in scalable oversight over long-horizon tasks and chain-of-thought faithfulness. I’d probably be pretty open to a wide range of safety-relevant topics though.
In general, what gets me most excited about AI research is trying to come up with the perfect training scheme to incentivize the AI to learn what you want it to: things like HCH, Debate, and the ELK contest were really cool to me. So I'm a bit less interested in areas like mechanistic interpretability or very theoretical math.