I don’t want to apply to programs that aren’t worth it (it’s possible my qualifications are sufficient for some of the ones I’ll apply to, but I have little context to judge).
I don’t myself work in AI risk, so I’m not the ideal person to respond, but I’ve been in the community for quite a while, and given that nobody who actually works in the field has answered, I’ll try to give my answer:

80,000 Hours has a general guide for AI risk: https://80000hours.org/articles/ai-policy-guide/ They also published a podcast.

One of the key features is that there’s a pretty high bar to being paid to work in AI safety.

The bar for a MIRI internship is no lower than the bar for getting into a top university. I would expect that applying for a Master’s at the universities the 80,000 Hours article lists is one of your best bets.

While those universities do have high tuition and you will likely be in debt after leaving, a computer science degree from those universities gives access to very high-paying jobs, so the debt can be worth it even if you don’t end up going into AI risk.

Thank you.

I saw that guide a while back and it was helpful, but it helped more with “what” than “how”, although it still does “how” better than most guides. For the most part, I’m concerned about things I’m missing that are obvious if you have the right context — like that, given my goals, there are better things to be prioritizing, or that I should be applying to X to achieve Y.

I’ve been thinking about it for a while since posting, and I think I agree with you that applying for a Master’s is the best route for me. (By the way, did you mean the universities the article mentions under the “Short-term Policy Research Options” subheading? I didn’t find any others.)

When it comes to choosing universities, there’s:

> One could also do academic research at any university, though it helps to be somewhere with enough people working on related issues to form a critical mass. Examples of universities with this sort of critical mass include the University of Oxford, University of Cambridge, UC Berkeley, MIT, the University of Washington, and Stanford.

While that passage isn’t directly about where to do your Master’s, those are places where there are people who can support you in learning about AI safety research.