I think a choice between just two options is a limited action space that's unlikely to contain the best thing you could be doing. I wrote about why a limited action space can remove the vast majority of your impact here. Have you considered:

- careers outside of academia
- programs outside of Europe
- PhD programs in computer science
- taking a gap year to learn the basics of some subfield of alignment, possibly doing some independent research, then doing a PhD in alignment somewhere like CHAI
- developing aptitudes other than math ability, such that you can become the Pareto-best in the world at some combination of skills
- gathering more information on your comparative advantage before committing to a large career decision
- internships
- doing something like Cambridge AGISF to see which theoretical problems suit you best
- testing your skill at machine learning
- distilling some papers as a cheap test of your technical writing skill
It’s also possible to do harm if you advance AI capabilities more than safety, so any plan to go into AI research has to have a story for how you differentially advance safety.
Thank you very much for your comment. Without delving into the details, some of these routes seem infeasible right now, but others don't. You've also given me some useful ideas and resources I hadn't considered or read about yet.