That’s awesome that you are looking to work on AI safety. Here are some options that I don’t see you mentioning:
If you’re able to get a job working on AI or machine learning, you’ll be getting paid to improve your skills in that area. So you might choose to direct your study and independent projects towards building a resume for AI work (e.g. by participating in Kaggle competitions).
If you get into the right graduate program, you’ll be able to take classes and do research into AI and ML topics.
Probably quite difficult, but if you can build an app that uses AI or machine learning to make money, you’d be making money and studying AI at the same time. For example, you could earn money through this stock market prediction competition.
80,000 Hours has a guide on using your career to work on AI risk.
MIRI has set up a research guide for getting the background necessary to do AI safety work. (Note that if MIRI is correct, your understanding of math may be much more important for AI safety research than your understanding of AI. In that case the previous plans I suggested might look less attractive. The best path might be to aim for a job doing AI work, and then, once you have it, start studying math relevant to AI safety part-time.)
BTW, the X-Risk Career Network is also a good place to ask questions like this. (Folks on that mailing list are probably better qualified than I am to answer, but they don’t browse LW that often.)
Thanks for your varied suggestions!
Actually, I’m more comfortable with MIRI-style math than with ML math, but the research group here is more interested in machine learning. If I recommended they look into provability logic, they would get big eyes and say “Whoa!”, but no more. If, however, I do ML research in the direction of AI safety, they get interested. (And they are getting interested, but (1) they can’t switch their research too quickly, and (2) I don’t know enough Japanese and the students don’t know enough English for any kind of lunchtime or hallway conversation about AI safety.)
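(For anyone who hasn’t run into provability logic before: this is standard textbook material, not anything specific to my group. The modal logic GL reads the box as “is provable in Peano arithmetic,” and its characteristic axiom is Löb’s theorem:

```latex
% GL (Gödel–Löb) provability logic: \Box p reads "p is provable in PA".
% Characteristic axiom, a formalization of Löb's theorem:
\Box(\Box p \rightarrow p) \rightarrow \Box p
% GL also has the distribution axiom K:
\Box(p \rightarrow q) \rightarrow (\Box p \rightarrow \Box q)
% and the necessitation rule: from p, infer \Box p.
```

MIRI’s interest comes from the way this logic constrains agents that reason about their own proofs, which is quite different in flavor from the statistics underlying most ML work.)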
It seems like Toyota has some interest in provably correct software: https://www.infoq.com/news/2015/05/provably-correct-software