I very much appreciate your efforts both in safety research and in writing this retrospective :)
For other people who are or will be in a similar position to you: I agree that focusing on producing results immediately is a mistake. I don’t think that trying to get hired immediately is a big mistake, but I do think that trying to get hired specifically at an AI alignment research organisation is very limiting, especially if you haven’t taken much time to study up on ML previously.
For example, I suspect that for most people there would be very little difference in overall impact between working as a research engineer on an AI safety team straight out of university, versus working as an ML engineer somewhere else for 1-2 years then joining an AI safety team. (Ofc this depends a lot on how much you already know + how quickly you learn + how much supervision you need).
Perhaps this wouldn’t suit people who only want to do theoretical stuff—but given that you say that you find implementing ML fun, I’m sad you didn’t end up going down the latter route. So this is a signal boost for others: there’s a lot of ways to gain ML skills and experience, no matter where you’re starting from—don’t just restrict yourself to starting with safety.
Thanks for adding your thoughts! I agree, it would have made sense to become an ML engineer somewhere, anywhere. I don’t remember why I dismissed that possibility at the time. NB: If I had not dismissed it, I would still have needed to get my head set straight about the job requirements by talking to an engineer or researcher at a company. Daniel Ziegler described a good way of doing this on the 80,000 Hours Podcast, which is summarized in ML engineering for AI safety & robustness: a Google Brain engineer’s guide to entering the field. Danny Hernandez expanded on that in a useful way in Danny Hernandez on forecasting and the drivers of AI progress.
After I left AI alignment, I thought about spending three months polishing my ML skills, then applying for ML engineering jobs, so that I could return to AI alignment later. – Exactly what you’re suggesting, only four years late. :-) – But given the Covid chaos and my income risk aversion, I decided to stick to my guns and get a software engineering job as soon as possible. Luckily, I ended up with a high-impact one, although not in x-risk.
Final note on why I think it was bad for me to try to get hired: it used to take me up to a week to get out an application, which distracted mightily from research work.