Important updates to your model:
OpenAI recently hired Chris Olah (and his collaborator Ludwig Schubert), so *interpretability* is going to be a major and increasing focus at that org (not just deep RL). This is an important upcoming shift to have on your radar.
DeepMind has at least two groups doing safety-related research: the one we know of as “safety” is more properly the “Technical AGI Safety” team, but there is also a “Safe and Robust AI” team that works more on things like neural net verification and adversarial examples.
RE “General AI work in industry”: I’ve increasingly become aware of a number of somewhat-junior researchers who work in a safety-relevant area (learning from human preferences, interpretability, robustness, safe exploration, verification, adversarial examples, etc.) and who are indeed long-term-motivated (as determined once we say the right shibboleths at each other), but who aren’t on a “safety team”. This gives me more evidence that if you’re able to get a job anywhere within Brain or DeepMind (or honestly any other industry research lab), you can probably hill-climb your way to relevant mentorship and start doing relevant work.
Less important notes:
I’m at Google Brain right now, not OpenAI!
I wrote up a guide which I hope is moderately helpful in terms of what exactly one might do if one is interested in this path: https://80000hours.org/articles/ml-engineering-career-transition-guide/
Here’s a link for the CHAI research engineering post: https://humancompatible.ai/jobs#engineer
Thanks for the updates. Sorry about getting your organization wrong; I’ve changed that part.