OpenAI would love to hire more alignment researchers, but there just aren’t many great researchers out there focusing on this problem.
This may well be true, but it's hard to work on this problem directly unless you can train near-cutting-edge models. Otherwise you're limited to toy models, theory, or a totally different angle.
I've personally applied to the DeepMind scalable alignment team; they had a small, fixed headcount, which they filled with other people who I'm sure were better choices. But it's hard to become a better fit for those roles except by doing mostly unrelated research.
Do you have a list of research directions that you think are promising and feasible without already being inside an org with big models?