I think many early-career researchers in AI safety are undervaluing PhDs.
I agree with this. To be blunt, my impression from reading LW for the last year is that a few people in this community have a bit of a chip on their shoulder re: academia. It certainly has its problems, and academics love nothing more than pointing them out to each other, but you face your problems with the tools you have, and academia is the only system for producing high quality researchers that is going to exist at scale over the next few years (MATS is great, I’m impressed by what Ryan and co are doing, but it’s tiny).
I would like to see many more academics in CS, math, physics and adjacent areas start supervising students in AI safety, and more young people go into those PhDs. Also, more people with PhDs in math and physics transitioning to AI safety work.
One problem is that many of the academics who are willing to supervise PhD students in AI safety or related topics are evaporating into industry positions (subliming?). There are also long-run trends that make academia relatively less attractive than it was in the past (e.g. rising corporatisation), even putting aside salary comparisons and access to compute. So I do worry somewhat about how many PhD students in AI safety adjacent fields can actually be produced per year this decade.
and academia is the only system for producing high quality researchers that is going to exist at scale over the next few years
To be clear, I am not happy about this, but I would take bets that industry labs will produce and train many more AI alignment researchers than academia, so this statement seems relatively straightforwardly wrong. (And of course we can quibble over the quality of researchers produced by different institutions, but my guess is that the industry-trained researchers will perform well at least by your standards, if not mine.)
Do you mean the industry labs will take people with MSc and PhD qualifications in CS, math, physics, etc., and retrain them to be alignment researchers, or do you mean the labs will hire people with undergraduate degrees (or no degree) and train them internally to be alignment researchers?
I don’t know how OpenAI or Anthropic look internally, but I know a little about Google and DeepMind through friends, and I have to say the internal incentives and org structure don’t strike me as a very natural environment for producing researchers from scratch.
OpenAI and Anthropic often hire people without PhDs (often undergraduate degrees, sometimes master’s, rarely no undergrad).
Edit: And I think these people in practice get at least some research mentorship.
They typically have some prior work/research/ML experience, but not necessarily any specific one of these.
My personal anecdote as one of the no-undergrad people: I got into ML research on my own and published papers without much research mentorship, and then joined OpenAI. My background is definitely more in engineering than research, but I’ve spent a substantial amount of time exploring my own research directions. I get direct mentorship from my manager, but I also seek out advice from many other researchers in the organization, which I’ve found to be valuable.
My case is quite unusual, so I would caution about drawing generalized conclusions about what to do based on my experience.
Yes. Besides Deepmind, none of the industry labs require PhDs, and I think the Deepmind requirement has also been loosening a bit.
I don’t think Deepmind has ever required a PhD for research engineers, just for research scientists.
In practice these roles are pretty different at Deepmind, from my cached understanding. (At least on many Deepmind teams?)
I was talking about research scientists here (though my sense is that 5 years as a research engineer is comparable to, and probably somewhat better than, most PhDs for gaining research skills). I also had a vague sense that at Deepmind being a research engineer was particularly bad for gaining research skills (compared to the same role at OpenAI or Anthropic).
(Yep, wasn’t trying to disagree with you, just clarifying.)