AI x-risk reduction: why I chose academia over industry
I’ve been leaning towards a career in academia for >3 years, and recently got a tenure-track role at Cambridge. This post sketches out my reasoning for preferring academia over industry.
Thoughts on Industry Positions:
A lot of people working on AI x-risk seem to think it’s better to be in industry. I think the main arguments for that side of things are:
All the usual reasons for preferring industry, e.g. fewer non-research obligations and more resources.
AGI is expected to be built in industry (e.g. by OpenAI, Google, or DeepMind), and if you’re there, you can influence the decision-making around development and deployment.
I think these are good reasons, but far from definitive.
I’ll also note that nobody seems to be going to Google, even though they are arguably the most likely to develop AGI, since 1) they are bigger, publish more, and have more resources, and 2) they can probably “steal” from DeepMind to some extent (both sit under Alphabet). So if you ARE going to industry, please consider working for Google. Also Chinese companies.
My reasons for preferring academia:
Mentorship and exponential growth: In academia, you can mentor a lot more people, and this leads to a much higher rate of exponential growth. My quick estimate is that as an academic you can produce ~10 new researchers in 5 years; in industry, it’s more like ~3. I think you might also have significant, but hard-to-measure impact through teaching and other academic activities.
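To make the compounding explicit, here is a back-of-envelope sketch of the growth argument. The clean 5-year “generation time” and the assumption that each new researcher immediately goes on to mentor at the same rate are simplifications I’m introducing purely for illustration. If each researcher produces k new researchers per 5-year generation, then after t years one seed researcher has roughly

\[ N(t) = k^{t/5} \]

research “descendants”. Plugging in the estimates above, after 10 years that’s \(10^{2} = 100\) downstream researchers for an academic versus \(3^{2} = 9\) for someone in industry, and the gap widens every generation. Nobody sustains these rates indefinitely, of course; the point is just that even a modest difference in the per-generation multiplier compounds into an order-of-magnitude difference within a decade or two.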
Personal fit: Unlike (I think) most people in the field, I don’t like coding much. I’m also not a theoretician. I’m more of a big-picture “idea person”, and more of an extrovert. I like the idea of spending most of my time managing others, writing, giving talks, etc. I have far too many ideas to pursue on my own effectively. I also don’t like the idea of having a boss.
Better position for advocacy: There are many reasons I think academia makes for a better “bully pulpit”.
- A tenure track faculty position at a top-20 institution is higher status than a research scientist position.
- Many academics find employees of big tech companies somewhat suspect, e.g. viewing them as sell-outs or shills to some extent.
- None of the tech companies has a sufficiently credible commitment to reducing AI x-risk (or a credible grasp of what steps that would require) for my taste.
- Tech companies don’t support many forms of outspoken advocacy.
- Tech companies are unlikely to support governance efforts that threaten their core business model. But I think radical governance solutions are likely necessary, and that political activism in alliance with critics of big tech is likely necessary as well.
- Tenure provides much better job security than employment at tech companies.
Main crux: timelines?
A lot of people think academia only makes sense if you have longer timelines. I think this is likely true to some extent, but academia starts to look like a clear win once you look 5-10 years out, so you need to be quite confident in very short timelines to think industry is a better bet. Personally, I’m also quite pessimistic about our chances of success if timelines are that short; I think we have more leverage if timelines are longer, so it might make sense to hope that we’re lucky enough to live in a world where AGI is at least a decade away.
Conclusion:
I think the main cruxes for this choice are:
1) timelines
2) personal fit
3) expected source of impact.
I discussed (1) and (2) already. By (3), I mean roughly: “Do you expect the research you personally conduct/lead to be your main source of impact? Or do you think your influence on others (e.g. mentoring students, winning hearts and minds of other researchers and important decision makers) will have a bigger impact?” I think for most people, influencing others could easily be a bigger source of impact, and I think more people working on reducing AI x-risk should focus on that more.
But if someone has a clear research agenda, a model of how it will substantially reduce x-risk, and a well-examined belief that their counterfactual impact in pushing the agenda forward is large, then I think there’s a strong case for focusing on direct impact. I don’t think this really applies to me; all of the technical research I can imagine doing seems to have a fairly marginal impact.
I’ve discussed this question with a good number of people, and I think I’ve generally found my pro-academia arguments to be stronger than their pro-industry arguments (I think many of them would probably agree?). I’d love to hear arguments people think I’ve missed.
EDIT: in the above, I wanted to say something more like: “I think the average trend in these conversations has been for people to update in the direction of academia being more valuable than they thought coming into the conversation”. I think this is true and important, but I’m not very confident in it, and I know I’m not providing any evidence… so take it with a grain of salt, I guess :).