AI x-risk reduction: why I chose academia over industry
I’ve been leaning towards a career in academia for >3 years, and recently got a tenure track role at Cambridge. This post sketches out my reasoning for preferring academia over industry.
Thoughts on Industry Positions:
A lot of people working on AI x-risk seem to think it’s better to be in industry. I think the main arguments for that side of things are:
All the usual reasons for preferring industry, e.g. fewer non-research obligations and more resources.
AGI is expected to be built in industry (e.g. by OpenAI, Google, or DeepMind), and if you’re there, you can influence the decision-making around development and deployment.
I think these are good reasons, but far from definitive.
I’ll also note that nobody seems to be going to Google, even though they are arguably the most likely to develop AGI, since 1) they are bigger, publish more, and have more resources, and 2) they can probably steal from DeepMind to some extent. So if you ARE going to industry, please consider working for Google. Also consider Chinese companies.
My reasons for preferring academia:
Mentorship and exponential growth: In academia, you can mentor many more people, which leads to a much higher rate of exponential growth. My quick estimate is that as an academic you can produce ~10 new researchers in 5 years, whereas in industry it’s more like ~3. I think you might also have a significant but hard-to-measure impact through teaching and other academic activities.
Personal fit: Unlike (I think) most people in the field, I don’t like coding much. I’m also not a theoretician. I am more of a “big picture” idea person, and more of an extrovert. I like the idea of spending most of my time managing others, writing, giving talks, etc. I have far too many ideas to pursue effectively on my own. I also don’t like the idea of having a boss.
Better position for advocacy: There are many reasons I think academia makes for a better “bully pulpit”.
- A tenure track faculty position at a top-20 institution is higher status than a research scientist position.
- Many academics find employees of big tech companies somewhat suspect, e.g. viewing them as sell-outs or shills to some extent.
- None of the tech companies has a commitment to reducing AI x-risk (or an understanding of what steps that requires) that is sufficiently credible for my taste.
- Tech companies don’t support many forms of outspoken advocacy.
- Tech companies are unlikely to support governance efforts that threaten their core business model. But I think radical governance solutions are likely necessary, and that political activism in alliance with critics of big tech is likely necessary as well.
- Tenure provides much better job security than employment at tech companies.
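The mentorship numbers above (~10 new researchers per 5 years in academia vs. ~3 in industry) can be turned into a toy compounding model. The assumption that every trainee goes on to mentor at the same rate is mine and is clearly too strong, but it illustrates why the gap compounds:

```python
def field_size(initial: int, new_per_mentor_per_gen: int, generations: int) -> int:
    """Toy compounding model: every researcher in the field trains
    `new_per_mentor_per_gen` new researchers per generation (~5 years),
    and each trainee goes on to mentor at the same rate."""
    size = initial
    for _ in range(generations):
        size += size * new_per_mentor_per_gen  # existing researchers each add trainees
    return size

# Starting from one person, over 3 generations (~15 years):
academia = field_size(1, 10, 3)  # (1 + 10)**3 = 1331
industry = field_size(1, 3, 3)   # (1 + 3)**3  = 64
```

Even if the true per-generation numbers are far smaller, any persistent per-mentor advantage grows geometrically over multiple generations, which is the point of the argument above.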
Main crux: timelines?
A lot of people think academia only makes sense if you have longer timelines. This is likely true to some extent, but I think academia starts to look like a clear win on timelines of 5-10 years or more, so you need to be quite confident in very short timelines for industry to be a better bet. Personally, I’m also quite pessimistic about our chances of success if timelines are that short; I think we have more leverage if timelines are longer, so it might make sense to hope that we’re lucky enough to live in a world where AGI is at least a decade away.
Conclusion:
I think the main cruxes for this choice are:
1) timelines
2) personal fit
3) expected source of impact
I discussed (1) and (2) already. By (3), I mean roughly: “Do you expect the research you personally conduct/lead to be your main source of impact? Or do you think your influence on others (e.g. mentoring students, winning hearts and minds of other researchers and important decision makers) will have a bigger impact?” I think for most people, influencing others could easily be a bigger source of impact, and I think more people working on reducing AI x-risk should focus on that more.
But if someone has a clear research agenda, a model of how it will substantially reduce x-risk, and a well-examined belief that their counterfactual impact on pushing the agenda forward is large, then I think there’s a strong case for focusing on direct impact. I don’t think this really applies to me; all of the technical research I can imagine doing seems to have a fairly marginal impact.
I’ve discussed this question with a good number of people, and I think I’ve generally found my pro-academia arguments to be stronger than their pro-industry arguments (I think probably many of them would agree?). I’d love to hear arguments people think I’ve missed.
EDIT: in the above, I wanted to say something more like: “I think the average trend in these conversations has been for people to update in the direction of academia being more valuable than they thought coming into the conversation”. I think this is true and important, but I’m not very confident in it, and I know I’m not providing any evidence… take it with a grain of salt I guess :).
I… think we’ve discussed this? But I don’t agree, at least insofar as the arguments are supposed to apply to me as well (so e.g. not the personal fit part).
Some potential disagreements:
I expect more field growth via doing good research that exposes more surface area for people to tackle, rather than via mentoring people directly. This is partly because people can get mentorship from generic PhD programs, and partly because a lot of my research aims to be conceptual clarification of the field as a whole. That second reason may not apply to you.
I wouldn’t say “radical governance solutions” or “political activism” are “likely necessary” (though I wouldn’t say they are “likely unnecessary” either, it just seems pretty uncertain).
I didn’t notice you talking about early research being more impactful than later research, which seems like an important factor to the extent that you’d do better and faster research in industry than in academia (as I believe you would).
You mention “all the usual reasons for preferring industry”—I want to note that those seem like pretty strong reasons; idk how strong you think those are. (I’d also note salary in addition to the ones you mention—even altruistically, you can donate a much larger amount on a typical industry salary.)
Personally, I find the “bully pulpit” argument for academia most persuasive.
Btw, planned summary for the Alignment Newsletter:
Yeah, we’ve definitely discussed it! Rereading what I wrote, I did not clearly communicate what I intended to... I wanted to say that “I think the average trend was for people to update in my direction”. I will edit it accordingly.
I think the strength of the “usual reasons” has a lot to do with personal fit and what kind of research one wants to do. Personally, I basically didn’t consider salary as a factor.
I think one thing to consider is that the two paths don’t have an equal % chance to succeed. Getting a tenure track position at a top 20 university is hard. Really hard. Getting a research scientist position is, based on my very uncertain and informal understanding, less hard.
This doesn’t seem so relevant to capybaralet’s case, given that he was choosing whether to accept an academic offer that was already extended to him.
Also, you can try for a top-20 tenure-track position and, if you don’t get it, “fail” gracefully into a research scientist position. The paths for the two of them are very similar (± 2 years of postdoctoral academic work).
When you say academia looks like a clear win within 5-10 years, is that assuming “academia” means “starting a tenure-track job now?” If instead one is considering whether to begin a PhD program, for example, would you say that the clear win range is more like 10-15 years?
Also, how important is being at a top-20 institution? If the tenure track offer was instead from University of Nowhere, would you change your recommendation and say go to industry?
Would you agree that if the industry project you could work on is the one that will eventually build TAI (or be one of the leading builders, if there are multiple) then you have more influence from inside than from outside in academia?
Yes.
My cut-off was probably somewhere between top-50 and top-100, and I was prepared to go anywhere in the world. If I couldn’t make it into the top 100, I think I would definitely have reconsidered academia. If you’re ready to go anywhere, I think it’s much easier to find somewhere with high EV (though you might have to move up the risk/reward curve a lot).
Yes. But ofc it’s hard to know if that’s the case. I also think TAI is a less important category for me than x-risk inducing AI.
Makes sense. I think we don’t disagree dramatically then.
Also makes sense—just checking, does x-risk-inducing AI roughly match the concept of “AI-induced potential point of no return” or is it importantly different? It’s certainly less of a mouthful so if it means roughly the same thing maybe I’ll switch terms. :)
um sorta modulo a type error… risk is risk. It doesn’t mean the thing has happened (we need to start using some sort of phrase like “x-event” or something for that, I think).
I’ve started using the phrase “existential catastrophe” in my thinking about this; “x-catastrophe” doesn’t really have much of a ring to it though, so maybe we need something else that abbreviates better?
My personal thought is:
a. Fundamental algorithms are cool and critical contributions to CS. But where did the stuff we have now come from? Arguably, what we have now is (1) absurdly powerful silicon devices and (2) a large amount of open-source and proprietary software. Almost all of this was developed by industry or by non-academic open-source contributors.
b. What will it take to make AGI a reality? I think it is like comparing Wernher von Braun’s work before he had a nation backing him with his work after. I think the most probable route to AGI is as follows:
1. Today we are trying to build practical systems that work as [sensor inputs] → [intermediate state abstractions: e.g., collidable objects on a grid around the agent] → [goal state abstractions: e.g., predicted $ for each path the agent takes, with negative dollars for risks]. They are fairly simple.
2. I think AGI will essentially be “more meta”. There will be many more feeder subsystems that supply more complex intermediate states. And more layers of meta states that ultimately result in high level abstractions like ‘awareness’ and ‘self desires’ and so on.
To me all this looks like immense scale. You need a gigantic software infrastructure that gets reused thousands of times over (a modern example: WordPress), a massive hardware infrastructure to host it, and giga-dollar budgets.
Also, I think that many of the fundamental R&D steps—finding better activation functions, better neural network architectures, optimal configurations for a given problem, alternative algorithms—can be carried out better by massive autonomous systems that explore the possibility space. I don’t think human researchers will be able to contribute much directly. Here’s one paper as an example.
If you want to be a researcher who can exploit this you need to be a damn good programmer, and you need a big budget for cloud runs.
c. Exponential progress is going to come not from throwing more humans at the problem, but from building clever software that bootstraps early progress in AI into further progress. For example, a neural network that generates candidate functions for regression—which may not even be neural networks—to solve general regression problems.
What are your thoughts for subfields of ML where research impact/quality depends a lot on having lots of compute?
In NLP, many people have the view that almost all of the high impact work has come from industry over the past 3 years, and that the trend looks like it will continue indefinitely. Even safety-relevant work in NLP seems much easier to do with access to larger models with better capabilities (Debate/IDA are pretty hard to test without good language models). Thus, safety-minded NLP faculty might end up in a situation where none of their direct work is very impactful, but all of the expected impact is by graduating students who end up going to work in industry labs in particular. How would you think about this kind of situation?
You can try to partner with industry, and/or advocate for big government $$$.
I am generally more optimistic about toy problems than most people, I think, even for things like Debate.
Also, scaling laws can probably help here.
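One way scaling laws can help a compute-limited academic lab: fit a power law to results from small models and extrapolate to scales you can’t afford to run. A minimal sketch with made-up numbers (the data points, the exact power-law form, and the fitted coefficients are all illustrative, not from any particular paper):

```python
import math

# Toy losses measured at small model sizes, assumed to follow L(N) = a * N**(-b)
data = [(1e6, 4.0), (1e7, 3.2), (1e8, 2.56)]

# Fit log L = log(a) - b * log(N) by closed-form least squares
xs = [math.log(n) for n, _ in data]
ys = [math.log(l) for _, l in data]
k = len(data)
mean_x, mean_y = sum(xs) / k, sum(ys) / k
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
a = math.exp(mean_y - slope * mean_x)
b = -slope  # power-law exponent

def predicted_loss(n_params: float) -> float:
    """Extrapolated loss at a parameter count too large to train directly."""
    return a * n_params ** (-b)
```

With a fit like this, small-scale experiments (affordable in academia) can at least bound what a much larger run would buy, though of course real scaling curves can break in ways a three-point toy fit won’t reveal.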
Maybe we need a “something else” category? An alternative other than simply business/industry and academics?
Also, while this is maybe something of an old topic, I took some notes on my thoughts about this topic and related matters and posted them to:
https://mflb.com/ai_alignment_1/academic_or_industry_out.pdf