My argument here is very related to what jacquesthibs mentions.
Right now it seems like the biggest bottleneck for the AI Alignment field is senior researchers. There are tons of junior people joining the field, and there are many opportunities for them to up-skill and do programs for a few months (e.g. SERI MATS, MLAB, REMIX, AGI Safety Fundamentals, etc.). The big problem (in my view) is that there are not enough organizations to actually absorb all these rather “junior” people at the moment. My sense is that 80K and most programs encourage people to up-skill and then try to get a job at a big organization (like DeepMind, Anthropic, OpenAI, Conjecture, etc.). Realistically speaking, though, these organizations can only absorb a few people a year. In my experience, it’s extremely competitive to get a job at these organizations even if you’re a more experienced researcher (e.g. having done a couple of years of research, a Ph.D., or similar). This means that while there are many opportunities for junior people to get a start in the field, there are very few paths that actually allow you to have a full-time career in it (this also applies to more experienced researchers who don’t get into a big lab). So the bottleneck in my view is not having enough organizations, which is a result of not having enough senior researchers. Founding an org is super hard: you want to have experienced people, with good research taste and some kind of research agenda. So if you don’t have many senior people in a field, it will be hard to find people who can found those additional orgs.
Now, one career path that many people are currently taking is being an “independent researcher” funded through a grant. I would claim that this is currently the default path for any researcher who does not get a full-time position and wants to stay in the field. I believe that there are people out there who will do great as independent researchers and actually contribute to solving problems (e.g. Marius Hobbhahn and John Wentworth have talked about being independent researchers). I am, however, quite skeptical about most people doing independent research without any kind of supervision. I am not saying one can’t make progress, but it’s super hard to do without a lot of research experience, a structured environment, good supervision, etc. I am especially skeptical about independent researchers becoming great senior researchers if they can’t work with, and learn from, people who are already very experienced. Intuitively I think that no other field has junior people independently working without clear structures and supervision, so I feel like my skepticism is warranted.
In terms of career capital, being an independent researcher is also very risky. If your research fails, i.e. you don’t produce a lot of good output (papers, code libraries, or whatever), “having done independent research for a couple of years” will not look great on your CV. As a comparison, if you somehow do a very mediocre Ph.D. with no great insights but you do manage to get the title, at least you have that on your CV (having a Ph.D. can be pretty useful in many cases).
So overall I believe that decision makers and AI field builders should put their main attention on how we can “groom” senior researchers in the field and create more full-time positions through organizations. I don’t claim to have the answers on how to solve this, but it does seem like the greatest bottleneck for field building, in my opinion. The field has been able to get a lot more people excited about AI safety and to change their careers (though we still have far from enough people). However, right now I think that many people are kind of stuck as junior researchers, having done some programs but not being able to get full-time positions. Note that I am aware that some programs such as SERI MATS do, in some sense, have the ambition of grooming senior researchers. However, in practice, it still feels like there is a big gap right now.
My background (in case this is useful): I’ve been doing ML research throughout my Bachelor’s and Master’s. I’ve worked at FAR AI on “AI alignment” for the last 1.5 years, so I was lucky to get a full-time position. I don’t consider myself a “senior” researcher as defined in this comment, but I definitely have a lot of research experience in the field. From my own experience, it’s pretty hard to find a new full-time position in the field, especially if you are also geographically constrained.
Intuitively I think that no other field has junior people independently working without clear structures and supervision, so I feel like my skepticism is warranted.
Einstein had his miracle year in such a context.
Modern academia has few junior people working independently without clear structures and supervision, but before the Great Stagnation that happened more often.
Generally, pre-paradigmatic work is likely easier to do independently than post-paradigmatic work. That still means that most researchers won’t produce anything useful, but that’s common for academic work in general, and if a few researchers manage to do great paradigm-founding work it can still be worth it overall.
A few thoughts:
I agree that it would be great to have more senior researchers in alignment.
I agree that, ideally, it would be easier for independent researchers to get funding.
I don’t think it’s necessarily a bad thing that the field of AI alignment research is reasonably competitive.
My impression is that there’s still a lot of funding for (and a lot of interest in funding) independent alignment researchers.
My impression is that it’s still considerably easier to get funding for independent alignment research than many other forms of independent non-commercial research. For example, many PhD programs have acceptance rates <10% (and many require that you apply for independent grants or that you spend many of your hours as a teaching assistant).
I think the past ~2 months have been especially tough for people seeking independent funding, given that funders have been figuring out what to do in light of the FTX stuff & have been more overwhelmed than usual.
I am concerned that, in the absence of independent funding, people will be more inclined to join AGI labs even if that’s not the best option for them. (To be clear, I think some AGI lab safety teams are doing reasonable work. But I expect that they will obtain increasingly more money/prestige in the upcoming years, which could harm people’s ability to impartially assess their options, especially if independent funding is difficult to acquire.)
Overall, I empathize with concerns about funding, but I wish the narrative included (a) the fact that the field is competitive is not necessarily a bad thing and (b) funding is still much more available than for most other independent research fields.
Finally, I think part of the problem is that people often don’t know what they’re supposed to do in order to (honestly and transparently) present themselves to funders, or even which funders they should be applying to, or even what they’re able to ask for. If you’re in this situation, feel free to reach out! I often have conversations with people about career & funding options in AI safety. (Disclaimer: I’m not a grantmaker.)
Thanks for your comments, Akash. I think I have two main points I want to address.
I agree that it’s very good that the field of AI Alignment is very competitive! I did not want to imply that this is a bad thing. I was mainly trying to point out that, from my point of view, it seems like overall there are more qualified and experienced people than there are jobs at large organizations. In order to fill that gap we would need more senior researchers, who can then follow their own research agendas and hire people (and found orgs), which is, however, hard to achieve. One disclaimer I want to note is that I do not work at a large org, and I do not precisely know what hiring criteria they have, i.e. it is possible that in their view we still lack talented enough people. However, from the outside, it definitely does look like there are many experienced researchers.
It is possible that my previous statement was misinterpreted. I want to clarify that my concern is not that funding is a challenge. I did not want to make an assertion about funding in general, and if my words gave that impression, I apologize. I do not know enough about the funding landscape to know whether there is a lot of funding or not enough (especially in recent months).
I agree with you that, for all I know, it’s feasible to get funding as an independent researcher (and definitely easier than getting a Ph.D. position or a full-time job). I also agree that independent research in this field seems to be more heavily funded than in other fields.
My point was mainly the following:
Many people have joined the field (which is great!), or at least it looks like that from the outside. 80,000 Hours etc. still recommend switching to AI Alignment, so it seems likely that more people will join.
I believe that there are many opportunities for people to up-skill to a certain level if they want to join the field (SERI MATS, AI Safety Camp, etc.).
However, full-time positions (for example at big labs) are very limited. This also makes sense, since they can only hire so many people a year.
It seems like the most obvious option for people who want to stay in the field is to do independent research (and apply for grants). I think it’s great that people do independent research and that one has the opportunity to get grants.
However, doing independent research is not always ideal, for many reasons (as outlined in my main comment). Note I’m not saying it doesn’t make sense at all; it definitely has its merits.
In order to have more full-time positions we need more senior people, who can then found their own organizations, independently hire people, etc. Independent research does not seem to me like a promising avenue for grooming senior researchers. It’s essential that you can learn from people who are better than you and be in a good environment (yes, there are exceptions like Einstein, but I think most researchers I know would agree with that statement).
So to me, the biggest bottleneck of all is how we can get many great researchers and groom them into senior researchers who can lead their own orgs. I think that so far we have really optimized for getting people into the field (which is great). But we haven’t really found a solution for grooming senior researchers (again, some programs try to do that, and I’m aware that this takes time). Overall I believe that this is a hard problem and probably others have already thought about it. I’m just trying to make this point in case nobody has written it up yet. Especially if people are trying to do AI safety field building, it seems to me that coming up with ways to groom senior researchers is a top priority.
Ultimately I’m not even sure whether there is a clear solution to this problem. The field is still very new, and it’s amazing what has already happened. It’s probable that it just takes time for the field to mature and for people to get more experience. I mostly wanted to point this out, even if it is maybe obvious.
Overall I believe that this is a hard problem and probably others have already thought about it.
I’m not sure people seriously thought about this before, your perspective seems rather novel.
I think existing labs themselves are the best vehicle for grooming new senior researchers. Anthropic, Redwood Research, ARC, and probably other labs were all founded by ex-staff of labs that existed at the time (except that maybe one shouldn’t credit OpenAI for “grooming” Paul Christiano to senior level, but anyways).
It’s unclear what field-building projects could incentivise labs to part with their senior researchers and let them spin off their own labs. Or to groom senior researchers “faster”, so to speak.
If the theory that AI alignment is extremely competitive is right, then logically both the labs shouldn’t cling to their senior people too much (because it will be relatively easy to replace them), and senior researchers shouldn’t worry about starting their own projects too much because they know they can assemble a very competent team very quickly.
It seems that the only points of uncertainty that could deter senior researchers from starting their own projects are the funding for these new labs and their organisational strategy (apart from, of course, just being content with the project they are involved in at their current jobs, and their level of influence on research agendas).
So maybe the best field-building project that could be done in this area is someone offering knowledge about, and support through, founding, funding, and setting a strategy for new labs (which may range from brief informal consultation to more structured support, à la an “incubator for AI safety labs”), and advertising this offering among the staff of existing AI labs.
Overall, I empathize with concerns about funding, but I wish the narrative included (a) the fact that the field is competitive is not necessarily a bad thing and (b) funding is still much more available than for most other independent research fields.
I didn’t mention this in my comment, but I also agree with this. Apologies if it seemed otherwise. I was mostly expressing a bit of concern about how funding will be disbursed going forward, from a macro perspective.
Ah, thanks for the clarifications. I agree with the clarified versions :)
Quick note on getting senior researchers:
It seems like one of the main bottlenecks is “having really good models of alignment.”
It seems plausible to me that investing in junior alignment researchers today means we’ll increase the number of senior alignment researchers (or at least “people who are capable of mentoring new alignment researchers, starting new orgs, leading teams, etc.”).
My vibes-level guess is that the top junior alignment researchers are ready to lead teams within about a year or two of doing alignment research on their own. E.g. I expect some people in this post to be ready to mentor/advise/lead teams in the upcoming year. (And some of them already are.)
I’m definitely feeling like I sacrificed both income and career capital by deciding to do alignment research full time. I don’t feel like I’m being ‘hurt’ by the world though, I feel like the world is hurting. In a saner world, there would be more resources devoted to this, and it is to the world’s detriment that this is not the case. I could go back to doing mainstream machine learning if I wasn’t overwhelmed by a sense of impending doom and compelled by a feeling of duty to do what I can to help. I’m going to keep trying my best, but I would be a lot more effective if I were working as part of a group. Even just things like being able to share the burden of some of the boilerplate code I need to write in order to do my experiments would speed things up a lot, or having a reviewer to help point out mistakes to me.
Proposal: If other people are doing independent research in London I’d be really interested in co-working and doing some regular feedback and updates. (Could be elsewhere but I find being in person important for me personally). If anyone would be interested reply here or message me and we’ll see if we can set something up :)
General comment: This feels accurate to me. I’ve been working as an independent researcher for the last few months, after 9 months of pure skill building, and have come close to, but not succeeded in, getting jobs at the local research orgs in London (DeepMind, Conjecture).
It’s a great way to build some skills, having to build your own stack, but it’s also hard to build research skills without people with more experience giving feedback, and because iteration of ideas is slow, it’s difficult to know whether to stick with something or try something else.
In particular it forces you to be super proactive if you want to get any feedback.
I’m not in London, but aisafety.community (as far as I know the most comprehensive, and far too little-known, resource on AI safety communities) suggests the London AI Safety Hub. There are some remote alignment communities mentioned on aisafety.community as well. You might want to consider them as fallback options, but you probably already know most if not all of them.
Let me know if that’s at all helpful.
Cheers Severin, yeah that’s useful; I’d not seen aisafety.community (almost certainly my fault, I don’t do enough to find out what’s going on).
That Slack link doesn’t work for me though, it just asks me to sign into one of my existing workspaces.
Flagged the broken link to the team. I found this, which may or may not be the same project: https://www.safeailondon.org/
It’s not the same thing; the link was broken because Slack links expire after a month. Fixed for now.
I 100% agree with you.
I am a person entering the field right now. I also know several people in a position similar to mine, and there are just no positions for people like me, even though I think I am very proactive and have valuable experience.
Yep, the field is sort of underfunded, especially after the FTX crash. That’s why I suggested grantwriting as a potential career path.
In general, for newcomers to the field, I very strongly recommend booking a career coaching call with AI Safety Support. They have a policy of not turning anyone down, and quite a bit of experience in funneling newcomers at any stage of their career into the field. https://80000hours.org/ is also a worthwhile address, though they can’t make the time to talk with everyone.