Overall, I believe this is a hard problem, and others have probably thought about it before.
I’m not sure people have seriously thought about this before; your perspective seems rather novel.
I think existing labs themselves are the best vehicle for grooming new senior researchers. Anthropic, Redwood Research, ARC, and probably other labs were all founded by ex-staff of the labs that existed at the time (though perhaps one shouldn’t credit OpenAI for “grooming” Paul Christiano to senior level, but anyway).
It’s unclear what field-building projects could incentivise labs to part with their senior researchers and let them spin off their own labs, or to groom senior researchers “faster”, so to speak.
If the theory that AI alignment is extremely competitive is right, then logically labs shouldn’t cling to their senior people too much (because it will be relatively easy to replace them), and senior researchers shouldn’t worry too much about starting their own projects, because they know they could assemble a very competent team very quickly.
It seems that only the funding for these new labs, and their organisational strategy, could be points of uncertainty that deter senior researchers from starting their own projects (apart from, of course, simply being content with the projects they are involved in at their current jobs, and with their level of influence on research agendas).
So maybe the best field-building project in this area would be for someone to offer knowledge about, and support through, founding, funding, and setting a strategy for new labs (ranging from brief informal consultation to more structured support, à la an “incubator for AI safety labs”), and to advertise this offering among the staff of existing AI labs.