However, many of these people might not have a sufficient “toolbox” or research experience to have much marginal impact in short-timeline worlds.
I think this is true for some people, but I also think people tend to overestimate the number of years it takes to have enough research experience to contribute.
I think a few people have been able to make useful contributions within their first year (though, in fairness, they generally had backgrounds in ML or AI, so they weren’t starting completely from scratch), and several highly respected senior researchers have only a few years of research experience (and they, on average, had less access to mentorship and infrastructure than today’s newcomers).
I also think people often overestimate the time it takes to become an expert in a specific area relevant to AI risk (like subtopics in compute governance, information security, etc.).
Finally, I think people should factor community growth and the neglectedness of AI risk into their estimates. Many people have gotten interested in AI safety in the last 1-3 years, and I expect many more will in the coming years. Being one researcher in a field of 300 seems more useful than being one researcher in a field of 1,500.
With all that in mind, I really like this exercise, and I expect that I’ll encourage people to do this in the future:
Write out your credences for AGI being realized in 2027, 2032, and 2042;
Write out your plans if you had 100% credence in each of 2027, 2032, and 2042;
Write out your marginal impact in lowering P(doom) via each of those three plans;
Work towards the plan that is the argmax of your marginal impact, weighted by your credence in the respective AGI timelines (a rough sketch of this calculation follows below).
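To make the last step concrete, here is a minimal sketch in Python of the weighted-argmax calculation. All of the credence and impact numbers are hypothetical placeholders, and the linear scoring (credence × marginal impact) is just one reasonable way to operationalize the weighting described above.

```python
# A minimal sketch of the exercise above. All numbers are hypothetical
# placeholders; substitute your own credences and impact estimates.

# Credence that AGI is realized by each year.
credence = {2027: 0.15, 2032: 0.35, 2042: 0.70}

# Your estimated marginal reduction in P(doom) if you committed fully
# to the plan tailored to that timeline (again, made-up numbers).
marginal_impact = {2027: 0.0001, 2032: 0.0005, 2042: 0.0010}

# Score each plan: credence in that timeline times the marginal impact
# the plan would have if that timeline turns out to be correct.
scores = {year: credence[year] * marginal_impact[year] for year in credence}

# Work towards the plan with the highest weighted score (the argmax).
best_year = max(scores, key=scores.get)
print(f"Plan to prioritize: the {best_year} plan (score {scores[best_year]:.6f})")
```

A more careful version might also credit each plan for whatever impact it retains under the other timelines, but the simple weighted argmax mirrors the exercise as stated.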