I think it’s important for AI safety initiatives to screen for participants who are very likely to go into AI safety research because:
AI safety initiatives eat up valuable free energy in the form of AI safety researchers, engineers, and support staff who could otherwise benefit other initiatives;
Longtermist funding is ~30% depleted post-FTX, and therefore the quality and commitment of participants funded by longtermist money are more important now;
Some programs, like MLAB, might counterfactually improve a participant’s ability to get hired as an AI capabilities researcher, which might mean the program’s contribution to alignment is too small relative to its contribution to accelerating capabilities.
These concerns might be addressed by:
Requiring all participants in MLAB-style programs to engage with AGISF first;
Selecting for existing ML talent in research programs (as MATS is trying to do) rather than building ML talent through engineer-upskilling programs;
Encouraging participants to seek non-longtermist funding and mentorship for their projects, perhaps by supporting academic research projects that draw on non-AI-safety ML research mentorship and funding for AI safety-relevant work;
Interviewing applicants to assess their motivations;
Offering ~30% less money (and slightly less prestige) than tech internships, to filter out people who would leave safety research and work on capabilities after the program.