Do you think there are some cultural factors that ought to be examined to figure out why scaling labs are so much more attractive than options that, at least to me, seem more impactful in expectation?
As a naive guess, I would consider the main reasons to be:
People seeking jobs in AI safety often want to take on “heroic responsibility.” Work on evals and policy, while essential, might be seen as “passing the buck” onto others, often at scaling labs, who have to “solve the wicked problem of AI alignment/control” (quotes indicate my caricature of a hypothetical person). Anecdotally, I’ve often heard people in the community disparage AI safety strategies that primarily “buy time” without “substantially increasing the odds AGI is aligned.” Programs like MATS that emphasize the importance of AI governance and include AI strategy workshops might help shift this mindset, if it exists.
Roles in AI gov/policy, while impactful at reducing AI risk, likely have worse quality-of-life features (e.g., wages, benefits, work culture) than similarly impactful roles in scaling labs. People seeking jobs in AI safety might choose between two high-impact roles based on these salient features without considering how many others making the same decision will affect the talent flow in aggregate. Programs like MATS might contribute to this problem, but only if the labs keep hiring talent (unlikely given poor returns on scale) and the AI gov/policy orgs don’t make attractive offers (unlikely given METR and Apollo offer pretty good wages, high status, and work cultures comparable to labs; AISIs might be limited because government roles don’t typically pay well, but it seems there are substantial status benefits to working there).
AI risk might be particularly appealing as a cause area to people who are dispositionally and experientially suited to technical work, and scaling labs might be the most impactful place to do many varieties of technical work. Programs like MATS are definitely not a detriment here, as they mostly attract individuals who were already going to pursue technical careers, expose them to governance-adjacent research like evals, and recommend potential careers in AI gov/policy.