If people start losing jobs from automation, that could finally build political momentum for serious regulation.
Suggested in Zvi’s comments the other month (22 likes):
The real problem here is that AI safety feels completely theoretical right now. Climate folks can at least point to hurricanes and wildfires (even if connecting those dots requires some fancy statistical footwork). But AI safety advocates are stuck making arguments about hypothetical future scenarios that sound like sci-fi to most people. It’s hard to build political momentum around “trust us, this could be really bad, look at this scenario I wrote that will remind you of a James Cameron movie”
Here’s the thing though—the e/acc crowd might accidentally end up doing AI safety advocates a huge favor. They want to race ahead with AI development, no guardrails, full speed ahead. That could actually force the issue. Once AI starts really replacing human workers—not just a few translators here and there, but entire professions getting automated away—suddenly everyone’s going to start paying attention. Nothing gets politicians moving like angry constituents who just lost their jobs.
Here’s a wild thought: instead of focusing on theoretical safety frameworks that nobody seems to care about, maybe we should be working on dramatically accelerating workplace automation. Build the systems that will make it crystal clear just how transformative AI can be. It feels counterintuitive—like we’re playing into the e/acc playbook. But like extreme weather events create space to talk about carbon emissions, widespread job displacement could finally get people to take AI governance seriously. The trick is making sure this wake-up call happens before it’s too late to do anything about the bigger risks lurking around the corner.
Source: https://thezvi.substack.com/p/the-paris-ai-anti-safety-summit/comment/92963364
Just skimming the thread, I didn’t see anyone offer a serious attempt at counterargument, either.
Rather than make things worse as a means of compelling others to make things better, I would rather just make things better.
Brinksmanship and accelerationism (in the Marxist sense) are high-variance strategies ill-suited to the stakes of this particular game.
[One way this makes things worse is by stimulating additional investment on the frontier; another is by attracting public attention to the wrong problem, which will mostly just generate action on that problem, not on the problem we care most about. Importantly, the contingent of people-mostly-worried-about-jobs are not yet our allies, and their regulatory priorities would likely not address our concerns, even though I share some of those concerns.]
My guess would be that making RL envs for broad automation of the economy is bad[1] and making benchmarks which measure how good AIs are at automating jobs is somewhat good[2].
Regardless, IMO this seems worse for the world than other activities Matthew, Tamay, and Ege might do.
[1] I’d guess the skills will transfer to AI R&D etc. insofar as the environments are good. For broad automation that doesn’t transfer (which would be somewhat confusing/surprising), I’m uncertain about the sign: it comes down to raising public awareness earlier versus speeding up AI development through increased investment.
[2] It’s probably better not to make these benchmarks easy to iterate on, and to focus instead on determining and forecasting whether models have high levels of threat-model-relevant capability. Being able to precisely compare models with similar performance isn’t directly important.
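As a rough illustration of that kind of threshold-style reporting, here is a minimal Python sketch; the `Task` type, `run_model` callable, and threshold value are hypothetical placeholders rather than anything from an existing benchmark. The harness reports only whether the pass rate on held-out tasks clears a pre-registered threshold, with no per-task breakdown to iterate against.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Task:
    name: str                       # e.g. a held-out job-automation scenario (hypothetical)
    passed: Callable[[str], bool]   # grades a model transcript as pass/fail

def threshold_report(run_model: Callable[[Task], str],
                     tasks: Sequence[Task],
                     threshold: float = 0.8) -> dict:
    """Report only whether the pass rate clears a pre-registered threshold."""
    passes = sum(task.passed(run_model(task)) for task in tasks)
    rate = passes / len(tasks)
    # Deliberately omit per-task scores so the benchmark is harder to iterate against.
    return {"clears_threat_model_threshold": rate >= threshold}
```

Usage would look like `threshold_report(lambda t: my_agent.run(t.name), held_out_tasks)`, where `my_agent` and `held_out_tasks` are likewise hypothetical.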
Update: they want “to build virtual work environments for automating software engineering—and then the rest of the economy.” Software engineering seems like one of the few things I really think shouldn’t accelerate :(.
Accelerating AI R&D automation would be bad. But they want to accelerate misc labor automation. The sign of this is unclear to me.
Their main effect will be to accelerate AI R&D automation, as best I can tell.