I think the concept of Pausing AI just feels unrealistic at this point.
Previous AI safety pause efforts (the GPT-2 release delay, the 2023 Open Letter calling for a 6-month pause) have come to be seen as false alarms and overreactions
Both industry and government are now strongly committed to an AI arms race
A lot of the non-AI-Safety opponents of AI want a permanent stop/ban in the fields they care about, not a pause, so a pause movement lacks natural allies
It’s not clear that meaningful technical AI safety work on today’s frontier AI models could have been done before those models existed; therefore a lot of technical AI safety researchers believe we still need to push capabilities further before a pause would truly be useful
PauseAI could gain substantial support if there’s a major AI-caused disaster, so it’s good that some people are keeping the torch lit for that possibility, but supporting it now means burning political capital for little reason. We’d get enough credit for “being right all along” just by having pointed out the risks ahead of time, and since we want to influence regulation and industry now, we shouldn’t make Pause demands that get us thrown out of the room. In an ideal world we’d spend more time understanding current models, though.
“supporting it now means burning political capital for little reason”
I think this is wrong: the cost in political capital for saying a pause is the best solution seems relatively low, especially if coupled with an admission that it’s not politically viable. What I see instead is people dismissing it as a useful idea even in theory, saying it would be bad if anyone took it seriously, and moving on from there. And if nothing else, that’s acting as a way to narrow the Overton window for other proposals!
I’m generally pretty receptive to “adjust the Overton window” arguments, which is why I think it’s good PauseAI exists, but I do think there’s a cost in political capital to saying “I want a Pause, but I am willing to negotiate”. It’s easy for your opponents to cite your public Pause support and then say, “look, they want to destroy America’s main technological advantage over its rivals” or “look, they want to bomb datacenters, they’re unserious”. (Yes, a Pause as typically imagined requires international treaties, but the attack lines would probably still work; there was tons of lying in the California SB 1047 fight, and we lost in the end.)
The political position AI safety has mostly taken on US regulation instead is “we just want some basic reporting and transparency”, which is much harder to argue against, achievable, and still pretty valuable.
I can’t say I know for sure this is the right approach to public policy. There’s a reason politics is a dark art: there’s a lot of triangulating between “real” and “public” stances, and it’s not costless to compromise your dedication to the truth like that. But I think it’s part of why there isn’t as much support for PauseAI as you might expect. (The other main part being what 1a3orn says: PauseAI is on the radical end of opinions within AI safety, and it’s natural there’d be a gap between moderates and them.)
Very briefly: the fact that “the political position AI safety has mostly taken” is a single stance is evidence that there’s no room even for other creative solutions, so we’ve failed hard at expanding that Overton window. And unless you are strongly confident that it is the only possibly useful strategy, that is a horribly bad position for the world to be in as AI continues to accelerate and likely eliminates other potential policy options.