Minor: The first linked post is not about pausing AI development. It mentions various interventions for “buying time” (like evals and outreach) but it’s not about an AI pause. (When I hear the phrase “pausing AI development” I think more about the FLI version of this which is like “let’s all pause for X months” and less about things like “let’s have labs do evals so that they can choose to pause if they see clear evidence of risk”.)
At a basic level, we want to estimate how much worse (or, perhaps, better) it would be for the United States to completely cede the race for TAI to the PRC.
My impression is that (most? many?) pause advocates are not talking about completely ceding the race to the PRC. I would guess that if you asked (most? many?) people who describe themselves as “pro-pause”, they would say things like “I want to pause to give governments time to catch up and figure out what regulations are needed” or “I want to pause to see if we can develop AGI in a more secure way, such as (but not limited to) something like MAGIC.”
I doubt many of them would say “I would be in favor of a pause if it meant that the US stopped doing AI development and we completely ceded the race to China.” I would suspect many of them might say something like “I would be in favor of a pause in which the US sees if China is down to cooperate, but if China is not down to cooperate, then I would be in favor of the US lifting the pause.”
FWIW, I don’t think this super tracks my model here. My model is “Ideally, if China is not down to cooperate, the U.S. threatens conventional escalation in order to get China to slow down as well, while being very transparent about not planning to develop AGI itself”.
Political feasibility of this does seem low, but it seems valuable and important to be clear about what a relatively ideal policy would be. And honestly, I don’t think it’s an implausible outcome: I think AGI is terrifying, and as that becomes more obvious, it seems totally plausible for the U.S. to threaten escalation towards China if China is developing vastly superior weapons of mass destruction while the U.S. stays away from the technology itself.