A unilateral pause on large AI training runs in the West, without a pause on new computing hardware, would have more ambiguous impacts on global catastrophic risk. The primary negative effects on risk are leading to faster catch-up growth in a later period with more hardware and driving AI development into laxer jurisdictions.
Generally interesting and thoughtful post. This is timely and it’s updated me a bit on your (Paul’s) position on pausing.
The paragraph quoted above strikes me as wrong/lazy, though. I wrote a bit about this here and on my blog. Which laxer jurisdictions are poised to capture talent/hardware/etc. right now? It seems like ‘The West’ (interpreted as Silicon Valley) is close to the laxest jurisdiction on Earth when it comes to tech! (If we interpret ‘The West’ more broadly, this no longer holds, thankfully.)
I assume your caveat about ‘a pause on new computing hardware’ indicates you think business-as-usual capitalism means that unilaterally pausing capital-intensive frontier development doesn’t buy much, because hardware (and talent, data, etc.) will flow basically resistance-free to other places? This seems like a crux: one I don’t feel well-equipped to evaluate, but on which I think it’s appropriate to be quite uncertain.
Which laxer jurisdictions are poised to capture talent/hardware/etc. right now? It seems like ‘The West’ (interpreted as Silicon Valley) is close to the laxest jurisdiction on Earth when it comes to tech! (If we interpret ‘The West’ more broadly, this no longer holds, thankfully.)
If you implemented a unilateral pause on AI training runs in the West, then anyone who wasn’t pausing AI would be a much laxer jurisdiction.
Regarding the situation today: I don’t believe any jurisdiction has regulations that meaningfully reduce catastrophic risk, but the US, EU, and UK, which is what I’d call “the West,” seem by far the closest.
I assume your caveat about ‘a pause on new computing hardware’ indicates you think business-as-usual capitalism means that unilaterally pausing capital-intensive frontier development doesn’t buy much, because hardware (and talent, data, etc.) will flow basically resistance-free to other places? This seems like a crux: one I don’t feel well-equipped to evaluate, but on which I think it’s appropriate to be quite uncertain.
I think a unilateral pause in the US would slow down AI development materially; there is obviously a ton of resistance to development simply relocating. Over the long term, though, I do think you would bounce back significantly toward the previous trajectory via catch-up growth, despite that resistance, and the open question is more like whether that bounce-back is 10% or 50% or 90%. So I end up ambivalent: the value of a year of pause now is pretty low compared to the value of a year of pause later, and you are concentrating development in time and shifting it to places that are (by hypothesis) less inclined to regulate risk.
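To make the bounce-back arithmetic concrete, here is a minimal toy sketch (my own illustration, not anything from the post or thread): if a fraction f of the paused progress is later recovered through catch-up growth, a one-year pause nets roughly (1 − f) years of real delay.

```python
# Toy illustration (an assumption of this sketch, not a model from the post):
# net delay bought by a unilateral pause, if catch-up growth later recovers
# some fraction of the paused progress.

def net_delay_years(pause_years: float, bounce_back: float) -> float:
    """Effective delay to the frontier once catch-up growth is accounted for."""
    return pause_years * (1.0 - bounce_back)

# The hypothetical bounce-back fractions floated above.
for f in (0.1, 0.5, 0.9):
    print(f"bounce-back {f:.0%}: a 1-year pause nets "
          f"{net_delay_years(1.0, f):.1f} years of delay")
```

On this toy accounting, the value of pausing now turns almost entirely on the bounce-back fraction, which is exactly the open question flagged above; it ignores the separate costs of concentrating development in time and shifting it to laxer jurisdictions.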