Some quick takes:

“Pause AI” could refer to many different possible policies.
I think that if humanity avoided building superintelligent AI, we’d massively reduce the risk of AI takeover and other catastrophic outcomes.
I suspect that at some point in the future, AI companies will face a choice between proceeding more slowly with AI development than they’re incentivized to, and proceeding more quickly while imposing huge risks. In particular, I suspect it’s going to be very dangerous to develop ASI.
I don’t think that it would be clearly good to pause AI development now. This is mostly because I don’t think that the models being developed literally right now pose existential risk.
Maybe it would be better to pause AI development right now because this will improve the situation later (e.g. maybe we should pause until frontier labs implement good enough security that we can be sure their Slack won’t be hacked again, leaking algorithmic secrets). But this is unclear and I don’t think it immediately follows from “we could stop AI takeover risk by pausing AI development before the AIs are able to take over”.
Many of the plausible “pause now” actions seem to overall increase risk. For example, I think it would be bad for relatively responsible AI developers to unilaterally pause, and I think it would probably be bad for the US to unilaterally force all US AI developers to pause if they didn’t simultaneously somehow slow down non-US development.
(They could slow down non-US development with actions like export controls.)
Even in the cases where I support something like pausing, it’s not clear that I want to spend effort on the margin actively supporting it; maybe there are other things I could push on instead that have better ROI.
I’m not super enthusiastic about PauseAI the organization; they sometimes seem to be poorly informed, they sometimes argue for conclusions that I think are wrong, and I find Holly pretty unpleasant to interact with, because she seems uninformed and prone to IMO unfair accusations that I’m conspiring with AI companies. My guess is that there could be an organization with similar goals to PauseAI that I’d feel much more excited about.
> I think it would probably be bad for the US to unilaterally force all US AI developers to pause if they didn’t simultaneously somehow slow down non-US development.
It seems to me that to believe this, you have to believe that all four of these things are true:
1. Solving AI alignment is basically easy
2. Non-US frontier AI developers are not interested in safety
3. Non-US frontier AI developers will quickly catch up to the US
4. If US developers slow down, then non-US developers are very unlikely to also slow down, either voluntarily, or because the US strong-arms them into signing a non-proliferation treaty, or whatever
I think #3 is sort of true and the others are probably false, so the probability of all four being simultaneously true is quite low.
(Statements I’ve seen from Chinese developers lead me to believe that they are less interested in racing and more concerned about safety.)
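(To illustrate with invented numbers: even if each of the three “probably false” claims had a 30% chance of being true, and #3 were certain, the conjunction would be at most 0.3³ ≈ 3%, assuming the claims are independent.)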
I made a quick Squiggle model on racing vs. slowing down. Based on my first-guess parameters, it suggests that racing to build AI destroys ~half the expected value of the future compared to not racing. Parameter values are rough, of course.
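For concreteness, here is a minimal sketch (in Python rather than Squiggle, so it’s self-contained) of the kind of expected-value comparison a model like this might encode. The probabilities are the first-guess parameters paraphrased in the reply below; the structure, and the assumption that a non-US “win” is worth half as much, are illustrative guesses, not the actual model:

```python
# Illustrative expected-value comparison for racing vs. slowing down.
# Parameters are the rough first-guess values quoted in the reply below;
# the value weights are illustrative assumptions.

def expected_value(p_us_wins, p_alignment_solved,
                   value_if_us_wins=1.0, value_if_other_wins=0.5):
    # Assume the future only has value if alignment is solved, and that
    # a non-US "win" captures half as much value.
    winner_value = (p_us_wins * value_if_us_wins
                    + (1 - p_us_wins) * value_if_other_wins)
    return p_alignment_solved * winner_value

ev_race = expected_value(p_us_wins=0.75, p_alignment_solved=0.25)
ev_slow = expected_value(p_us_wins=0.70, p_alignment_solved=0.50)

print(f"EV(race) = {ev_race:.3f}")            # 0.219
print(f"EV(slow) = {ev_slow:.3f}")            # 0.425
print(f"ratio    = {ev_race / ev_slow:.2f}")  # ~0.5: racing loses about half the EV
```

Under these inputs the race scenario retains roughly half the expected value of the slower one, matching the ~half figure above.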
I disagree that you have to believe those four things in order to believe what I said. I believe some of those and find others too ambiguously phrased to evaluate.
Re your model: I think your model is basically just: if we race, we go from a 70% chance that the US “wins” to a 75% chance, and from a 50% chance of “solving alignment” to a 25% chance? Idk how to apply that here: isn’t your Squiggle model talking about whether racing is good, rather than whether unilaterally pausing is good? Maybe you’re using “race” to mean “not pause” and “not race” to mean “pause”; if so, that’s super confusing terminology. If we unilaterally paused indefinitely, surely we’d have less than a 70% chance of winning.
In general, I think you’re modeling this extremely superficially in your comments on the topic. I wish you’d try modeling this with more granularity than “is alignment hard” or whatever. I think that if you try to actually make such a model, you’ll likely end up with a much better sense of where other people are coming from. If you’re trying to do this, I recommend reading posts where people explain strategies for passing safely through the singularity, e.g. like this.
> isn’t your Squiggle model talking about whether racing is good, rather than whether unilaterally pausing is good?
Yes, the model is more about racing than about pausing, but I thought it was applicable here. My thinking was that there is a spectrum of development speed with “completely pause” on one end and “race as fast as possible” on the other. Pushing more toward the “pause” side of the spectrum has the ~opposite effect of pushing toward the “race” side.
> I wish you’d try modeling this with more granularity than “is alignment hard” or whatever
I’ve never seen anyone else try to quantitatively model it. As far as I know, my model is the most granular quantitative model of this ever made. Which isn’t to say it’s particularly granular (I spent less than an hour on it), but this feels like an unfair criticism.
In general I am not a fan of criticisms of the form “this model is too simple”. All models are too simple. What, specifically, is wrong with it?
I had a quick look at the linked post, and it seems to be making some implicit assumptions, such as:
- the plan of “use AI to make AI safe” has a ~100% chance of working (the post explicitly says this is false, but then proceeds as if it’s true)
- there is a ~100% chance of slow takeoff
- if you unilaterally pause, this doesn’t increase the probability that anyone else pauses, doesn’t make it easier to get regulations passed, etc.
I would like to see some quantification of the form: “we think there is a 30% chance that we can bootstrap AI alignment using AI; a unilateral pause will only increase the probability of a global pause by 3 percentage points; and there’s only a 50% chance that the 2nd-leading company will attempt to align AI in a way we’d find satisfactory; therefore we think the least-risky plan is to stay at the front of the race and then bootstrap AI alignment.” (Or a more detailed version of that.)
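As a sketch of what that quantification might look like, here is a hypothetical back-of-the-envelope calculation that plugs in the three numbers from the example above; the model structure (a global pause counts as a good outcome, and the runner-up must both attempt alignment and succeed at bootstrapping) is an illustrative assumption:

```python
# Hypothetical version of the quantification asked for above.
# The three probabilities come from the example in the text;
# the model structure is an illustrative assumption.

p_bootstrap = 0.30        # chance that bootstrapping AI alignment using AI works
p_pause_spreads = 0.03    # added chance of a global pause from pausing unilaterally
p_runner_up_tries = 0.50  # chance the 2nd-leading company attempts alignment satisfactorily

# Plan A: stay at the front of the race, then bootstrap alignment.
p_good_race = p_bootstrap

# Plan B: pause unilaterally. Either the pause goes global (treated as a
# good outcome), or the runner-up takes the lead and must both attempt
# alignment and succeed at bootstrapping.
p_good_pause = (p_pause_spreads
                + (1 - p_pause_spreads) * p_runner_up_tries * p_bootstrap)

print(f"P(good | race)  = {p_good_race:.3f}")   # 0.300
print(f"P(good | pause) = {p_good_pause:.3f}")  # ~0.175
```

Under these particular inputs, staying at the front does come out ahead (~0.30 vs. ~0.18), which is exactly the kind of disagreement over inputs that explicit quantification would surface.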
I think we basically agree, but I think the Overton window needs to be expanded, and Pause is (unfortunately) already outside that window. So I differentiate between the overall direction, which I support strongly, and the concrete proposals and the organizations involved.