While I’m not a general fan of compute overhang arguments, I do think that it’s at least somewhat relevant in worlds where AI pauses come very close to when a system is able to automate at least the entire AI R&D process, if not the entire AI economy itself, and I do suspect realistic pauses imposed by governments will likely only come once a massive number of people lose their jobs, which can create incentives to shift toward algorithmic progress, and even small algorithmic progress might immediately blow up the pause agreement crafted in the aftermath of many people losing their jobs.
I think it would be very helpful to me if you broke that sentence up a bit more. I took a stab at it but didn’t get very far. Sorry for my failure to parse!

Basically, my statement in short terms is this: conditional on an AI pause happening because of massive job losses from an AI that is barely unable to take over the world, even small savings in compute via better algorithms (since algorithmic research isn’t banned) would incentivize more algorithmic research, which then lowers the compute required enough to make the AI pause untenable, and the AI takes over the world.
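To put toy numbers on that mechanism (every figure below is an illustrative assumption, not an estimate): a pause caps training compute, but continued algorithmic research keeps shrinking the FLOPs needed to reach the dangerous capability, and at some point the capped compute is enough.

```python
# Illustrative only: every number below is an assumption, not an estimate.
flop_cap = 1e26                 # hypothetical training-compute cap under the pause
dangerous_flops_today = 1e28    # hypothetical FLOPs needed today for takeover-capable AI

for efficiency_gain in (1, 3, 10, 30, 100, 300):
    required = dangerous_flops_today / efficiency_gain
    status = "UNDER the cap (pause no longer binds)" if required <= flop_cap else "still over the cap"
    print(f"{efficiency_gain:>4}x algorithmic gain: needs {required:.0e} FLOPs, {status}")
```

On these made-up numbers, a 100x algorithmic improvement is exactly where the cap stops binding, which is the scenario I’m worried about.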
So for this argument to be worth bringing up in some general context where a pause is discussed, the person arguing it should probably believe:
1. We are far and away most likely to get a pause only as a response to unemployment.
2. An AI that precipitates pause-inducing levels of unemployment is inches from automating AI R&D.
3. The period between implementing the pause and massive algorithmic advancements is long enough that we’re able to increase compute stock...
4. ...but short enough that we’re not able to make meaningful safety progress before algorithmic advancements make the pause ineffective (because, e.g., we regulated FLOPs and it now takes 100x fewer FLOPs to build the dangerous thing).
I think the joint probability of all these things is low, and I think their likelihood is sensitive to the terms of the pause agreement itself. I agree that the design of a pause should consider a broad range of possibilities and try to maximize its own odds of attaining its ends (Keep Everyone Alive).
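As a toy illustration of the conjunction worry: even if each condition is individually plausible, the product shrinks quickly. The credences below are placeholders rather than anyone’s actual estimates, and treating the four conditions as independent is itself a simplifying assumption.

```python
from math import prod

# Placeholder credences for the four conditions above (not anyone's actual numbers),
# treated as independent for simplicity.
p = {
    "pause happens only as a response to unemployment": 0.4,
    "that same AI is inches from automating AI R&D": 0.3,
    "window is long enough to grow the compute stock": 0.5,
    "but too short for meaningful safety progress": 0.5,
}

joint = prod(p.values())
print(f"joint probability of all four: {joint:.3f}")   # 0.4 * 0.3 * 0.5 * 0.5 = 0.030
```

Even with fairly generous placeholder credences, the conjunction lands in the low single digits of percent.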
I’m also not sure how this goes better in the no-pause world? Unless this person also has really high odds on multipolar going well and expects some Savior AI trained and aligned in the same length of time as the effective window of the theoretical pause to intervene? But that’s a rare position among people who care about safety ~at all; it’s kind of a George Hotz take or something...
(I don’t think we disagree; you did flag this as “...somewhat relevant in worlds where...”, which is often code for “I really don’t expect this to happen, but Someone Somewhere should hold this possibility in mind.” Just want to make sure I’m actually following!)
I think 1 and 2 are actually pretty likely, but I’m a lot less confident that 3 and 4 will actually happen.
A big reason for this is that I suspect one of the reasons people aren’t reacting to AI progress is that they assume it won’t take their job, so it will likely take massive job losses to make a lot of people care about AI. And depending on how concentrated AI R&D is, there’s a real possibility that AI will have fully automated AI R&D before massive job losses begin in a way that matters to regular people.
Cool! I think we’re in agreement at a high level. Thanks for taking the extra time to make sure you were understood.
In more detail, though:
I think I disagree with 1 being all that likely; there are just other things I could see happening that would make a pause or stop politically popular (e.g. warning shots, An Inconvenient Truth: AI Edition, etc.), likely not worth getting into here. I also think ‘if we pause it will be for stupid reasons’ is a very sad take.
I think I disagree with 2 being likely, as well; probably yes, a lot of the bottleneck on development is ~make-work that goes away when you get a drop-in replacement for remote workers, and also yes, AI coding is already an accelerant // effectively doing gradient descent on gradient descent (RLing the RL’d researcher to RL the RL...) is intelligence-explosion fuel. But I think there’s a big gap between the capabilities you need for politically worrisome levels of unemployment, and the capabilities you need for an intelligence explosion, principally because >30 percent of human labor in developed nations could be automated with current tech if the economics align a bit (hiring $200k+/year ML engineers to replace your $30k/year call center employee is only just now starting to make sense economically). I think this has been true of current tech since ~GPT-4, and that we haven’t seen a concomitant massive acceleration in capabilities on the frontier (things are continuing to move fast, and the proliferation is scary, but it’s not an explosion).
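A rough back-of-the-envelope for that economic claim; every figure below is an illustrative assumption rather than market data.

```python
# Illustrative assumptions, not market data.
ml_engineer_cost   = 200_000   # $/year, hypothetical fully-loaded cost of one ML engineer
call_center_salary = 30_000    # $/year per role that could be automated away
project_overhead   = 50_000    # $/year hypothetical inference / tooling / integration cost

def roles_to_break_even(engineer_cost: int, role_salary: int, overhead: int) -> int:
    """Smallest number of roles whose combined salaries cover the automation cost."""
    roles = 1
    while roles * role_salary < engineer_cost + overhead:
        roles += 1
    return roles

print(roles_to_break_even(ml_engineer_cost, call_center_salary, project_overhead))
# -> 9: one engineer-year has to automate roughly 9 such roles before this pencils out
```

The point is just that the margin is thin enough that modest changes in reliability or integration cost flip the sign, which fits ‘only just now starting to make sense economically’.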
I take “depending on how concentrated AI R&D is” to foreshadow that you’d reply to the above with something like: “This is about lab priorities; the labs with the most impressive models are the labs focusing the most on frontier model development, and they’re unlikely to set their sights on comprehensive automation of shit jobs when they can instead double down on frontier models and put some RL in the RL to RL the RL that’s been RL’d by the...”
I think that’s right about lab priorities. However, I expect the automation wave to mostly come from middlemen, consultancies, what have you, who take all of the leftover ML researchers not eaten up by the labs and go around automating things away individually. (Yes, maybe the frontier moves too fast for this to be right, because the labs just end up with a drop-in remote worker ‘for free’ as long as they keep advancing down the tech tree, but I don’t quite think this is true: human jobs are human-shaped, and buyers are going to want pretty rigorous role-specific guarantees from whoever’s selling this service, even if those guarantees are basically unnecessary, and the one-size-fits-all solution is going to have fewer buyers than the thing marketed as ‘bespoke’.)
In general, I don’t like collapsing the various checkpoints between here and superintelligence; there are all these intermediate states, and their exact features matter a lot, and we really don’t know what we’re going to get. ‘By the time we’ll have x, we’ll certainly have y’ is not a form of prediction that anyone has a particularly good track record making.
I think I disagree with 1 being all that likely; there are just other things I could see happening that would make a pause or stop politically popular (e.g. warning shots, An Inconvenient Truth: AI Edition, etc.), likely not worth getting into here. I also think ‘if we pause it will be for stupid reasons’ is a very sad take.
I generally don’t think the Inconvenient Truth movie mattered that much for addressing climate change compared to technological solutions like renewable energy, and it arguably made the issue a little more partisan (though environmentalism/climate change was already unusually partisan by then). I also think social movements aimed at AI have so far had less impact on AI safety than technical work (in a broad sense) has had on reducing doom, and I expect this trend to continue.
I think warning shots could scare the public, but I worry that the level of warning shot it would take falls in a fairly narrow band, and I also expect AI control to have a reasonable probability of containing human-level scheming models that do work, so I wouldn’t bet on this at all.
I agree it’s a sad take that “if we pause it will be for stupid reasons”, but I also think this is the very likely attractor if AI does become a salient subject in politics, because people hate nuance, and nuance matters way more than the average person wants to deal with on AI. (For example, I think the second species argument critically misses important differences that make the human-AI relationship friendlier than the human-gorilla relationship, and that’s without the subject even being politicized.)
To address this:
But I think there’s a big gap between the capabilities you need for politically worrisome levels of unemployment, and the capabilities you need for an intelligence explosion, principally because >30 percent of human labor in developed nations could be automated with current tech if the economics align a bit (hiring $200k+/year ML engineers to replace your $30k/year call center employee is only just now starting to make sense economically). I think this has been true of current tech since ~GPT-4, and that we haven’t seen a concomitant massive acceleration in capabilities on the frontier (things are continuing to move fast, and the proliferation is scary, but it’s not an explosion).
I think the key crux is that I believe GPT-4-level unreliability would doom any attempt to automate 30% of jobs; I think at most 0-1% of jobs could be automated with that tech, and while in principle you could improve reliability without improving capabilities much, I also don’t think the incentives yet favor that option.
In general, I don’t like collapsing the various checkpoints between here and superintelligence; there are all these intermediate states, and their exact features matter a lot, and we really don’t know what we’re going to get. ‘By the time we’ll have x, we’ll certainly have y’ is not a form of prediction that anyone has a particularly good track record making.
I agree with this sort of argument, and in general I am not a fan of collapsing the checkpoints between today’s AI and God-AIs, which I think was a big mistake MIRI made. But my main claim is that the checkpoints will be illegible enough to the average citizen that they don’t notice the progress until it’s too late, and that reliability improvements will in practice be coupled with capabilities improvements that matter to the intelligence explosion but aren’t very visible to the average citizen, for the reason Garrison Lovely describes here:
There’s a vibe that AI progress has stalled out in the last ~year, but I think it’s more accurate to say that progress has become increasingly illegible. Since 6/23, perf. on PhD level science questions went from barely better than random guessing to matching domain experts.
https://x.com/GarrisonLovely/status/1866945509975638493