I don’t have well-considered cached numbers, just a vague sense of how close various things feel. These are made up on the spot, so please don’t take them too seriously except as ballpark estimates:
AI can go from most GitHub issues to correct PRs (similar to https://sweep.dev/, but working for things that would take a human dev a few days with a bunch of debugging): 25% by end of 2026, 50% by end of 2028.
This kind of thing seems to me like plausibly one of the earliest important parts of AI R&D that AIs could mostly automate.
I expect that once we’re at roughly that point, AIs will be accelerating further AI development significantly (not just through coding; they’ll also be helpful for other things even if they can’t fully automate them yet). On the other hand, the bottleneck might just become compute. So how long it takes to get from there to strongly superhuman AI (assuming for simplicity that labs push for that as fast as they can) depends on a lot of factors: how much compute that requires with current algorithms, how much we can get out of algorithmic improvements if AIs make researcher time cheaper relative to compute, and how quickly we can get more/better chips (in particular with AI help).
So I have pretty big error bars on this part, but call it 25% that it takes <=6 months to get from the previous milestone to automating ~every economically important thing humans do (and being better and way faster at most of them), and 50% that it takes <=2 years.
So if you want a single number: end of 2030 as a median for automating most stuff seems roughly right to me at the moment (roughly the end-of-2028 median above plus the ~2-year median for the step after it).
One caveat: I haven’t factored in big voluntary or regulatory slowdowns here, or slowdowns from huge disruptions like major wars. That probably doesn’t change my numbers by a ton, but it would lengthen timelines a bit.
How much time do you think there is between “ability to automate” and “this has actually been automated”? Are your numbers for actual automation, or just the ability? I’d personally agree with your numbers if they’re about the ability to automate, but I think actual automation will take much longer, due to people’s inertia and normal regulatory hurdles. (Though I find this confusing to think about, because we might have vastly superhuman AI, and potentially loss of control, before everything is actually automated.)
Good question; I think I was mostly visualizing ability to automate while writing this. Though for software development specifically I expect the gap to be pretty small: lower regulatory hurdles than elsewhere, high relevance to the people who’d do the automating, and it’s already starting to happen right now.
In general I’d expect inertia to become less of a factor as the benefits of AI become bigger and more obvious. At least for important applications where AI could provide many billions of dollars of economic value, I’d guess it won’t take too long for someone to reap those benefits.
My best guess is that regulations won’t slow this down too much, except in a few domains that are already regulated (like driving cars or medicine). But I’m pretty unsure about that.
I also think it depends on what you mean by “ability to automate”: “this base model could do it with exactly the right scaffolding or finetuning” vs. “we actually know how to do it and it’s just a question of using it at scale”. I was thinking more about the latter.