I agree the discussion holds up well in terms of the remaining live cruxes. Since this exchange, my timelines have gotten substantially shorter. They’re now pretty similar to Ryan’s (mine feel a little slower, but the difference is within the noise from fuzzy operationalizations; I find it a bit hard to think about what exactly 10x labor inputs would look like).
The main reason they’ve gotten shorter is that performance on few-hour agentic tasks has moved almost twice as fast as I expected, and this seems broadly non-fake (i.e. it appears to be translating into real-world use with only a moderate lag rather than a huge lag), though this second part is noisier and more confusing.
This dialogue occurred a few months after METR released their pilot report on autonomous replication and adaptation tasks. At the time it seemed like agents (GPT-4 and Claude 3 Sonnet iirc) were starting to be able to do tasks that would take a human a few minutes (looking something up on Wikipedia, making a phone call, searching a file system, writing short programs).
Right around when I did this dialogue, I launched an agent benchmarks RFP to build benchmarks testing LLM agents on many-step real-world tasks. Through this RFP, in late 2023 and early 2024, we funded a bunch of agent benchmarks consisting of tasks that take experts between 15 minutes and a few hours.
Roughly speaking, I was expecting that the benchmarks we were funding would get saturated around early-to-late 2026 (within 2-3 years). By EOY 2024 (one year out), I had expected these benchmarks to be halfway toward saturation — qualitatively, I guessed that agents would be able to reliably perform moderately difficult 30-minute tasks as well as experts in a variety of domains but struggle with the 1-hour-plus tasks. This would have roughly been the same trajectory that the previous generation of benchmarks followed: e.g. MATH was introduced in Jan 2021, got halfway there in June 2022 (1.5 years), then saturated probably like another year after that (for a total of 2.5 years).
Instead, based on agent benchmarks like RE-Bench, Cybench, SWE-bench Verified, and various bio benchmarks, it looks like agents are already able to perform self-contained programming tasks that would take human experts multiple hours (although they perform these tasks in a more one-shot way than human experts do, and I’m sure there is a lot of jaggedness); these benchmarks seem on track to saturate by early 2025. If that holds up, it’d be about twice as fast as I would have guessed (1-1.5 years vs. 2-3 years).
There’s always some lag between benchmark performance and real-world use, and it’s very hard for me to gauge this lag myself because AI agents seem to be way more useful to programmers and ML engineers than to everyone else. But from friends who use AI systems regularly, it sounds like they are routinely assigning agents tasks that would take them between a few minutes and an hour and getting actual value out of them.
On a meta level I now defer heavily to Ryan and people in his reference class (METR and Redwood engineers) on AI timelines, because they have a similarly deep understanding of the conceptual arguments I consider most important while having much more hands-on experience with the frontier of useful AI capabilities (I still don’t use AI systems regularly in my work). Of course AI company employees have the most hands-on experience, but I’ve found that they don’t seem to think as rigorously about the conceptual arguments, and some of them have a track record of overshooting, predicting that AGI would arrive between 2020 and 2025 (as you might expect from their incentives and social climate).
One thing that I think is interesting, which doesn’t affect my timelines that much but cuts in the direction of slower: once again I overestimated how much real-world use anyone who wasn’t a programmer would get. I definitely expected an off-the-shelf agent product that would book flights and reserve restaurants and shop for simple goods, one that worked well enough that I would actually use it (and I expected that to happen before the one-hour-plus coding tasks were solved; I expected it to be concurrent with half-hour coding tasks).
I can’t tell if the fact that AI agents continue to be useless to me is a portent that the incredible benchmark performance won’t translate into real-world acceleration as well as the bullish people expect; I’m largely deferring to the consensus in my local social circle that it’s not a big deal. My personal intuitions are somewhat closer to what Steve Newman describes in this comment thread.
Anecdotally, it seems like folks are getting something like a 5-30% productivity boost from using AI; it does feel somewhat aggressive for that to go to a 10x productivity boost within a couple of years.
Of course AI company employees have the most hands-on experience
FWIW I am not sure this is right—most AI company employees work on things other than “try to get as much work as possible from current AI systems, and understand the trajectory of how useful the AIs will be”. E.g. I think I have more personal experience with running AI agents than people at AI companies who don’t actively work on AI agents.
There are some people at AI companies who work on AI agents that use non-public models, and those people are ahead of the curve. But that’s a minority.
Yeah, good point, I’ve been surprised by how uninterested the companies have been in agents.
Another effect here is that the AI companies often don’t want to be as reckless as I am, e.g. letting agents run amok on my machines.
Interestingly, tons of the skeptics I’ve talked to (e.g. Tim Lee, CSET people, AI Snake Oil) have told me that timelines to actual impacts in the world (such as significant R&D acceleration or industrial acceleration) are going to be way longer than we say, because AIs are too unreliable and risky and therefore people won’t use them. I was more dismissive of this argument before, but:
It matches my own lived experience (e.g. I still use search way more than LLMs, even to learn about complex topics, because I have good Google Fu and LLMs make stuff up too much).
As you say, it seems like a plausible explanation for why my weird friends get way more use out of coding agents than giant AI companies do.
I tentatively remain dismissive of this argument. My claim was never “AIs are actually reliable and safe now” such that your lived experience would contradict it. I too predicted that AIs would be unreliable and risky in the near term. My prediction is that after the intelligence explosion, the best AIs will be reliable and safe (insofar as they want to be, that is).
...I guess just now I was responding to a hypothetical interlocutor who agrees that AI R&D automation could come soon but thinks that doesn’t count as “actual impacts in the world.” I’ve met many such people: people who think a software-only singularity is unlikely, people who like to talk about real-world bottlenecks, etc. But you weren’t describing such a person; you were describing someone who also thinks we won’t be able to automate AI R&D for a long time.
There I’d say… well, we’ll see. I agree that AIs are unreliable and risky, and that therefore they’ll be able to do impressive-seeming stuff that looks like they could automate AI R&D well before they actually automate AI R&D in practice. But… probably by the end of 2025 they’ll be hitting that first milestone (imagine e.g. an AI that crushes RE-Bench and can also autonomously research and write ML papers, except the papers are often buggy and almost always banal/unimportant, and the experiments done to produce them have a lot of bugs and wasted compute, and thus AI companies would laugh at the suggestion of putting said AI in charge of a bunch of GPUs and telling it to cook). And then two years later maybe they’ll be able to do it for real, reliably, in practice, such that AGI takeoff happens.
Maybe another thing I’d say is: “One domain where AIs seem to be heavily used in practice is coding, especially coding at frontier AI companies (according to friends who work at these companies and report fairly heavy usage). This suggests that AI R&D automation will happen more or less on schedule.”
I’m not talking narrowly about your claim; I just think this very fundamentally confuses most people’s basic models of the world. People expect, from their unspoken models of “how technological products improve,” that long before you get a mind-bendingly powerful product that’s so good it can easily kill you, you get something that’s at least a little useful to you (and then you get something that’s a little more useful to you, and then something that’s really useful to you, and so on). And in fact that is roughly how it’s working — for programmers, not for a lot of other people.
Because I’ve engaged so much with the conceptual case for an intelligence explosion (i.e. the case that this intuitive model of technology might be wrong), I roughly buy it even though I am getting almost no use out of AIs still. But I have a huge amount of personal sympathy for people who feel really gaslit by it all.
To put it another way: we probably both agree that if we had gotten AI personal assistants that shop for you and book meetings for you in 2024, that would have been at least some evidence for shorter timelines. So their absence is at least some evidence for longer timelines. The question is what your underlying causal model was: did you think that if we were going to get superintelligence by 2027, then we really should see personal assistants in 2024? A lot of people strongly believe that, you (Daniel) hardly believe it at all, and I’m somewhere in the middle.
If we had gotten both the personal assistants I was expecting and benchmark progress 2x faster than I was expecting, my timelines would be the same as yours are now.
That’s reasonable. Seems worth mentioning that I did make predictions in What 2026 Looks Like, and eyeballing them now I don’t think I was saying that we’d have personal assistants that shop for you and book meetings for you in 2024, at least not in a way that really works. (I say at the beginning of 2026 “The age of the AI assistant has finally dawned.”) In other words I think even in 2021 I was thinking that widespread actually useful AI assistants would happen about a year or two before superintelligence. (Not because I have opinions about the orderings of technologies in general, but because I think that once an AGI company has had a popular working personal assistant for two years they should be able to figure out how to make a better version that dramatically speeds up their R&D.)
Indeed, I believe this is the main explanation for why my median timelines are longer than those of, say, Situational Awareness, and why AI so far isn’t nearly as impactful as people back in the day thought it would be.
The big difference between me and a lot of skeptics is that I believe this adds at most 1-2 decades, not multiple decades, to the timeline for AI becoming very, very useful.
Yeah TBC, I’m at even less than 1-2 decades added, more like 1-5 years.
I’ve recently had more experience with AI agents running amok, and I’ve found Claude was actually more aligned: it did stuff I asked it not to do much less than the OpenAI models, enough that it actually made a difference lol
lol what? Can you compile/summarize a list of examples of AI agents running amok in your personal experience? To what extent was it an alignment problem vs. a capabilities problem?
Not running amok, just not reliably following instructions like “only modify files in this folder” or “don’t install pip packages”. Claude follows instructions correctly; some other models are mode-collapsed into a certain way of doing things, e.g. GPT-4o always thinks it’s running Python in the ChatGPT code interpreter, and you need very strong prompting to make it behave in a way specific to your computer.
A hypothetical typical example: it tries to use the file /usr/bin/python because it’s memorized that that’s the path to Python; that fails, so it concludes it must create that folder, which would require sudo permissions, and if it can, it could potentially mess something up.
You mentioned Cybench here. I think Cybench provides evidence against the claim “agents are already able to perform self-contained programming tasks that would take human experts multiple hours”. AFAIK, the most up-to-date Cybench run is in the joint AISI o1 evals. In this study (see Table 4.1, and note the caption), all existing models (other than o3, which was not evaluated here) succeed on 0/10 attempts at almost all the Cybench tasks that take >40 minutes for humans to complete.
I believe Cybench first-solve times are based on the fastest top professional teams, rather than typical individual CTF competitors or cyber employees, for whom the time to complete would probably be much higher (especially for the latter).
Do you think that cyber professionals would take multiple hours to do the tasks with 20-40 min first-solve times? I’m intuitively skeptical.
One (edit: minor) component of my skepticism is that someone told me that the participants in these competitions are less capable than actual cyber professionals, because the actual professionals have better things to do than enter competitions. I have no idea how big that selection effect is, but it at least provides some countervailing force against the selection effect you’re describing.
Do you think that cyber professionals would take multiple hours to do the tasks with 20-40 min first-solve times? I’m intuitively skeptical.
Yes, that would be my guess, medium confidence.
I’m skeptical of your skepticism. Not knowing basically anything about the CTF scene but using the competitive programming scene as an example, I think the median competitor is much more capable than the median software engineering professional, not less. People like competing at things they’re good at.