Can you point to any challenges that seem (a) necessary to overcome in order to speed up AI R&D by 5x, and (b) not engineering challenges?
We’d discussed that some before, but one way to distill it is… I think autonomously doing nontrivial R&D engineering projects requires sustaining coherent agency across a large “inferential distance”. “Time” in the sense of “long-horizon tasks” is a solid proxy for it, but not really the core feature. Instead, it’s about being able to maintain a stable picture of the project even as you move from a version that’s fairly simple in terms of memorized templates to a sprawling, highly specific, real-life mess.
My sense is that, even now, LLMs are terrible at this[1] (including Anthropic’s recent coding agent), and that scaling has not improved this dimension much at all. So the straightforward projection of current trends is not in fact “autonomous R&D agents in <3 years”; some qualitative advancement is needed to get there.
Making them more useful seems analogous to selective breeding or animal training.
Are they useful? Yes. Can they be made more useful? For sure. Is the impression that their current rate of improvement will have them 5x’ing AI R&D in <3 years a deceptive one, produced by a selection process we set up that spits out exactly the kind of thing that fools us into forming this impression? Potentially yes, I argue.
Having now looked it up, I see that METR themselves admit that the environments in which they test are unrealistically “clean”, such that, I imagine, solving the task correctly is the “path of least resistance” in a certain sense (see “systematic differences from the real world” here).