> But as a secondary point, I think today’s models can already use bash tools reasonably well.
Perhaps that’s true; I haven’t seen many examples of them trying. I did see Buck’s anecdote, which was a good illustration of a model doing a simple task competently (finding the IP address of an unknown machine on the local network).
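For concreteness, here is a minimal Python sketch of how that kind of task might be approached with standard command-line tools. The anecdote doesn’t say which commands were actually run, and the subnet below is an assumption:

```python
import subprocess

# One plausible way to find an unknown machine on the local network:
# ping-sweep the subnet, then cross-check the ARP cache. Assumes nmap is
# installed and the LAN is 192.168.1.0/24 (an assumption; substitute your
# network's actual address range).
SUBNET = "192.168.1.0/24"

# Host discovery only (-sn: ping scan, no port scan).
scan = subprocess.run(["nmap", "-sn", SUBNET], capture_output=True, text=True)
print(scan.stdout)

# The ARP cache maps MAC addresses to IPs for recently seen hosts.
arp = subprocess.run(["arp", "-a"], capture_output=True, text=True)
print(arp.stdout)
```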
> I don’t work in AI so maybe I don’t know what parts of R&D might be most difficult for current SOTA models. But based on the fact that large-scale LLMs are sort of a new field that hasn’t had that much labor applied to it yet, I would have guessed that a model which could basically just do mundane stuff and read research papers could spend a shitload of money and FLOPS to run a lot of obviously informative experiments that nobody else has properly run, and polish a bunch of stuff that nobody else has properly polished.
Your guesses on AI R&D are reasonable!

Apparently this has been tested extensively; for example:

https://x.com/METR_Evals/status/1860061711849652378

[Disclaimer: I have some association with the org that ran that (I write some code for them), but I don’t speak for them; opinions are my own.]
Also, Anthropic have a trigger in their Responsible Scaling Policy (RSP) which is somewhat similar to what you’re describing; I’ll quote part of it:
> Autonomous AI Research and Development: The ability to either: (1) Fully automate the work of an entry-level remote-only Researcher at Anthropic, as assessed by performance on representative tasks or (2) cause dramatic acceleration in the rate of effective scaling.
Also, in his interview, Dario Amodei spoke about AI being applied to programming.
My point is that lots of people have their eyes on this; it seems not to be solved yet, and it takes more than connecting an LLM to bash.
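To make that last clause concrete, here is roughly what “connecting an LLM to bash” amounts to, as a hedged sketch: `query_model` is a hypothetical placeholder for whatever LLM API one would call, not a real library function. The scaffold itself is trivial, which is the point; the bottleneck is the model’s competence, not the plumbing.

```python
import subprocess

def query_model(transcript: str) -> str:
    """Hypothetical placeholder for an LLM API call: given the transcript
    so far, return the next shell command to run (or "DONE" to stop)."""
    raise NotImplementedError

def run_agent(task: str, max_steps: int = 20) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        command = query_model(transcript)
        if command.strip() == "DONE":
            break
        # Run the model's proposed command and feed its output back in.
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=60
        )
        transcript += f"$ {command}\n{result.stdout}{result.stderr}\n"
    return transcript
```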
Still, I don’t want to accelerate this.