I suspect current approaches significantly, perhaps even drastically, under-elicit automated ML research capabilities.
I'd guess the average cost of producing a decent ML paper (in the West, at least) is at least $10k, and probably closer to the $100k's.
In contrast, Sakana's AI scientist cost on average $15/paper and $0.50/review. PaperQA2, which claims superhuman performance at some scientific Q&A and lit-review tasks, costs something like $4/query. Other papers claiming human-range performance on ideation or reviewing also probably have costs of <$10/idea or review.
Even the automated ML R&D benchmarks from METR or UK AISI don't give me the impression of coming anywhere near what, e.g., a 100-person team at OpenAI could accomplish in 1 year if they tried really hard to automate ML.
A fairer comparison would probably be to actually try hard at building the kind of scaffold that could productively spend ~$10k on inference. I suspect the resulting agent probably wouldn't do much better than one using $100 of inference, but it seems hard to be confident. And it seems harder still to be confident about what will happen even in just 3 years' time, given that pretraining compute seems likely to keep growing about 10x/year and that there might be stronger pushes towards automated ML.
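To make this a bit more concrete, here is a minimal, purely illustrative sketch of what a budget-aware scaffold's control loop might look like: it keeps sampling and refining candidate solutions until a fixed dollar budget is exhausted, then returns the best-scoring attempt. `call_model`, `score`, and the per-call cost are hypothetical placeholders (not any existing system's API), and a real scaffold would obviously be far more elaborate.

```python
# A toy, budget-aware best-of-n loop -- an illustrative sketch only, not any
# lab's actual scaffold. The idea: keep spending inference on new attempts and
# refinements until a dollar budget (e.g. ~$10k) runs out, then return the best.

from dataclasses import dataclass, field
from typing import Callable, Optional, Tuple


@dataclass
class BudgetedScaffold:
    # Hypothetical stand-ins the user would supply:
    call_model: Callable[[str], Tuple[str, float]]  # prompt -> (output, cost in USD)
    score: Callable[[str], float]                   # higher is better (verifier/critic)
    budget_usd: float = 10_000.0
    spent_usd: float = field(default=0.0, init=False)

    def run(self, task: str) -> Optional[str]:
        prompt = task
        best_output, best_score = None, float("-inf")
        while self.spent_usd < self.budget_usd:
            output, cost = self.call_model(prompt)
            self.spent_usd += cost
            s = self.score(output)
            if s > best_score:
                best_output, best_score = output, s
                # Feed the current best attempt back in as context for refinement.
                prompt = f"{task}\n\nBest attempt so far (score {s:.2f}):\n{output}\nImprove on it."
        return best_output


if __name__ == "__main__":
    import random

    # Dummy model and scorer just to exercise the control flow: each "call"
    # costs $100, so a $10k budget allows ~100 attempts.
    dummy_model = lambda prompt: (f"candidate-{random.randint(0, 999)}", 100.0)
    dummy_score = lambda output: random.random()

    agent = BudgetedScaffold(call_model=dummy_model, score=dummy_score)
    print(agent.run("Propose and empirically test a small ML research idea."))
```

The point of the sketch is just that "spend ~100x more on inference" is a real design axis (more attempts, search, verification, refinement), not that this particular loop would be competitive.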
This potential under-elicitation seems pretty bad, both with respect to underestimating the probability of shorter timelines and faster takeoffs, and in more specific ways too. E.g., we could be substantially underestimating the risks of open-weights Llama-3 (or soon Llama-4).
> In contrast, Sakana's AI scientist cost on average $15/paper and $0.50/review.
The Sakana AI stuff is basically totally bogus, as I've pointed out in like 4 other threads (and as Scott Alexander recently pointed out as well). It does not produce anything close to fully formed scientific papers. Its output is really not better than just prompting o1 yourself. Of course, o1 and even Sonnet and GPT-4 are very impressive, but there is no update to be made after you've played around with those.
I agree that ML capabilities are under-elicited, but the Sakana AI stuff really is very little evidence of that, beyond someone being good at marketing and at setting up scaffolding that produces fake prestige signals.
> It does not produce anything close to fully formed scientific papers. Its output is really not better than just prompting o1 yourself. Of course, o1 and even Sonnet and GPT-4 are very impressive, but there is no update to be made after you've played around with those.
(Again) I think this is missing the point that we've now (for the first time, to my knowledge) observed an early demo of the full research workflow being automatable, as flawed as the outputs might be.
I completely agree, and we should just obviously build an organization around this: automating alignment research while also getting a better grasp on maximum current capabilities (and a better picture of how we expect them to grow).
(This is my intention, and I have had conversations with Bogdan about this, but I figured I’d make it more public in case anyone has funding or ideas they would like to share.)
> A fairer comparison would probably be to actually try hard at building the kind of scaffold that could productively spend ~$10k on inference. I suspect the resulting agent probably wouldn't do much better than one using $100 of inference, but it seems hard to be confident.
Also, there are notable researchers and companies working right now on developing 'a truly general way of scaling inference compute', and I think it would be prudent to consider what happens if they succeed.
(This also has implications for automating AI safety research).
> (This also has implications for automating AI safety research).
To spell it out more explicitly: the current way of scaling inference, chain-of-thought (CoT) reasoning, seems pretty good against some of the most worrying threat models, which often depend on opaque model internals.
> A fairer comparison would probably be to actually try hard at building the kind of scaffold that could productively spend ~$10k on inference. I suspect the resulting agent probably wouldn't do much better than one using $100 of inference, but it seems hard to be confident. And it seems harder still to be confident about what will happen even in just 3 years' time, given that pretraining compute seems likely to keep growing about 10x/year and that there might be stronger pushes towards automated ML.
A related announcement, explicitly targeting ‘building an epistemically sound research agent @elicitorg that can use unlimited test-time compute while keeping reasoning transparent & verifiable’: https://x.com/stuhlmueller/status/1869080354658890009.
Figures 3 and 4 from 'MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering' seem like some amount of evidence for this view.