Well-known complete bunk can be useful for gesturing at an idea, even though it gives no evidence about related facts. It can be a good explanatory tool when there is little risk that people will take away invalid inferences from it.
(Someone strong-downvoted Bogdan’s comment; I countered with a strong-upvote, since the comment doesn’t by itself seem to commit the error of believing the Sakana hype, and it gates my reply, which I don’t want hidden just because its parent comment sinks into deeply negative karma.)
To clarify further, I think of the Sakana paper roughly the way I think of autoGPT: LM agents were initially overhyped, and autoGPT specifically didn’t work anywhere near as well as some people expected, but I still expect LM agents as a whole to be a huge deal.