By building models which reason inductively, we tackle complex formal language tasks with immense commercial value: code synthesis and theorem proving.
There are commercially valuable uses for code-synthesis and theorem-proving tools. But structured approaches of that flavor don’t have a great track record on, e.g., classification tasks where the boundary conditions are messy and chaotic, and similarly for a bunch of other tasks where gradient-descent-lol-stack-more-layers ML shines.
Is this coming from deep knowledge about Symbolica’s method, or just from outside-view considerations like “usually people who try to think too big-brained end up failing when it comes to AI”?
Outside view (bitter lesson).
Or at least that’s approximately true. I’ll have a post on why I expect the bitter lesson to hold eventually, but it’s likely to be a while. If you read this blog post, you can probably predict my reasoning for why I expect “learn only clean, composable abstractions whose boundaries cut reality at the joints” to break down as an approach.
I don’t think the bitter lesson strictly applies here. They’re doing learning, and the bitter lesson says “learning and search are all that is good”, so I think they’re in the clear, as long as what they do is compute-scalable.
(This is different from saying there aren’t other reasons an ignorant person (a term I like more than “outside view” in this context, since it doesn’t hide the lack of knowledge) might use to conclude they won’t succeed.)