I don’t think the bitter lesson strictly applies here. Since they’re doing learning, and the bitter lesson roughly says “learning and search are all that is good”, I think they’re in the clear, as long as what they do is compute-scalable.
(this is different from saying there aren’t other reasons an ignorant person (a term I prefer to “outside view” in this context, since it doesn’t hide the lack of knowledge) might have for concluding they won’t succeed)
> By building models which reason inductively, we tackle complex formal language tasks with immense commercial value: code synthesis and theorem proving.
There are commercially valuable uses for tools for code synthesis and theorem proving. But structured approaches of that flavor don’t have a great track record on, e.g., classification tasks where the decision boundaries are messy and chaotic, or on a bunch of other tasks where gradient-descent-lol-stack-more-layers ML shines.
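To make the contrast concrete, here’s a minimal sketch (my own toy example, not anything from the approach being discussed) of the kind of messy-boundary classification task I mean: heavy label noise, no clean rule describing the boundary, and a small gradient-trained MLP handling it comfortably:

```python
# Toy illustration (assumes scikit-learn): a noisy, chaotic decision boundary
# that resists clean symbolic rules but yields easily to stack-more-layers ML.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Two interleaving half-moons with substantial label noise: there is no crisp
# formal rule separating the classes, only a statistical boundary.
X, y = make_moons(n_samples=2000, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small MLP trained by gradient descent fits the messy boundary just fine.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")  # roughly 0.9 here
```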