Now that I think about it, this is basically the path LLMs are likely to take, though I'd say it caps out a little lower than humans in general. I give it over 50% probability.
The basic issue here is that the reasoning Transformers do is too inefficient for multi-step problems, and I expect most real-world applications where AI outperforms humans will require good multi-step reasoning.
The unexpected success of LLMs isn't so much a story of AI progress as it is a story of how bad our reasoning often is in scenarios outside our ancestral environment, and of how much humans inflate their own strengths, like intelligence.