It seems plausible to me that we can improve the cognition of such agents the same way we improve human cognition: by applying various rationality techniques to organise their thoughts in a more productive manner.
For example, instead of just asking an LLM “Develop a plan to achieve X” and simply going with its answer, we then prompt the model to find possible failure modes in the plan, then to find ways around those failure modes, to propose alternative options, and so on.
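The critique-and-revise loop above can be sketched as a small prompt chain. This is a minimal illustration, not a real library: `ask` is a hypothetical stand-in for whatever LLM completion call you use, and the prompt wording is an assumption.

```python
def refine_plan(ask, goal, rounds=2):
    """Draft a plan, then repeatedly ask the model to critique and revise it.

    `ask` is any callable that takes a prompt string and returns the
    model's text response (hypothetical placeholder for a real API client).
    """
    # First pass: the naive "just ask for a plan" step.
    plan = ask(f"Develop a plan to achieve: {goal}")
    for _ in range(rounds):
        # Ask the model to attack its own plan...
        critique = ask(f"List possible failure modes of this plan:\n{plan}")
        # ...then to revise the plan in light of that critique.
        plan = ask(
            "Revise the plan to address these failure modes, "
            "considering alternative options.\n"
            f"Plan:\n{plan}\nFailure modes:\n{critique}"
        )
    return plan
```

Each round costs two extra model calls, so the number of rounds trades off cost against how thoroughly the plan gets stress-tested.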
We may not get a 10,000-IQ intelligence that leaves all humans totally in the dust within ten years. And this is another good thing: a future where we make smarter and smarter LLM-based agents through clever chains of prompt engineering looks more like a slow takeoff than a fast one. But I believe we could achieve human-level, and somewhat above human-level, AGI this way.