The big question here seems to be: does intelligence stack? Do a hundred thousand instances of GPT-4 working together make an intelligence as smart as GPT-7?
Thus far the answer seems to be no. There are modest intelligence gains from combining multiple calls in tree-of-thought-style setups, but not large ones, and those setups require carefully hand-structured algorithms.
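To make "carefully hand-structured" concrete, here is a minimal sketch of what a tree-of-thought-style search over LLM calls looks like. The `llm` and `score` functions are placeholders, not any real API: in practice both would be model calls, and the breadth, depth, and pruning parameters are exactly the hand-tuned structure the text refers to.

```python
def llm(prompt: str) -> str:
    # Placeholder: a real setup would call a language model here.
    return prompt + " -> next step"

def score(thought: str) -> float:
    # Placeholder heuristic: a real setup would use a model-based evaluator.
    return float(len(thought))

def tree_of_thought(problem: str, breadth: int = 3, depth: int = 2, keep: int = 2) -> str:
    """Expand `breadth` candidate thoughts per node, keep the best `keep`,
    and repeat for `depth` levels. Every one of these choices is a
    hand-structured design decision, not something the model figures out."""
    frontier = [problem]
    for _ in range(depth):
        candidates = [llm(t) for t in frontier for _ in range(breadth)]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:keep]
    return frontier[0]
```

The point of the sketch is that the intelligence gain, such as it is, lives in the search scaffold a human designed, not in any emergent coordination between the model instances.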
So I think the limitation is in scaffolding techniques, not in the sheer number of instances you can run. I do expect scaffolding LLMs into cognitive architectures to achieve human-level, fully general AGI, but how and when we get there is tricky to predict.
When we have that, I expect it to stack a lot like human organizations do: they can do much more work at once, but they're not much smarter than a single individual, because coordinating and stacking all of that cognitive work is genuinely hard.