Very interesting.
In favor:
1) The currently leading models (LLMs) are the ultimate hot messes;
2) The whole point of the G in AGI is that it can do many things; focusing on a single goal is possible, but it is not a “natural mode” for general intelligence.
Against:
A superintelligent system will probably have enough capacity overhang to spawn multiple threads, each of which would look to us like a supercoherent superintelligent agent. So even a single system is likely to give rise to multiple “virtual supercoherent superintelligent AIs”, alongside the less coherent, more exploratory behaviors it would also perform.
Still, it’s a good argument against a supercoherent superintelligent singleton (even a single system that does have supercoherent superintelligent subthreads is likely to host a variety of such subthreads).