The problem is in the size of the system relative to human cognition. Using specialization and management can increase the size of the system we can manage, but not without limit. That is why a self-improving AI is a potential threat: it can increase the size of the system it can manage well beyond what we can understand. It is also why I don't think provably Friendly AI is possible (though I hope I am wrong about that), and why GAI will be developed incrementally from specialized AIs or from general but less-than-intelligent systems. It is also what gives me some hope for intelligence amplification keeping up with GAIs, at least for a while; we don't need to start from scratch, just keep improving the size of the systems we can manage.
Control and knowledge don't care about scale. One can learn things about whole galaxies simply by observing them. When you want to "manage" an AI, the complexity of your concern is restricted to the complexity of your wish.
Size, in describing a system, isn't about scale; it's about the number of interacting components and the complexity of their interactions. And I don't understand what you mean in your second sentence; it doesn't make sense to me.
A galaxy also isn't "just" about scale: it does contain more stuff, more components (but how do you know that, and what does it mean?). On the second sentence: I meant using a telescope to make precise observations.