You miss my point. Once we have a GAI, we can have many GAIs, and if things scale amazingly with the number of humans, I see no reason they shouldn’t scale similarly with the number of AIs. Going from “we have a GAI capable of recursive self-improvement that is significantly better at GAI design than any individual human” to “we have a GAI capable of recursive self-improvement that is significantly better at GAI design than all humans collectively” involves the passage of non-zero time, but I don’t expect that time to be significant compared to the time it takes to get there in the first place, absent other significant considerations.
Would the first AI want more AIs around? Wouldn’t it compete more with other AIs than with humans for resources? Or do you assume that humans, having made an AI smarter than an individual human, would work to network AIs into something even smarter?
Either way, the scaling issue is interesting. I would expect the gain from networking AIs to differ from the gain from networking humans, but I’m not sure which would work better. Differences among individual humans are a potential source of conflict, but they can also make the whole greater than the sum of its parts. I wouldn’t expect complementarity among a bunch of identical AIs. Generating useful differences would be an interesting problem.
If there is more to be gained by adding an additional AI than by scaling up the individual AI, then the best strategy for the AI is to create more AIs with the same utility function.
Edited to add: Unless, perhaps, the AI had an explicit dislike of creating others, in which case it would be a matter of which effect was stronger.
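To make that tradeoff concrete, here is a minimal sketch, not taken from the discussion above: at each step it compares the marginal gain from spawning one more copy against the marginal gain from scaling up the existing AI, and takes whichever is larger. The two gain functions are invented placeholders; only the greedy comparison itself reflects the argument.

```python
# Hypothetical sketch of the "replicate vs. scale up" decision rule.
# The gain functions below are made-up placeholders, not a model of anything real;
# only the comparison between marginal gains reflects the argument in the comment.

def marginal_gain_from_copy(n_copies: int) -> float:
    # Assumed: extra copies help, with growing coordination overhead.
    return 1.0 / (1.0 + 0.1 * n_copies)

def marginal_gain_from_scaling(scale: float) -> float:
    # Assumed: scaling a single AI shows diminishing returns.
    return 1.0 / scale

def allocate(steps: int) -> tuple[int, float]:
    n_copies, scale = 1, 1.0
    for _ in range(steps):
        if marginal_gain_from_copy(n_copies) > marginal_gain_from_scaling(scale):
            n_copies += 1   # spawn another copy with the same utility function
        else:
            scale += 1.0    # grow the individual AI instead
    return n_copies, scale

print(allocate(20))  # how many copies vs. how much scaling after 20 greedy steps
```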