If there are indeed gains to be had in coordination of agents that dwarf gains to be had in improvement of individual agents, why couldn’t an AI simply simulate multiple agents?
That may be a faster route to AI. But my point was that making an AI that’s smarter than the combined intelligence of humans will be much harder (even for an AI that’s already fairly smart and well-endowed with resources) than making one that’s smarter than an individual human. That moves this risk even further into the future. I’m more worried about the many risks that are more imminent.
You miss my point. Once we have a GAI, we can have many GAIs, and if capability scales that well with the number of humans, I see no reason it shouldn't scale similarly with the number of AIs. Going from "we have a GAI capable of recursive self-improvement that is significantly better at GAI design than any individual human" to "we have a GAI capable of recursive self-improvement that is significantly better at GAI design than all humans collectively" involves the passage of nonzero time, but absent other significant considerations I don't expect it to be long compared to the time it takes to get there in the first place.
Would the first AI want more AIs around? Wouldn't it compete more with AIs than with humans for resources? Or do you assume that humans, having made an AI smarter than an individual human, would work to network AIs into something even smarter?
Either way, the scaling issue is interesting. I would expect the gain from networking AIs to differ from the gain from networking humans, but I'm not sure which would work better. Differences among individual humans are a potential source of conflict, but they can also make the whole greater than the sum of its parts. I wouldn't expect complementarity among a bunch of identical AIs. Generating useful differences would be an interesting problem.
If there is more to be gained by adding an additional AI than there is to be gained by scaling up the individual AI, then the best strategy for the AI is to create more AIs with the same utility function.
Edited to add: Unless, perhaps, the AI had an explicit dislike of creating others, in which case it would be a matter of which effect was stronger.
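As a rough illustration of the trade-off in the last two comments, here is a minimal toy sketch in Python. The function `preferred_strategy` and its parameters (`gain_from_copy`, `gain_from_self_improvement`, `dislike_of_creating_others`) are hypothetical names introduced only to make the comparison concrete; they are not part of the discussion above.

```python
# Toy decision rule: compare the expected gain from spawning a copy with the
# same utility function against the gain from scaling up the existing AI,
# optionally discounted by an explicit distaste for creating others.
# All quantities are hypothetical and purely illustrative.

def preferred_strategy(gain_from_copy: float,
                       gain_from_self_improvement: float,
                       dislike_of_creating_others: float = 0.0) -> str:
    """Return which option this toy model favors for the next unit of resources."""
    net_copy_gain = gain_from_copy - dislike_of_creating_others
    if net_copy_gain > gain_from_self_improvement:
        return "create another AI with the same utility function"
    return "scale up the existing AI"

# Copying wins until the distaste term outweighs its advantage.
print(preferred_strategy(gain_from_copy=1.5, gain_from_self_improvement=1.0))
print(preferred_strategy(gain_from_copy=1.5, gain_from_self_improvement=1.0,
                         dislike_of_creating_others=0.8))
```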