After writing this post, I recalled Carl Shulman’s Whole Brain Emulation and the
Evolution of Superorganisms, which discusses a similar topic, but with WBEs instead of de novo AGIs. I think the main difference between WBE and AGI in this regard is that WBE-based superorganisms probably can’t grow as large as AGI-based ones, because with WBE there’s a tradeoff between having all the WBEs share the same values and productive efficiency. (If you assign each task to a WBE that is best at that task, you’ll end up with a bunch of WBEs with different values who then have to coordinate with each other.) With AGI, each copy of an AGI can specialize into some area and probably still maintain value alignment with the overall superorganism.
(However, with some of the more advanced techniques for internally coordinating WBE-based superorganisms, you may be able to get pretty close to what is possible with AGI.)
Here’s a quote from Carl’s paper about the implications of increased coordination / economies of scale due to WBE (which would perhaps apply to AGI even more strongly):
The market considerations discussed above might be circumvented by regulation (although enforcement might be difficult without emulation police officers, perhaps superorganisms for value stability) in a given national jurisdiction, but such regulations could impose large economic costs that would affect international competition. With economic doubling times of perhaps weeks, a major productivity or growth advantage from self-sacrificing software intelligences could quickly give a single nation a preponderance of economic and military power if other jurisdictions lacked or prohibited such entities (Hanson 1998b, forthcoming). Other nations might abandon their regulations to avoid this outcome, or the influence of the less regulated nation might spread its values as it increased its capabilities, including by military means. A sufficiently large economic or technological lead could enable a leading power to disarm others, but in addition a society dominated by superorganisms could also be much more willing to risk massive casualties to attain its objectives.