For another thing, out-of-control AGIs will have asymmetric advantages over good AGIs: the ability to steal resources; to manipulate people and institutions via lies and disinformation; to cause wars, pandemics, blackouts, gray goo, and so on; and freedom from the coordination challenges that different (human) actors with different beliefs and goals must contend with. More on this topic here.
Is your claim that out-of-control AGIs will, all things considered, have an advantage? Because I expect the human environment to be very hostile toward AGIs that are not verified to be good, or that turn out to lie, cheat, and steal, or to act uncooperatively in other ways.
Thanks for the elaboration, looking forward to the next posts. :)