This might be a bit off topic for the focus of your response. I actually agree that deployment of AGI won’t be seen as an act of aggression. But I think it probably should be, if other actors understand the huge advantage that first movers will enjoy, and how tricky a new balance of power will become.
By setting aside alignment concerns entirely, you’re assuming for this scenario not only that alignment is solved, but that the solution is easy enough, or coordination is good enough, that every new AGI is also aligned. I don’t think it’s useful to set the issue that far aside. Eventually, somebody is going to screw up and make one that’s not aligned.
I think a balance-of-power scenario also requires many AGIs to stay at about the same level of capability. If one becomes rapidly more capable than the rest, the balance of power is thrown off.
Another issue with balance-of-power scenarios, even assuming alignment, is that eventually individuals or small groups will be able to create AGI. And by eventually, I mean at most ten years after states and large corporations can do it. At that point a lot of the balance-of-power arguments no longer apply, and you’re more prone to having people do truly stupid or evil (by conventional ethical standards) things with their personally-aligned AGI.
Most of the arguments in Steve Byrnes’ excellent What does it take to defend the world against out-of-control AGIs? apply to hostile actions from sane state and corporate actors. Even more of them apply to non-state actors with weirder goals. One pivotal act he doesn’t mention is forming a panopticon, monitoring and decrypting every human communication for the purpose of preventing further AGI development. Having this amount of power would also enable easy manipulation and sabotage of political systems, and it’s hard to imagine a balance of power persisting once one corporation or government enjoys this power.