As a subset of the claim that AI is vastly better at everything, being vastly better at coordination is plausible. The specific arguments that AI somehow has introspective access to its “utility function” (unlike any intelligence or optimization process we know of today), or can provide non-behavioral evidence of its intent to similarly-powerful AIs, seem pretty weak.
I haven’t seen anyone attempt to model the shifting equilibria and negotiation/conflict among AIs (and coalitions of AIs and of AIs + humans) with differing goals and levels of computational power, so speculating on how “coordination” as a general topic will play out seems pretty unfounded.
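To make the gap concrete, here is a deliberately toy sketch of the kind of model that seems to be missing. Everything below (the payoff function, the greedy matching rule, the compute-drift dynamics) is invented for illustration, not a claim about how real AI coalitions would behave: agents with fixed goals and drifting compute budgets re-evaluate pairwise coalitions each round, and which pairings are stable shifts as relative power changes.

```python
import random

random.seed(0)

class Agent:
    def __init__(self, name, goal, compute):
        self.name = name
        self.goal = goal        # position on a 1-D "goal axis"; alignment = closeness
        self.compute = compute  # crude proxy for bargaining power

def coalition_value(a, b):
    # Invented payoff: pooled compute times a synergy factor (up to 1.5x)
    # that shrinks with goal misalignment. Coalitions only beat going solo
    # when goals are within 0.5 of each other.
    return (a.compute + b.compute) * (1.5 - abs(a.goal - b.goal))

def solo_value(a):
    return a.compute

agents = [Agent(f"A{i}", random.random(), random.uniform(0.5, 2.0)) for i in range(6)]

for round_ in range(3):
    # Each round, relative compute drifts, which reshuffles the matching below
    # even though goals stay fixed.
    for a in agents:
        a.compute *= random.uniform(0.8, 1.3)
    # Greedy pairwise matching, strongest agent first: a pair forms only if
    # the coalition strictly beats both agents staying solo.
    unmatched = sorted(agents, key=lambda a: -a.compute)
    pairs = []
    while len(unmatched) >= 2:
        a = unmatched.pop(0)
        best = max(unmatched, key=lambda b: coalition_value(a, b))
        if coalition_value(a, best) > solo_value(a) + solo_value(best):
            unmatched.remove(best)
            pairs.append((a.name, best.name))
    print(f"round {round_}: coalitions = {pairs}")
```

Even in this tiny setup the stable coalitions reshuffle from round to round, which is exactly the dynamic that hand-wavy “AIs will coordinate” arguments skip over.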