They can read each other’s source code, and thus trust much more deeply!
Being able to read source code doesn’t automatically increase trust—you also have to be able to verify that the code being shared with you actually governs the AGI’s behavior, despite that AGI’s incentives and abilities to fool you.
(Conditional on the AGIs having strongly aligned goals with each other, sure, this degree of transparency would help them with pure coordination problems.)