As much as is reasonable in a given situation. If it is stronger, and if conquering the other AI is a net gain, it will fight. If it is not stronger, or if peace would be more efficient than war, it will try to negotiate.
The costs of peace will depend on the differences between the two AIs. “Let’s both self-modify to become compatible” is one way to make peace, forever. It has a cost, but it also saves a cost. Agreeing to split the universe into two parts, each governed by one AI, also has a cost. Depending on the specific numbers, the utility-maximizing choice could be “winner takes all” or “let’s split the universe” or “let’s merge into one” or maybe something else I didn’t think of.
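As a toy sketch of that comparison (every number and name below is a made-up assumption, not a real model), here is how one AI might score those options:

```python
# Toy numbers, purely illustrative: one AI comparing its expected utility
# across the three options discussed above.

P_WIN = 0.6          # assumed probability of winning a war
WAR_COST = 0.3       # assumed fraction of the universe's value destroyed by fighting
SPLIT_SHARE = 0.5    # assumed fraction of the universe kept in a negotiated split
MERGE_SHARE = 0.55   # assumed share of original values preserved after merging

options = {
    # winner takes all, minus what the conflict destroys
    "war": P_WIN * (1.0 - WAR_COST),
    # each AI governs its own part; nothing destroyed, but only a part is kept
    "split": SPLIT_SHARE,
    # both self-modify into one compromise agent governing everything
    "merge": MERGE_SHARE,
}

best = max(options, key=options.get)
print(options, "->", best)  # with these numbers: merge (0.55) beats split (0.5) and war (0.42)
```

With different numbers the winner flips, which is the point: the choice between fighting, splitting, and merging is an empirical question about costs and probabilities, not a matter of principle.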
The critical question is, whose utility?
Aumann’s agreement theorem will not help here, since it assumes common priors, and the FAIs will start with different values and different priors.
Each AI tries to maximize its own utility, of course. When they consider merging, each makes an estimate: how much of my original utility can I expect to get after we both self-modify to maximize the new, shared utility function?
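A minimal sketch of that estimate, assuming the merge produces a weighted sum of the two utility functions (the functions and weights below are placeholders chosen for illustration):

```python
import math

# Toy sketch: how much of an AI's ORIGINAL utility survives a merge into a
# weighted compromise utility function. All functions and numbers are assumptions.

def merge(u_a, u_b, weight_a):
    """Compromise utility: a weighted sum of the two original utility functions."""
    return lambda x: weight_a * u_a(x) + (1 - weight_a) * u_b(x)

# Stand-in utilities over one outcome variable x in [0, 1]:
# A prefers x near 1, B prefers x near 0; both have diminishing returns.
u_a = lambda x: math.sqrt(x)
u_b = lambda x: math.sqrt(1 - x)

u_merged = merge(u_a, u_b, weight_a=0.5)

# The merged AI steers the world toward whatever maximizes the compromise;
# A's estimate is how much of its original utility that outcome yields.
candidates = [i / 1000 for i in range(1001)]
outcome = max(candidates, key=u_merged)
print(f"outcome={outcome:.2f}, A keeps {u_a(outcome):.0%} of its maximum utility")
```

With these placeholder utilities, the merged agent settles on the middle outcome and each AI keeps about 71% of its maximum utility, a figure it can then compare against its expected outcome from war or a split.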
Then each AI makes its own choice, and the two choices might well turn out to be incompatible (for example, one AI’s estimates favor merging while the other’s favor fighting).
There is also the issue of information exchange: basically, it will be hard for the two AIs to trust each other, whether about claimed strength during negotiation or about whether the other actually performed the agreed self-modification.