The thing is, in evolutionary terms, humans were human-maximizers. To use a more direct example, a lot of empires throughout history have been empire-maximizers. Now, a true maximizer would probably turn on allies (or neutrals) faster than a human, a human tribe, or a human state would, although I think part of what constrained this in human evolution is 1) that it's difficult to constantly re-check whether it's worth it to betray your allies, and 2) that it's risky to try when you're only just barely past the point where you think it's worth it (see the toy model below). There are also the other humans/other nations watching, which may or may not have an analogue in interstellar politics.
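To make point 2 concrete, here's a minimal sketch of why defecting when you're only barely past break-even is dangerous. Everything here is a toy model of my own (the prior, the noise level, and the payoff numbers are all made up): the agent only sees a noisy estimate of the true payoff of betrayal, so a small estimated edge is frequently an illusion, while a large one rarely is.

```python
import random

def mistake_rate(estimate_band: tuple[float, float],
                 noise: float = 1.0,
                 trials: int = 200_000) -> float:
    """Fraction of betrayals that were actually net losses, given that
    the agent only observes a noisy estimate of the true payoff edge.

    The true edge of betraying (vs. staying loyal) is drawn from a wide
    prior; the agent sees edge + Gaussian noise, and we condition on the
    estimate landing in `estimate_band` (i.e. the agent decides to betray).
    """
    lo, hi = estimate_band
    mistakes = hits = 0
    for _ in range(trials):
        true_edge = random.gauss(0.0, 2.0)               # hypothetical prior over situations
        estimate = true_edge + random.gauss(0.0, noise)  # what the agent actually perceives
        if lo <= estimate <= hi:                         # agent thinks betrayal is worth it
            hits += 1
            if true_edge < 0:                            # ...but in fact it wasn't
                mistakes += 1
    return mistakes / hits if hits else float("nan")

# Barely past break-even vs. clearly past it:
print(f"estimate ~0.1: {mistake_rate((0.0, 0.2)):.0%} of betrayals backfire")
print(f"estimate ~3.0: {mistake_rate((2.9, 3.1)):.0%} of betrayals backfire")
```

With these invented numbers, roughly 45% of "barely worth it" betrayals turn out to be net losses, versus well under 1% of "obviously worth it" ones, which is one plausible reason evolution would install a buffer well past the apparent break-even point before defection.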
...although I’ve just reminded myself that this discussion is largely pointless anyway, since the chance of encountering aliens close enough to play politics with is really tiny, and so is the chance of inventing an AI we could play politics with. The closest things we have a significant chance of encountering are a first-strike-wins situation, or a MAD situation (which I define as “first strike would win, but the other side can see it coming and retaliate”), both of which change the dynamics drastically. (I suppose the argument still applies in first-strike-wins, except that in that situation the other side will never tell you their opinion on morality, and you’re unlikely to know with certainty that the other side is an optimizer without them telling you.)
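For concreteness, here's one way to model how those two situations differ. The payoff numbers are entirely invented, and `p_detect` (the chance the defender sees the strike coming in time to retaliate) is my own framing: first-strike-wins is the `p_detect = 0` corner, and MAD is the high-`p_detect` regime where striking becomes negative expected value.

```python
# Toy one-shot payoff comparison: first-strike-wins vs. MAD.
# All numbers are invented for illustration.

WIN, PEACE, MUTUAL_RUIN = 10, 0, -8

def ev_of_striking_first(p_detect: float) -> float:
    """Expected payoff of striking first, given detection probability."""
    # Undetected: you win outright. Detected: retaliation ruins both sides.
    return (1 - p_detect) * WIN + p_detect * MUTUAL_RUIN

for p in (0.0, 0.5, 0.9):
    label = "first-strike-wins" if p == 0.0 else "toward MAD"
    print(f"p_detect={p:.1f} ({label}): "
          f"EV of striking = {ev_of_striking_first(p):+.1f} vs. {PEACE:+d} for peace")
```

Under these assumptions, striking strictly dominates peace at `p_detect = 0` and is strictly worse than peace by `p_detect = 0.9`, which is the sense in which the two regimes produce drastically different politics.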