This may fall into the following type of reasoning: “Superintelligent AI will be superhuman in any human capability X. Humans can cooperate. Thus SAI will have superhuman capability to cooperate.”
The problem with such a conjecture is that if we take an opposite human quality not-X, SAI will also have superhuman capability in it. For example, if X = cheating, then superintelligent AI will have superhuman capability in cheating.
However, SAI can’t simultaneously be a super-cooperator and a super-cheater.
I think superintelligent AI will probably have superhuman capability at cheating in an absolute sense, i.e., they’ll be much better than humans at cheating humans. But I don’t see a reason to think they’ll be better at cheating other superintelligent AI than humans are at cheating other humans, since SAI will also be superhuman at detecting and preventing cheating.
But can a 10,000-IQ AI cheat a 1,000-IQ AI? If yes, only equally powerful AIs will cooperate.