The tendency to be corrupted by power is a specific biological adaptation, supported by specific cognitive circuits, built into us by our genes for a clear evolutionary reason. It wouldn’t spontaneously appear in the code of a Friendly AI any more than its transistors would start to bleed.
This is critical to your point. But you haven’t established this at all. You made one post with a just-so story about males in tribes perceiving those above them as corrupt, and then assumed, with no logical justification that I can recall, that this meant that those above them actually are corrupt. You haven’t defined what corrupt means, either.
I think you need to sit down and spell out what ‘corrupt’ means, and then Think Really Hard about whether those in power actually are more corrupt than those not in power; and if so, whether the mechanisms that lead to that result are a result of the peculiar evolutionary history of humans, or of general game-theoretic / evolutionary mechanisms that would apply equally to competing AIs.
You might argue that if you have one Sysop AI, it isn’t subject to evolutionary forces. This may be true. But if that’s what you’re counting on, it’s very important for you to make that explicit. I think that, as your post stands, you may be attributing qualities to Friendly AIs, that apply only to Solitary Friendly AIs that are in complete control of the world.
Just to extend on this: it seems most likely that multiple AIs would actually be subject to dynamics similar to evolution, and a totally ‘Friendly’ AI would probably tend to lose out against more self-serving (but not necessarily evil) AIs. Or, just like the ‘young revolutionary’ of the first post, a truly enlightened Friendly AI would be forced to assume power to deny it to any less moral AIs.
Philosophical questions aside, the likely reality of future AI development is surely that power will go to those AIs that are able to seize the resources to propagate and improve themselves.
Why would a Friendly AI lose out? They can do anything any other AI can do. They’re not like humans, where they have to worry about becoming corrupt if they start committing atrocities for the good of humanity.
You have it backwards. The difference between a Friendly AI and an unfriendly one is entirely one of restrictions placed on the Friendly AI. So an unfriendly AI can do anything a friendly AI could, but not vice-versa.
The friendly AI could lose out because it would be restricted from committing atrocities, or at least atrocities which were strictly bad for humans, even in the long run.
Your comment that they can commit atrocities for the good of humanity without worrying about becoming corrupt is a reason to be fearful of “friendly” AIs.
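The “restriction” argument above can be made concrete with a toy model. Here is a minimal replicator-dynamics sketch (my own illustration, not from the thread), assuming a single made-up parameter: a restricted (Friendly) strategy captures only 80% of the payoff an unrestricted strategy captures, because some profitable actions are off-limits to it. Under standard replicator dynamics, the restricted strategy’s resource share shrinks toward zero even if it starts with a large majority.

```python
def replicator_step(share_restricted, f_restricted=0.8, f_unrestricted=1.0):
    """One discrete replicator update: each strategy's share grows in
    proportion to its fitness relative to the population mean.

    The 0.8 payoff for the restricted strategy is an illustrative
    assumption, standing in for the actions a Friendly AI won't take."""
    mean_fitness = (share_restricted * f_restricted
                    + (1 - share_restricted) * f_unrestricted)
    return share_restricted * f_restricted / mean_fitness

# Friendly (restricted) agents start with 90% of all resources...
share = 0.9
for _ in range(100):
    share = replicator_step(share)

# ...yet their share collapses, because any constant fitness deficit
# compounds geometrically under replication.
print(share)
```

The point of the toy model matches the comments above: the mechanism is purely game-theoretic and needs no human-style corruption — a strict subset of available actions is enough to lose, unless the Friendly AI pre-empts the competition entirely.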