As your post stands, you may be attributing qualities to Friendly AIs that apply only to Solitary Friendly AIs in complete control of the world.
Just to extend on this: it seems most likely that multiple AIs would be subject to dynamics similar to evolution, and a totally ‘Friendly’ AI would probably tend to lose out against more self-serving (but not necessarily evil) AIs. Or, like the ‘young revolutionary’ of the first post, a truly enlightened Friendly AI would be forced to assume power simply to deny it to any less moral AIs.
Philosophical questions aside, the likely reality of future AI development is surely that it, too, will go to those able to seize the resources to propagate and improve themselves.
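A minimal sketch of the evolutionary dynamic being claimed here, under loudly hypothetical assumptions: self-restraint is modeled as a small fitness penalty, and the two strategies compete under simple replicator dynamics. The fitness numbers are illustrative inventions, not anything established about real AI systems.

```python
# Toy replicator dynamics: a "friendly" (self-restricting) strategy vs. a
# "self-serving" one. The fitness gap is a hypothetical assumption standing
# in for the resources a Friendly AI forgoes by refusing certain actions.

friendly_share = 0.5          # initial population share of Friendly AIs
FRIENDLY_FITNESS = 1.0        # assumed growth rate with self-imposed limits
SELF_SERVING_FITNESS = 1.1    # assumed growth rate without them

for generation in range(100):
    mean_fitness = (friendly_share * FRIENDLY_FITNESS
                    + (1 - friendly_share) * SELF_SERVING_FITNESS)
    # Replicator equation: a strategy's share grows in proportion to its
    # fitness relative to the population mean.
    friendly_share *= FRIENDLY_FITNESS / mean_fitness

print(f"Friendly share after 100 generations: {friendly_share:.4f}")
```

With these numbers the Friendly share drops to roughly 10⁻⁴ within 100 generations; the only point of the sketch is that a persistent relative disadvantage compounds, which is the ‘lose out’ mechanism being asserted.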
Why would a Friendly AI lose out? It can do anything any other AI can do. Unlike humans, it doesn’t have to worry about becoming corrupt if it starts committing atrocities for the good of humanity.
You have it backwards. The difference between a Friendly AI and an unfriendly one is entirely one of restrictions placed on the Friendly AI. So an unfriendly AI can do anything a Friendly AI could, but not vice versa.
The Friendly AI could lose out because it would be restricted from committing atrocities, or at least those atrocities that are strictly bad for humans even in the long run.
Your comment that they can commit atrocities for the good of humanity without worrying about becoming corrupt is a reason to be fearful of “friendly” AIs.