“why would innovations discovered by a single AI not spread soon to the others, and why would a non-friendly AI not use those innovations to trade, instead of war?”
One of Eliezer’s central (and, I think, indisputable) claims is that a hand-made AI, after undergoing recursive self-improvement, could be powerful in the real world while being WILDLY unpredictable in its actions. It doesn’t have to be economically rational.
Given a paperclip-manufacturing AI, busily converting the Earth into grey goo and then into paperclips, there’s no reason to believe we could communicate with it well enough to offer to trade rather than be consumed.