There is no reason to assume that an AI whose goals turn out to be hostile to us, despite our intentions, must be stupid.
Humans often use birth control to have sex without procreating. If evolution were a more effective design algorithm, it would never have allowed such a thing.
Yet the fact that we have different goals from the process that designed us does not imply that we are stupid or incoherent.
Nor does the fact that evolution ‘failed’ to achieve its goals in every person who voluntarily abstains from reproducing (and doesn’t, e.g., hugely benefit their siblings’ reproductive chances in the process) imply that evolution is too weak and stupid to produce anything interesting or dangerous. We can’t generalize from one failure to the conclusion that evolution fails at everything; analogously, we can’t infer from the fact that a programmer failed to make an AI Friendly that the programmer almost certainly also failed to make the AI superintelligent. (Though we may be able to infer both from base rates.)
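The base-rate point in the parenthetical can be made concrete with a toy calculation. The sketch below is mine, not the author's: the numbers (a 10% base rate of achieving Friendliness, a 20% base rate of achieving superintelligence) and the independence assumption are purely illustrative. It shows that if the two outcomes are roughly independent, learning that an AI turned out unFriendly barely changes our estimate that it is superintelligent.

```python
# Toy illustration with hypothetical numbers: if "the AI is Friendly"
# and "the AI is superintelligent" are independent outcomes, then
# observing a Friendliness failure doesn't move our estimate that
# the AI is superintelligent.

p_friendly = 0.10   # assumed base rate of getting Friendliness right
p_superint = 0.20   # assumed base rate of achieving superintelligence

p_not_friendly = 1 - p_friendly

# Joint probability under the independence assumption.
p_superint_and_not_friendly = p_superint * p_not_friendly

# Conditional probability of superintelligence given unFriendliness.
p_superint_given_not_friendly = p_superint_and_not_friendly / p_not_friendly

print(p_superint)                     # 0.2 (the prior)
print(p_superint_given_not_friendly)  # 0.2 -- unchanged: one failure
                                      # does not imply the other
```

To the extent the two outcomes are correlated at all in practice, the conditional estimate would shift, but the direction and size of that shift come from the base rates, not from the bare fact of a single failure.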
Failure is a necessary part of mapping out the area where success is possible.