You are assuming that mere intelligence is sufficient to give an AI an overwhelming advantage in any conflict. While I concede that this is possible in theory, I consider it much less likely than seems to be the consensus here. This is partly because I am also skeptical about the existential dangers of self-replicating nanotech, bioengineered viruses, and other such technologies that an AI might attempt to use in a conflict.
As long as there is any reasonable probability that an AI would lose a conflict with humans, or suffer serious damage to its capacity to achieve its goals, its best course of action is unlikely to be attempting to wipe out humanity. A paperclip maximizer, for example, would seem to further its goals better by heading to the asteroid belt, where it could pursue them without needing to devote large amounts of computational capacity to winning a conflict with other goal-directed agents.
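To make the decision-theoretic point concrete, here is a minimal sketch of the expected-value comparison I have in mind. All of the numbers (the win probability and the payoffs) are purely illustrative assumptions of mine, not estimates anyone has defended:

```python
# Toy expected-value comparison for a paperclip maximizer deciding between
# fighting humanity for Earth's resources and relocating to the asteroid belt.
# Payoffs are in arbitrary "accessible resource" units; all values are assumptions.

p_win = 0.9            # assumed probability the AI wins an all-out conflict
payoff_win = 1.00      # assumed payoff if it wins (Earth plus everything else)
payoff_lose = 0.0      # destroyed or crippled: effectively no paperclips
payoff_relocate = 0.95 # assumed payoff from expanding unopposed in the belt

ev_conflict = p_win * payoff_win + (1 - p_win) * payoff_lose
ev_relocate = payoff_relocate

print(f"Expected payoff, conflict:  {ev_conflict:.2f}")   # 0.90
print(f"Expected payoff, relocate:  {ev_relocate:.2f}")   # 0.95

# Under these assumed numbers, even a 90% chance of winning leaves conflict
# worse than the risk-free option, because losing costs everything while the
# belt already offers nearly the same resources.
```

The point is only that the ordering flips easily once losing carries a real cost; the specific numbers are guesses and the argument doesn't depend on them.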
For people who’ve voted this down, I’d be interested in your answers to the following questions:
1) Can you envisage a scenario in which an AI of greater-than-human intelligence, with goals not completely compatible with human goals, would choose a course of action other than wiping out humanity?
2) If you answered yes to 1), what probability do you assign to such an outcome, rather than an outcome involving the complete annihilation of humanity?
3) If you answered no to 1), what makes you certain that such a scenario is not possible?