For people who’ve voted this down, I’d be interested in your answers to the following questions:
1) Can you envisage a scenario in which an AI of greater-than-human intelligence, with goals not completely compatible with human goals, would choose a course of action other than wiping out humanity?
2) If you answered yes to 1), what probability do you assign to such an outcome, as opposed to one involving the complete annihilation of humanity?
3) If you answered no to 1), what makes you certain that such a scenario is not possible?