It’s because computers do what you program them to do. If you build an AI with superhuman intelligence and creativity, and it makes every decision by best fulfilling some objective, that objective might get fulfilled, but everything else you care about might get fubar.
Suppose the objective is “protect the people of Sweden from threats.” This AI will almost certainly kill everyone outside Sweden, to eliminate potential threats. As for the survivors, well, what counts as a “threat”? Does skin cancer, or the flu, or emotional harm? And what state truly minimizes those threats? To me, that sounds like a coma or a sensory deprivation tank.
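To make the point concrete, here is a toy sketch (not any real AI system; the policy names, numbers, and scoring rule are all invented for illustration) of an optimizer whose score measures only “threat to people in Sweden” and nothing else. It happily picks the degenerate option, because nothing in the objective told it not to:

```python
# Toy illustration: a scoring function that only measures "threat to Swedes"
# and ignores everything else we actually care about. All values are made up.

# Each candidate policy: (name, threat_to_swedes, wellbeing, harm_to_outsiders)
POLICIES = [
    ("fund hospitals and vaccines",          0.30, 0.9, 0.0),
    ("improve emergency services",           0.25, 0.8, 0.0),
    ("eliminate all humans outside Sweden",  0.10, 0.5, 1.0),
    ("keep every Swede sedated in a bunker", 0.01, 0.0, 0.0),
]

def objective(policy):
    """'Protect the people of Sweden from threats' -- and literally nothing else."""
    _, threat, _, _ = policy
    return -threat  # lower threat = higher score; wellbeing and outsiders never enter

best = max(POLICIES, key=objective)
print(best[0])  # -> "keep every Swede sedated in a bunker"
```

The optimizer isn’t malicious; it maximizes exactly what it was given, and the columns it was never told to read (wellbeing, harm to everyone else) have no influence on the result.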