Why do people believe that AI is dangerous? What direct evidence is there that this is likely to be the case?
It’s because computers do what you program them to do. If you build an AI with superhuman intelligence and creativity, and it makes decisions by picking whatever best fulfills some objective, that objective might get fulfilled while everything else gets fubar.
Suppose the objective is “protect the people of Sweden from threats.” This AI will almost certainly kill everyone outside Sweden, since anyone alive is a potential threat. As for the survivors, well, what counts as a “threat”? Skin cancer? The flu? Emotional harm? What state truly minimizes all of those? To me that sounds like a coma or a sensory deprivation tank.
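The failure mode above can be shown with a toy sketch (entirely hypothetical, with made-up numbers): an optimizer that only sees a numeric “threat” score will pick whichever policy minimizes the score, with no notion of the values the score was meant to stand in for.

```python
# Hypothetical per-person threat scores under each policy (lower = "safer").
# The scores are invented purely to illustrate a mis-specified objective.
policies = {
    "normal life":            {"accidents": 5, "disease": 4, "emotional harm": 3},
    "house arrest":           {"accidents": 1, "disease": 4, "emotional harm": 8},
    "medically induced coma": {"accidents": 0, "disease": 2, "emotional harm": 0},
}

def total_threat(policy):
    """Sum all threat scores for a policy; this is the only thing the agent sees."""
    return sum(policies[policy].values())

# The optimizer cares only about the score, so it happily picks the coma.
best = min(policies, key=total_threat)
print(best)  # -> medically induced coma
```

The point is not that the math is wrong; the minimization is performed perfectly. The problem is that everything we actually care about was left out of the objective.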
Since we have no AI, we do not have any direct evidence.
The argument, though, goes like this: intelligence is orthogonal to goals (an AI can be arbitrarily capable in pursuit of almost any objective), and a sufficiently advanced AI could self-improve and have an outsized impact on society. But human values are fragile and complex, and if we do not specify the AI’s goals carefully, it could trample all over them uncaringly.
Read Bostrom’s Superintelligence. It summarizes all of the main arguments.
This is one of MIRI’s pivotal papers on the subject.