The behavior of [a] machine is going to depend a great deal on which values or preferences you give it. If you try to give it naive ones, it can get you into trouble.
If I say to you, “Would you please get me some spaghetti?”, you know there are other things in the world that I value besides spaghetti. You know I’m not saying that you should be willing to shred the world to get spaghetti. But if you were to naively code into an extremely intelligent machine, as its only desire, “Get me spaghetti,” it would stop at absolutely nothing to do that.
So the danger is not so much the danger of a machine being evil, like the Terminator. The danger is that we give it a bad set of preferences… that lead to unintended consequences. Or [maybe] it’s built from the ground up with preferences that don’t reflect the preferences of most of humanity — maybe only the preferences that a small group of people cares about.
Spencer Greenberg of Rebellion Research: