Why are you confident that an AI that we do develop will not have these traits?
For the same reason a jet engine doesn’t have comfy chairs: with all machines, you develop the core physical and mathematical principles first, and then add human comforts.
The core mathematical and physical principles behind AI are believed, not without reason, to be efficient cross-domain optimization. There is no reason for an arbitrarily-developed Really Powerful Optimization Process to have anything in its utility function dealing with human morality; in order for it to be so, you need your AI developers to be deliberately aiming at Friendly AI, and they need to actually know something about how to do it.
And then, if they don’t know enough, you need to get very, very, very lucky.
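To see why optimization power by itself carries no morality, here is a toy sketch (hypothetical Python, not anyone’s actual design; the objective and numbers are made up): a bare hill-climber maximizes exactly the utility function it is handed, and human welfare appears nowhere in its behavior unless a developer deliberately writes it in.

```python
import random

def hill_climb(utility, state, steps=1000):
    # Greedy local search: keep any random perturbation that raises utility.
    for _ in range(steps):
        candidate = state + random.uniform(-1.0, 1.0)
        if utility(candidate) > utility(state):
            state = candidate
    return state

# The utility function is the whole of what the optimizer "cares about".
# A morality term exists only if someone explicitly puts one here.
paperclips = lambda x: -(x - 42.0) ** 2  # peak at x = 42; nothing else matters
print(hill_climb(paperclips, state=0.0))  # converges near 42
```

Swap in any objective you like; the search loop is indifferent to what it is optimizing for.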
It’s an open question whether we could construct a utility function that is, in the ultimate analysis, Safe without being Fun.
Personally, I’m almost hoping the answer is no. I’d love to see the faces of all the world’s Very Serious People as we ever-so-seriously explain that if they don’t want to be killed to the last human being by a horrible superintelligent monster, they’re going to need to accept Fun as their lord and savior ;-).
That’s what happens when Friendly is used to mean both Fun and Safe.
Early jets didn’t have comfy chairs, but they did have ejector seats. Safety was a concern.
If AI researchers feel their AI might kill them, they will have every motivation to build in safety features.
That has nothing to do with making an AI Your Plastic Pal Who’s Fun To Be With.
Almost everything about FAI is an open question. What do you get if you multiply a bunch of open questions together?
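(To make the multiplication concrete, with toy numbers of my own invention: if ten independent open questions each had even odds of resolving favorably, the chance of all of them coming out right would be 0.5^10 = 1/1024, roughly 0.1%. The point stands however you set the numbers: probabilities multiply, and they only go down.)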