I think that framing the issue of AI safety in terms of “morality” or “friendliness” is a form of misleading anthropomorphization. Morality and friendliness are specific traits of human psychology which won’t necessarily generalize well to artificial agents (even attempts to generalize them to non-human animals are often far-fetched). I think that AI safety would probably be best dealt with in the framework of safety engineering.
All right. I certainly agree with you that talking about “morality” or “friendliness” without additional clarifications leads most people to conclusions that have very little to do with safe AI design. Then again, if we’re talking about self-improving AIs with superhuman intelligence (as many people on this site are) I think the same is true of talking about “safety.”