All right. I certainly agree with you that talking about “morality” or “friendliness” without additional clarifications leads most people to conclusions that have very little to do with safe AI design. Then again, if we’re talking about self-improving AIs with superhuman intelligence (as many people on this site are) I think the same is true of talking about “safety.”