Even your clarification seems too anthropomorphic to me.
AIs don’t turn evil, but I don’t think they deviate from their programming either. Their programming deviates from their programmers’ values. (Or, another possibility, their programmers’ values deviate from humanity’s values.)
AIs don’t turn evil, but I don’t think they deviate from their programming either.
They do, if they are self-improving, although I imagine you could collapse “programming” and “meta-programming”, in which case an AI would only partially deviate. The point is that you can’t expect things to turn out so simply when talking about a runaway AI.
AIs don’t turn evil, but I don’t think they deviate from their programming either. Their programming deviates from their programmers’ values. (Or, another possibility, their programmers’ values deviate from humanity’s values.)
Programming != intended programming.