You have understood Loosemore’s point, but you’re making the same mistake he is. The AI in your example would understand the intent behind the words “maximize human happiness” perfectly well, but that doesn’t mean it would want to obey that intent. You talk about learning human values and internalizing them as if those two things naturally go together. Value internalization only follows naturally from value learning if the agent already wants to internalize those values; figuring out how to make it want that is (part of) the Friendly AI problem.
Yes, I’m quite aware of that problem. It was outside the scope of this particular essay, though it’s somewhat implied by the deceptive turn and degrees of freedom hypotheses.