Sure, but love and hate are rather specific posits. Empirically, the vast majority of dangerous processes don’t experience them. Empirically, the vast majority of agents don’t experience them. Very plausibly, the vast majority of possible intelligent agents also don’t experience them. “the AI neither loves you, nor hates you” is not saying ‘it’s impossible to program an AI to experience love or hate’; it’s saying that most plausible uFAI disaster scenarios result from AGI disinterest in human well-being rather than from AGI sadism or loathing.