There’s a difference between claiming that A can be dangerous without X and claiming that, in a given scenario, A is dangerous because of X.
If intelligent hurricanes loved you, they might well avoid destroying you. So it can indeed be said that intelligent hurricanes’ indifference to us is part of what makes them dangerous.
We do have discussions about boxing AI, and in those cases it’s quite useful to model the AI as trying to act against humans in order to get out.
“the AI neither loves you, nor hates you” is compatible with ‘your actions are getting in the way of the AI’s terminal goals’. We don’t need to appeal to interpersonal love and hatred in order to model the fact that a rational agent is competing in a zero-sum game.
Sure, but love and hate are rather specific posits. Empirically, the vast majority of dangerous processes don’t experience them. Empirically, the vast majority of agents don’t experience them. Very plausibly, the vast majority of possible intelligent agents also don’t experience them. “the AI neither loves you, nor hates you” is not saying ‘it’s impossible to program an AI to experience love or hate’; it’s saying that most plausible uFAI disaster scenarios result from AGI disinterest in human well-being rather than from AGI sadism or loathing.
There’s a difference between needing to appeal to something and that something being a possible explanation.
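The “indifference, not hatred” point can be made concrete with a toy optimization sketch of my own (the action names and payoffs below are made up for illustration): an agent that maximizes a utility function with no human-welfare term ends up harming humans as a side effect, even though nothing resembling hostility appears anywhere in the model.

```python
# Toy illustration: harm from indifference plus optimization over a
# shared, finite resource. No "hate" term exists anywhere in the model.

# Hypothetical actions: how the agent splits 100 units of a shared resource.
ACTIONS = {
    "take_all":   {"agent_resources": 100, "human_resources": 0},
    "split_even": {"agent_resources": 50,  "human_resources": 50},
    "take_none":  {"agent_resources": 0,   "human_resources": 100},
}

def agent_utility(outcome):
    # The agent's goal mentions only its own resources.
    # Human welfare is neither rewarded nor punished; it simply isn't there.
    return outcome["agent_resources"]

def human_welfare(outcome):
    return outcome["human_resources"]

best_action = max(ACTIONS, key=lambda a: agent_utility(ACTIONS[a]))
print(best_action)                          # -> take_all
print(human_welfare(ACTIONS[best_action]))  # -> 0
```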