“Because if there isn’t, they’ll dismiss the danger of AI like Erik Sofge already did in an early piece about the movie for Popular Science, and nudge their readers to do so too. And that’d be a shame, wouldn’t it?”
I would much rather see someone dismiss the dangers of AI than misrepresent them by having a movie in which Johnny Depp plays “a seemingly megalomaniacal AI researcher”. This gives the impression that a “mad scientist” type who creates an “evil” AI that takes over the world is what we should worry about. Eliezer’s posts do a great job of explaining the actual dangers of unfriendly AI, more along the lines of “the AI neither loves you, nor hates you, but you are composed of matter it can use for other things”. That is, if we create a powerful AI (or an AI that creates an AI that creates an AI that creates a powerful AI) whose values and morals do not align with what we humans would “want”, it will probably result in something terrible. (And not even in a way that provides us the silver lining of “well, the AIs wiped out humanity, but at least the AI civilization is highly advanced and interesting!” More like: now the entire planet Earth is grey goo/paperclips/whatever.) Or even just the danger of us biological humans losing relevance in a world with superintelligent entities.
While I would love to see a great, well-done, well-thought-out movie about Transhumanism, it seems pretty likely that this movie is just going to make me angry/annoyed. I really hope I am wrong, and that this movie is actually great.
Eliezer’s posts do a great job of explaining the actual dangers of unfriendly AI, more along the lines of “the AI neither loves you, nor hates you, but you are composed of matter it can use for other things”.
I’m not sure that’s true. In the beginning stages, when an AI is vulnerable, it might very well use violence to prevent itself from being destroyed.
Hurricanes act with ‘violence’ in the sense of destructive power, but hurricanes don’t hate people. The idea is that an AGI, like an intelligent hurricane, can be dangerous without bearing any special animosity for humans, indeed without caring or thinking about humans in any way whatsoever.
The idea is that an AGI, like an intelligent hurricane, can be dangerous without bearing any special animosity for humans, indeed without caring or thinking about humans in any way whatsoever.
No. That’s not what he said. There’s a difference between claiming that A can be dangerous without X and describing a scenario in which A is dangerous due to X.
There’s more than one plausible UFAI scenario. We do have discussions about boxing AI and in those cases it’s quite useful to model the AI as trying to act against humans to get out.
There’s a difference between claiming that A can be dangerous without X and describing a scenario in which A is dangerous due to X.
If intelligent hurricanes loved you, they might well avoid destroying you. So it can indeed be said that intelligent hurricanes’ indifference to us is part of what makes them dangerous.
We do have discussions about boxing AI and in those cases it’s quite useful to model the AI as trying to act against humans to get out.
“the AI neither loves you, nor hates you” is compatible with ‘your actions are getting in the way of the AI’s terminal goals’. We don’t need to appeal to interpersonal love and hatred in order to model the fact that a rational agent is competing in a zero-sum game.
We don’t need to appeal to interpersonal love and hatred in order to model the fact that a rational agent is competing in a zero-sum game.
There’s a difference between “need to appeal” and something being a possible explanation.
Sure, but love and hate are rather specific posits. Empirically, the vast majority of dangerous processes don’t experience them. Empirically, the vast majority of agents don’t experience them. Very plausibly, the vast majority of possible intelligent agents also don’t experience them. “the AI neither loves you, nor hates you” is not saying ‘it’s impossible to program an AI to experience love or hate’; it’s saying that most plausible uFAI disaster scenarios result from AGI disinterest in human well-being rather than from AGI sadism or loathing.
Also, competition between humans (with machines as tools) seems far more likely to kill people than a superintelligent runaway. However, it’s (arguably) not so likely to kill everybody. MIRI appears to be focussing on the “killing everybody” case. That is because, according to them, that is a really, really bad outcome.
The idea that losing 99% of humans would be acceptable losses may strike laymen as crazy. However, it might appeal to some of those in the top 1%. People like Peter Thiel, maybe.