Not only

If someone doesn’t believe in God, they’re unlikely to spend their career studying arcane arguments for and against God’s existence. So most people who specialize in this topic are theists, but nearly all of them were theists before they knew the arguments.
but also
If someone doesn’t believe in UFAI, they’re unlikely to spend their career studying arcane arguments about AGI impact. So most people who specialize in this topic are UFAI believers, but nearly all of them were UFAI believers before they knew the arguments.
Thus I do not think you should rule out the opinions of the large community of AI experts who do not specialize in AGI impact.
This is a false analogy. You can be a believer in God at five years old, before you’ve read any relevant arguments, thanks to the childhood indoctrination that happens in home after home. You might even believe in income redistribution at five if your parents tell you it’s the right thing to do. I’m pretty sure nobody teaches their children about UFAI that way. You’d have to know the arguments for or against UFAI to even know what that means.
You’d have to know the arguments for or against UFAI to even know what that means.
You just have to watch the Terminator movies, or The Matrix, or read almost any science fiction with robots in it. The UFness of AI is a default assumption in popular culture.
It’s more complicated than that. We use (relatively incompetent) AIs all over the place, and there is no public outcry, even as we develop combat AI for our UAVs and ground-based combat robots, most likely because everyone thinks of AIs as mere idiot-savant servants or computer programs. Few people think much about the distinction between specialized AIs and general AI, probably because we don’t actually have any general AI, though no doubt they understand that the simpler AIs “can’t become self-aware”.
People dangerously anthropomorphize AI, expecting it by default to assign huge value to human life (huge negative value in the case of a “rogue AI”), the common failure mode being an immediate, incompetent homicidal rampage by a machine plagued by various human failings. Even general AIs are portrayed as inferior to humans in several respects.
Overall, there is no general awareness that a non-friendly general AI might cause the total extinction of human life out of sheer indifference rather than malice.
The UFness of AI is a default assumption in popular culture.
This is true. On the other hand, the default is for the AI to be both unfriendly and stupid. Notice, for example, the complete inability of the Matrix overlords to make their Agents hit anything they’re shooting at :-D