it can just be smart in a single dangerous domain.
Possibly, if that domain can overwhelm other areas. But it's still the general intelligences—those capable of using their weather prediction modules to be socially seductive instead—that have the most potential to go wrong.
There are some ways of taking over human society that are much easier than others (though we might not know which are easiest at the moment). A narrow intelligence gets to try one thing, and that has to work, while a general intelligence can search through many different approaches.
Yeah, I agree that a truly general intelligence would be the most powerful thing, if it could exist. But that doesn’t mean it’s the main thing to worry about, because non-general intelligences can be powerful enough to kill everyone, and degrees of power beyond that threshold probably don’t matter as much.
For example, fast uploads aren’t general by your definition, because they’re only good at the same things that humans are good at, but that’s enough to be dangerous. And even a narrow tool AI can be dangerous if the domain is something like designing weapons or viruses or nanotech. Sure, a tool AI is only dangerous in the wrong hands, but it will fall into the wrong hands eventually, if something like FAI doesn’t happen first.
We seem to have drifted into agreement.