I think the core points to consider here are:
is the AI simply indifferent to us due to misalignment, or actively malicious? The latter seems less probable (malice is a form of alignment, after all!), but if the AI neither needs nor plans to survive its extermination of us, then our odds are much worse and it has far more paths open to it;
does the AI even need to kill everyone? Collapsing civilisation to the point where mounting a coordinated, effective response is impossible might be enough. A few Stone Age survivors won't pose a threat to an army of superintelligent robots; they can be left alone until environmental degradation kills them off anyway (I can't imagine an AI caring much about keeping the Earth unpolluted or its air breathable);
would the AI start a fight it thinks it might lose? That depends on how its reward estimation works: is it one for risky bets that win big, or a careful planner? I suppose the best (albeit somewhat funny and scary to think about) outcome would be an extremely careful AI that might consider killing us, but it's just gotta be sure, man, and it never feels like it has enough of a safety margin to take the plunge. Jokes aside, for any scenario other than "superintelligent AI with powerful nanotech at its fingertips", that caution has to count for something: going to war might have benefits, but it also comes with risks (the toy sketch after this list shows how the same odds can flip the decision depending on risk attitude). And at the very least, the AI needs infrastructure in place to keep maintenance, replacements, energy supply and so on going for its own hardware;
that said, if an AI started developing efficient robotics that revolutionised productivity, would we say no? And if those robots populated every factory and key infrastructure node, would we stop there? And if we tried to design a kill switch, just to be safe, by that point we'd be so dependent on the robots that activating it would be almost as harmful to us as to the AI, and wouldn't a smart enough AI be able to get around it anyway? So honestly, a scenario in which the AI controls just enough to support itself and then decides it can simply collapse our civilisation with a moderately virulent super-plague would eventually be reached no matter what.
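
To make the "risky bettor vs careful planner" point concrete, here's a minimal toy sketch in Python. All the numbers are invented for illustration and the exponential (CARA) utility is just one standard way to model risk aversion; nothing here describes any real system. It shows how the same estimated odds of winning can lead a risk-neutral agent to attack and a sufficiently cautious one to keep waiting.

```python
# Toy sketch (all numbers invented): the same beliefs about the odds of a
# takeover lead to different choices depending on risk attitude.
import math

def utility(x: float, risk_aversion: float) -> float:
    # CARA utility: risk_aversion = 0 means risk-neutral (utility = raw payoff).
    return x if risk_aversion == 0 else 1 - math.exp(-risk_aversion * x)

def expected_utility_of_attack(p_win, win, lose, risk_aversion):
    return (p_win * utility(win, risk_aversion)
            + (1 - p_win) * utility(lose, risk_aversion))

# Hypothetical payoffs: 90% chance the takeover succeeds (+100), 10% it fails
# catastrophically (-50), versus a sure +20 for quietly waiting.
p_win, win, lose, wait = 0.9, 100.0, -50.0, 20.0

for a in (0.0, 0.05, 0.2):  # increasingly cautious planners
    attack = expected_utility_of_attack(p_win, win, lose, a)
    hold = utility(wait, a)
    choice = "attack" if attack > hold else "wait"
    print(f"risk_aversion={a}: attack={attack:.2f}, wait={hold:.2f} -> {choice}")
```

With these made-up numbers the risk-neutral agent attacks, while even a mildly risk-averse one keeps waiting for a better safety margin, which is exactly the "it's just gotta be sure" behaviour described above.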