It’s odd to claim that, contingent on AGI being significantly smarter than us and wanting to kill us, there is no realistic pathway for us to be physically harmed.
Claims of this sort by intelligent, competent people likely reveal that they are passively objecting to the contingencies rather than disputing whether these contingencies would lead to the conclusion.
The quotes you’re responding to here superficially imply “even a smart, malicious AI couldn’t kill us”, but it seems much more likely this is a warped translation of either “AI can’t be smart” or “AI can’t be malicious”.
I imagine there could also be some unexplored assumption, such as “but we are many, and the AI is one” (a strong intuition that the many always defeat the one, which was true for our ancestors), without realizing that “one” superhuman AI could still do a thousand things in parallel and build backups and extra robotic brains.
How is such a failure of imagination possible?