When people talk about p(doom) they generally mean the extinction risk directly from AI going rogue. The way I see it, that extinction-level risk comes mostly from self-replicating AI. An AI that can design and build silicon chips (or whatever equivalent) can also build guns, but the reverse doesn't hold: an AI designed to operate a gun isn't any more likely to be good at building silicon chips, so military AI doesn't particularly add to the risk that matters.
I do worry that AI in direct control of nuclear weapons would be an extinction risk, but for standard software engineering reasons (all software is terrible), not for AI-safety reasons. The good news is that I don't see any compelling reason to put nuclear weapons directly in the hands of AI. The practical nuclear deterrent is submarines, and they don't need particularly fast reactions to be effective.