Now we know more than nothing about the real-world operational details of AI risks — albeit mostly from banal, everyday AI that we can’t imagine harming us at scale. So maybe that’s exactly what we should try harder to imagine and prevent.
Maybe these solutions will not generalize out of this real-world already-observed AI risk distribution. But even if not, which of these is more dignified?
- Being wiped out in a heartbeat by some nano-Cthulhu in pursuit of an inscrutable goal that nobody genuinely saw coming, or
- Being killed even before that by whatever is the most lethal thing you can imagine evolving from existing ad-click maximizers, bitcoin maximizers, up-vote maximizers (oh, and military drones, those are kind of lethal), etc., because they seemed like too mundane a threat?