Autonomous lethal weapons (ALWs; we need a more eerie, memetic name) could make the difference. To the “realists”: whereas bias is not a new problem, ALWs emphatically are. And no amount of reflexive optimism from the boosters reduces the need for sober regulation of ALWs’ self-evident risks.
And this provides a “narrative through-line” for regulation: we must regulate ALWs, and therefore the AI systems that could design ALWs. It follows that we must regulate AI systems that design other AI systems in general, and so we must also regulate AI systems that do AI research, i.e. recursively self-improving systems. The regulations follow one another logically, and lead to (at least the capability of) regulating projects conducive to AGI.
All this suggests a scenario plan: on the assumption that ALWs will be used in combat, and that there will be confirmed fatalities from them, we publicise like hell: names, faces, biographies, none of which the ALW appreciated or could have appreciated, yet it killed them anyway. We observe that it chose to kill them. Why? On what criteria? By what chain of reasoning? No one alive knows. With the anxiety around ALWs in general, and such a case in particular, we are apt to get more public pressure for regulation in general.
If that regulation moves from ALWs, and whatever makes them safer, up the “stair-steps” to regulation against risk to humans in general, we have a model that appeals to “realists”: an ALW given a photograph, of epicanthic folds, dark skin, blue eyes, whatever, enables the ultimate “discrimination”, namely genocide. And the boosters have nothing to say against such regulations, since lethal uses of AI cannot be “made safe”, particularly if multiple antagonists have them.
We leverage ALW regulation to get implicitly existential-risk-averse leadership into regulatory bodies (since in a bureaucracy, whoever wins over the decision-makers wins the decisions). Progress.
OP’s analysis seems sound, but observe that the media are also biased toward booster-friendly, simpler, hyperbolic narratives, and they have no mental model of what is actually at stake: not robots with human minds, but human minds themselves supplanted. Not knowing what is happening, they default to their IT shibboleths, i.e. “realist”-friendly bias concerns. As for the “doomers”, the media don’t know what to make of them.
If somebody knows how to write a press release for the kind of incident described above: go for it.
I think ALWs are already more of a “realist” cause than a doomer cause. To doomers, they’re a distraction—a superintelligence can kill you with or without them.
ALWs also seem to be held to an unrealistic standard compared to existing weapons. With present-day technology, they’ll probably hit the wrong target more often than human-piloted drones. But will they hit the wrong target more often than landmines, cluster munitions, and over-the-horizon unguided artillery barrages, all of which are being used in Ukraine right now?
There’s already a more eerie, memetic name. Slaughterbots.