This came up at the AI Ethics panel at AAAI, and the “outlaws” argument actually seems like a fairly weak practical counterargument in the reference class that the ban proponents think is relevant. International agreements really have reduced the use of chemical weapons and landmines to near zero.
The two qualifiers—offensive and autonomous—are also both material. If we have anti-rocket flechettes on a tank, it’s just not possible to have a human in the loop, because you need to launch them immediately after you detect an incoming rocket, so defensive autonomous weapons are allowed. Similarly, offensive AI is allowed: your rifle / drone / etc. can identify targets and aim for you, but the ban argues that a person needs to verify that the targeting solution is correct and press the button (to allow the weapon to fire; the weapon can probably decide the exact timing). The phrase they use is “meaningful human control.”
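In code terms, the distinction they’re after might look something like this toy Python sketch (every name here is invented for illustration; no claim that any real fire-control system works this way): detection, identification, and aiming are automated, but weapon release waits for a person.

```python
# Hypothetical sketch of "meaningful human control": the machine may detect,
# identify, and aim, but a human must authorize each engagement before the
# weapon may fire. All names and values are made up for illustration.

from dataclasses import dataclass

@dataclass
class Target:
    track_id: int
    classification: str   # produced by the (permitted) targeting AI
    confidence: float

def request_human_authorization(target: Target) -> bool:
    """Stand-in for the operator verifying the targeting solution.

    Under the proposed ban this step is mandatory for offensive fire;
    here it is just a console prompt.
    """
    answer = input(f"Engage track {target.track_id} "
                   f"({target.classification}, p={target.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def fire_when_ready(target: Target) -> None:
    # The machine may still pick the exact release moment; only the
    # decision to engage required a person.
    print(f"Firing on track {target.track_id} at a machine-chosen moment.")

def engagement_loop(detected_targets: list[Target]) -> None:
    for target in detected_targets:
        # Automated targeting is allowed; weapon release waits for the button.
        if request_human_authorization(target):
            fire_when_ready(target)

engagement_loop([Target(track_id=7, classification="hostile drone", confidence=0.92)])
```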
The idea, I think, is that everyone is safer if nation-states aren’t developing autonomous killbots to fight other nations’ autonomous killbots. So long as they’re more like human-piloted mechs, there are slightly fewer nightmare scenarios involving mad engineers and hackers.
The trouble I had with it is that the underlying principle of “meaningful human control” is an argument I do not buy for livingry, which makes me reluctant to buy it for weaponry, or to endorse weaponry bans that could then apply the same logic to livingry. It seems to me that the proponents implicitly assume that a principle about ‘life and death decisions’ affects only weaponry, but that is not so at all: one of the other AAAI attendees pointed out that in their donor-organ allocation software, the absence of human control was seen as a plus, because there was no opportunity to corrupt the people making the decision, since those people did not exist. (Of course people were involved at a higher meta level, in writing the software and establishing the principles by which the software operates.)
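A minimal sketch of that organ-allocation point, assuming entirely invented rules (real allocation algorithms are far more involved): the “human control” lives at the meta level, in the fixed rules, so there is no official to bribe at decision time.

```python
# Toy illustration only: deterministic, rule-based allocation with no human
# discretion at decision time. Every field and rule here is hypothetical.

def allocation_priority(patient: dict) -> tuple:
    # Deterministic scoring: the same inputs always yield the same ranking.
    return (
        -patient["medical_urgency"],    # sicker patients first
        -patient["days_waiting"],       # then longer waits first
        patient["registration_id"],     # stable tie-break, nothing to argue over
    )

def allocate(organ_count: int, waitlist: list[dict]) -> list[dict]:
    """Rank the waitlist by the fixed rules and take the top candidates."""
    return sorted(waitlist, key=allocation_priority)[:organ_count]

waitlist = [
    {"registration_id": 101, "medical_urgency": 3, "days_waiting": 40},
    {"registration_id": 102, "medical_urgency": 5, "days_waiting": 10},
]
print([p["registration_id"] for p in allocate(1, waitlist)])  # -> [102]
```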
And that’s just allocation software; if we’re going to have robot cars or doctors or pilots and so on, we need to accept robots making life-and-death decisions and reserve ‘meaningful human control’ for the places where it’s helpful. And it seems like we might also want robot police and soldiers.
International agreements really have reduced the use of chemical weapons and landmines to near zero.
And yet the international community has failed to prosecute those responsible for the one recent case of a government using chemical weapons to murder its citizens en masse—Syria. Plenty of governments still maintain extensive stockpiles of chemical weapons. Given that enforcement track record, I’d say governments put in a situation similar to the Syrian government’s are more likely to use similar or harsher measures in the future.
If you outlaw something and then fail to enforce the law, it isn’t worth the paper it’s written on. How do you think the ban on autonomous weapons will be enforced if the USA, China, or Russia unilaterally breaks it? It won’t be.
If you outlaw something and then fail to enforce the law, it isn’t worth the paper it’s written on.
This strikes me as...not obvious. In my country most rapes are not reported, let alone prosecuted, but that doesn’t lead me to conclude that the law against rape “isn’t worth the paper it’s written on”.
What is the source of that data? I ask because there is a lot of misleading and outright false data on rape rates floating about.
The page at the far end of satt’s link has, in addition to its presentation of the data, extensive information about the underlying numbers and where they all come from.
So long as they’re more like human-piloted mechs, there are slightly fewer nightmare scenarios involving mad engineers and hackers.
I would argue driverless cars are far more dangerous in that regard, if only because you are likely to have more of them and they are already in major population centers.
I agree, especially because the security on commercial hardware tends to be worse than the security on military hardware.
A lot of contemporary weaponry is already fairly autonomous. For example, it would be trivial to program a modern anti-air missile system to shoot at all detected targets (matching specified criteria) without any human input whatsoever—no AI needed. And, of course, the difference between offensive fire and defensive fire isn’t all that clear-cut. Is a counter-artillery barrage offensive or defensive? What about area-denial weapons?
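As a toy illustration of how little “intelligence” that takes (everything below is made up; this is obviously not how a real air-defense system is programmed): a fixed filter over radar tracks is enough to fire with nobody in the loop.

```python
# Hypothetical sketch: engaging every detected target that matches fixed
# criteria needs no AI at all, just a lookup over track parameters.

def matches_criteria(track: dict) -> bool:
    # e.g. anything fast, low, and inbound gets engaged - a rule, not AI
    return (track["speed_m_s"] > 200
            and track["altitude_m"] < 5000
            and track["closing"])

def autonomous_engage(tracks: list[dict]) -> list[int]:
    """Return the IDs of every track the system would fire on, unprompted."""
    return [t["id"] for t in tracks if matches_criteria(t)]

tracks = [
    {"id": 1, "speed_m_s": 250, "altitude_m": 1200, "closing": True},
    {"id": 2, "speed_m_s": 80,  "altitude_m": 900,  "closing": True},
]
print(autonomous_engage(tracks))  # -> [1]
```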
I have a feeling it’s a typical feel-good petition (“I Did My Part To Stop Killer Robots—What About You?”) with little relevance to military-political realities.
This came up at the AI Ethics panel at AAAI, and the “outlaws” argument actually seems like a fairly weak practical counterargument in the reference class that the ban proponents think is relevant.
Disagree. It only seems that way because you are looking at too small a time scale. Every time a sufficiently powerful military breakthrough arrives there are attempts to ban it, or declare using it “dishonorable”, or whatever the equivalent is. (Look up the papal bulls against crossbows and gunpowder sometime.) This lasts a generation at most, generally until the next major war.
Every time a sufficiently powerful military breakthrough arrives there are attempts to ban it, or declare using it “dishonorable”, or whatever the equivalent is.
Consider chemical warfare in WWI vs. chemical warfare in WWII. I’m no military historian, but my impression is that it was used in WWI because it was effective, people realized that it was lose-lose relative to not using it, and then it wasn’t used in WWII because both sides reasonably expected that if they started using it, the other side would as well.
One possibility is that this only works for technologies that are helpful but not transformative. An international campaign to halt the use of guns in warfare would not get very far (as you point out), and it is possible that autonomous military AI is closer to guns than it is to chemical warfare.
Chemical warfare was only effective the first couple of times it was used, i.e., before people invented the gas mask.
Combat efficiency is much reduced when using a gas mask.
Moreover, while gas masks for horses do (did) exist, good luck persuading your horse to wear one. And horses were crucial in WWI and still very important in WWII.
We did not see gas used during WWII mostly because of Hitler’s aversion to it and Germany’s (mistaken) belief that the Allies had their own stockpiles of nerve agents and would retaliate with them.
My impression is that chemical weapons were very effective in the Iran-Iraq war (e.g.), despite the gas mask having been invented.
The norm against using nuclear weapons in war is arguably a counterexample, though that depends on precisely how one operationalizes “there are attempts to ban it, or declare using it ‘dishonorable’, or whatever the equivalent is”.
See also the website of the (I think) most prominent pressure group in this area: Campaign to Stop Killer Robots.