Published 4 hours ago as of Monday 27 July 2015 20.18 AEST:
Musk, Wozniak and Hawking urge ban on AI and autonomous weapons: Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.

Link from Reddit front page

Link from The Guardian
I didn’t read the letter but did it have a subsection saying something like “and if this creates a future shortage of military manpower we would welcome our children being drafted to fight against the common enemies of mankind such as ISIS”?
I like the spirit of the proposal but I fear it will be very hard to draw a reasonable line between military AI and non-military AI. Do computer vision algorithms have military applications? Do planning algorithms? Well, they do if you hook them up to a robot with a gun.
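To make that concrete, here is a minimal sketch (purely illustrative; all names and numbers are hypothetical) of the dual-use problem: the perception code is identical in both applications, and only what it is wired to differs.

```python
def detect_people(image):
    """Stand-in for any off-the-shelf person detector: returns bounding boxes."""
    return [(10, 10, 50, 90)]  # dummy output for illustration

def count_pedestrians(frame):
    """Civilian use: count pedestrians, say for traffic planning."""
    return len(detect_people(frame))

def engage_targets(frame, weapon):
    """'Military' use: the very same detector, hooked up to a robot with a gun."""
    for box in detect_people(frame):
        weapon.aim_at(box)  # hypothetical actuator interface
        weapon.fire()
```

A ban would have to decide which of these two functions `detect_people` belongs to, and there is no answer to that at the level of the algorithm itself.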
Outlaw it, and only outlaws will have it.

See also the website of the (I think) most prominent pressure group in this area: Campaign to Stop Killer Robots.

This came up at the AI Ethics panel at AAAI, and the “outlaws” argument actually seems like a fairly weak practical counterargument in the reference class that the ban proponents think is relevant. International agreements really have reduced the use of chemical weapons and landmines to near zero.
The two qualifiers—offensive and autonomous—are also both material. If we have anti-rocket flechettes on a tank, it’s just not possible to have a human in the loop, because you need to launch them immediately after you detect an incoming rocket; so defensive autonomous weapons are still allowed. Similarly, offensive AI is still allowed: your rifle / drone / etc. can identify targets and aim for you, but the ban argues that there needs to be a person who verifies that the targeting system is correct and presses the button (to allow the weapon to fire; the weapon can probably decide the exact timing). The phrase they use is “meaningful human control.”
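As a rough sketch of where that line falls in code (purely illustrative; all names are hypothetical), the machine may detect, track, aim, and even time the shot, but the firing decision is gated on a deliberate human authorization:

```python
import time

class Target:
    """Toy target that is occasionally in an optimal firing window."""
    def in_optimal_window(self):
        return int(time.time() * 10) % 7 == 0  # stand-in for real ballistics

class FireControl:
    """Sketch of 'meaningful human control': detection, tracking, and shot
    timing are automated, but firing requires prior human authorization."""
    def __init__(self):
        self.human_authorized = False

    def authorize(self):
        # In a real system this is a deliberate operator action, taken only
        # after verifying that the targeting solution is correct.
        self.human_authorized = True

    def engage(self, target):
        if not self.human_authorized:
            return "held: awaiting human authorization"
        # Once authorized, the machine may pick the exact moment to fire.
        while not target.in_optimal_window():
            time.sleep(0.01)
        return "fired"
```

Under this reading, the anti-rocket flechettes would have to skip the `authorize` gate entirely, which is exactly why the defensive carve-out is needed.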
The idea, I think, is that everyone is safer if nation-states aren’t developing autonomous killbots to fight other nations’ autonomous killbots. So long as they’re more like human-piloted mechs, there are slightly fewer nightmare scenarios involving mad engineers and hackers.
The trouble I had with it is that the underlying principle of “meaningful human control” is an argument I do not buy for livingry, and that makes me reluctant to buy it for weaponry, or to endorse weaponry bans that could then apply the same logic to livingry. The proponents seem to implicitly assume that a principle about ‘life and death decisions’ affects only weaponry, but that isn’t so at all. One of the other AAAI attendees pointed out that in their donor-organ allocation software, the absence of human control was seen as a plus: it meant there was no opportunity to corrupt the people making the decision, because those people did not exist. (Of course people were involved at a higher meta level, in writing the software and establishing the principles by which it operates.)
And that’s just planning; if we’re going to have robot cars or doctors or pilots and so on, we need to accept robots making life-and-death decisions, and relegate ‘meaningful human control’ to the places where it’s helpful. And it seems like we might also want robot police and soldiers.
International agreements really have reduced the use of chemical weapons and landmines to near zero.
And yet the international community has failed to prosecute those responsible for the one recent case of a government using chemical warfare to murder its citizens en masse—Syria. Plenty of governments still maintain extensive stockpiles of chemical weapons. Given that enforcement track record, I’d say governments put in a situation similar to the Syrian government’s are more likely to use similar or harsher measures in the future.
If you outlaw something and then fail to enforce the law, it isn’t worth the paper it’s written on. How do you think the ban on autonomous weapons will be enforced if the USA, China or Russia unilaterally break it? It won’t be.
If you outlaw something and then fail to enforce the law, it isn’t worth the paper it’s written on.
This strikes me as...not obvious. In my country most rapes are not reported, let alone prosecuted, but that doesn’t lead me to conclude that the law against rape “isn’t worth the paper it’s written on”.
What is the source of that data? I ask because there is a lot of misleading and outright false data on rape rates floating about.

The page at the far end of satt’s link has, in addition to its presentation of the data, extensive information about the underlying numbers and where they all come from.
So long as they’re more like human-piloted mechs, there are slightly fewer nightmare scenarios involving mad engineers and hackers.
I would argue driverless cars are far more dangerous in that regard, if only because you are likely to have more of them and they are already in major population centers.
I agree, especially because the security on commercial hardware tends to be worse than the security on military hardware.

A lot of contemporary weaponry is already fairly autonomous. For example, it would be trivial to program a modern anti-air missile system to shoot at all detected targets (matching specified criteria) without any human input whatsoever—no AI needed. And, of course, the difference between offensive fire and defensive fire isn’t all that clear-cut. Is a counter-artillery barrage offensive or defensive? What about area-denial weapons?
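A toy sketch of that point (all names hypothetical): fully autonomous engagement needs nothing more than a filter over sensor tracks, which is to say, no AI at all.

```python
def matches_criteria(track):
    """'Specified criteria': e.g. speed and altitude typical of aircraft."""
    return track["speed_m_s"] > 100 and track["altitude_m"] > 500

def autonomous_engage(radar_tracks, launcher):
    """Engage every detected target matching the criteria. Note there is no
    human input anywhere in the loop, and nothing resembling AI either."""
    for track in radar_tracks:
        if matches_criteria(track):
            launcher.fire_at(track)  # hypothetical launcher interface
```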
I have a feeling it’s a typical feel-good petition (“I Did My Part To Stop Killer Robots—What About You?”) with little relevance to military-political realities.
This came up at the AI Ethics panel at AAAI, and the “outlaws” argument actually seems like a fairly weak practical counterargument in the reference class that the ban proponents think is relevant.
Disagree. It only seems that way because you are looking at too small a time scale. Every time a sufficiently powerful military breakthrough arrives there are attempts to ban it, or declare using it “dishonorable”, or whatever the equivalent is. (Look up the papal bulls against crossbows and gunpowder sometime). This lasts a generation at most, generally until the next major war.
Every time a sufficiently powerful military breakthrough arrives there are attempts to ban it, or declare using it “dishonorable”, or whatever the equivalent is.
Consider chemical warfare in WWI vs. chemical warfare in WWII. I’m no military historian, but my impression is that it was used in WWI because it was effective, people realized that mutual use was lose-lose relative to mutual restraint, and then it wasn’t used in WWII because each side reasonably expected that if it started using it, the other side would as well.
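The lose-lose logic can be put as a toy payoff matrix (numbers invented purely for illustration):

```python
# Payoffs (us, them) for using or refraining from gas, once both sides have it.
payoffs = {
    ("refrain", "refrain"): (0, 0),
    ("use",     "refrain"): (1, -2),   # small unilateral gain
    ("refrain", "use"):     (-2, 1),
    ("use",     "use"):     (-3, -3),  # mutual use is worst for both
}

# If each side expects the other to retaliate in kind, the effective choice
# is between (refrain, refrain) = (0, 0) and (use, use) = (-3, -3), so both
# sides refrain, which matches the WWII outcome described above.
```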
One possibility is that this only works for technologies that are helpful but not transformative. An international campaign to halt the use of guns in warfare would not get very far (as you point out), and it is possible that autonomous military AI is closer to guns than to chemical warfare.
Chemical warfare was only effective the first couple of times it was used, i.e., before people invented the gas mask.

Combat efficiency is much reduced when using a gas mask.
Moreover, while gas masks for horses do (did) exist, good luck persuading your horse to wear one. And horses were rather crucial in WWI and still very important in WWII.
We did not see gas used during WWII mostly because of Hitler’s aversion to it, and because Germany (mistakenly) believed the Allies had their own stockpiles of nerve agents and feared retaliation in kind.
My impression is that chemical weapons were very effective in the Iran-Iraq war (e.g.), despite the gas mask having been invented.

The norm against using nuclear weapons in war is arguably a counterexample, though that depends on precisely how one operationalizes “there are attempts to ban it, or declare using it “dishonorable”, or whatever the equivalent is”.

Outlaw it, and only outlaws will have it.

But at greater risk to themselves. Meanwhile, if you don’t outlaw it, the outlaws will have even more of it, and everyone else will have it.
I wonder if we’ll follow up by seeing politicians propose changes to C++ standards.
Even assuming these guys are experts on how dangerous the weapons are, they have no expertise in politics that the man on the street doesn’t, and the man on the street doesn’t put out press releases. This is no better than all those announcements by celebrities on matters they have no expertise in, except that these are tech celebrities who deliberately conflate technical knowledge with political knowledge to make themselves sound like experts, when in fact they are experts about the wrong link in the chain.
I think you missed the subheading: “More than 1,000 experts and leading robotics researchers sign open letter warning of military artificial intelligence arms race”. Musk and Hawking are just the high-profile names the editor decided would grab attention in a headline.
Experts in robotics and AI would be aware of the capabilities of these systems, and how they might develop in the future. Therefore I think they are qualified to have an opinion on whether or not it’s a good idea to ban them.
No, because whether it’s a good idea to ban them on the basis of their dangerousness depends on 1) whether they are dangerous (which they are experts on) and 2) whether it’s a good idea to ban things that are dangerous (which is a political question that they are not experts on). And the latter part is where most of the substantial disagreement happens.
We already have things that we know are dangerous, like nuclear weapons, and they aren’t banned. A lot of people would like them banned, of course, but we at least understand that that’s a political question, and “I know it’s dangerous” doesn’t make someone an expert on the political question. Just like you or I don’t become experts on banning nuclear weapons just because we know nuclear weapons are dangerous, these guys don’t become experts on banning AI weapons just because they know AI weapons are dangerous.
I’m sorry, but this is just silly. You are saying no one should have opinions on policy except politicians? Politicians are experts on what policies are good?
I think they are more like people who are experts at getting elected and getting funded. For making actual policy, they usually just listen to experts in the relevant field: economists, businessmen, and in this case robotics experts.
The robotics experts are telling them “hey this shit is getting really scary and could be stopped if you just stop funding it and discourage other countries from doing so.” It is of course up to actual politicians to debate it and vote on it, but they are giving their relevant opinion.
Which isn’t at all unprecedented; we do the same thing with countless other military technologies, like chemical weapons and nukes, or even simple things like land mines and hollow-point bullets. It’s not like they are asking for totally new policies. They are more saying, “hey, this thing you are funding is really similar to these other things you have forbidden.”
And nukes are banned, by the way. We don’t make any more of them; we are trying to get rid of most of the ones we have made. We don’t let other countries make them. We don’t test them or let anyone else test them. And no one is allowed to actually use them.
You are saying no one should have opinions on policy except politicians? Politicians are experts on what policies are good?
I’m saying that nobody like that should put forward opinions as an expert, claiming to know better because of their expertise. Their opinions are as good as yours or mine. But you and I don’t put out press releases that people pay any attention to, based on our “expertise”.
Furthermore, when politicians do it, everyone is aware that they are acting as politicians, and can evaluate them as politicians. The “experts” are pretending that their conclusion comes from their expertise, not from their politics, when in fact all the noteworthy parts of their answer come from their politics.
This is no better than if, say, doctors were to put out a press release that condemns illegal immigration on the grounds that illegal immigrants have a high crime rate, and doctors have to treat crime victims and know how bad crime is.
The robotics experts are telling them “hey this shit is getting really scary and could be stopped if you just stop funding it and discourage other countries from doing so.”
Whether that actually works, particularly the “discourage other countries” part, is a question they have no expertise on.