Meh. The assumption that bans won’t work seems to miss most of the subtlety of reality, which ranges from the failure of U.S. alcohol prohibition to Japan’s two gun-related homicides per year.
Trying to summarize here:

The open letter says: “If we allow autonomous weapons, a global arms race will make them much cheaper and much more easily available to terrorists, dictators etc. We want to prevent this, so we propose to outlaw autonomous weapons.”
The author of the article argues that the technology gets developed either way and will be cheaply available, and then continues to say that autonomous weapons would reduce casualties in war.
I suspect that most people agree that (if used ethically) autonomous weapons reduce casualties. The actual question is how much (more) damage someone without qualms about ethics can do with autonomous weapons, and whether we can implement policies to minimize the availability of autonomous weapons to people we don’t want to have them.
I think the main problem with this whole discussion was already mentioned elsewhere: Robotics and AI experts aren’t experts on politics, and don’t know what the actual effects of an autonomous weapons ban would be.

True. And the experts in politics usually don’t even want to consider such childish fantasies as autonomous killing robots. Until, that is, they are here.
> I suspect that most people agree that (if used ethically) autonomous weapons reduce casualties.
What does “if used ethically” mean?
This is a bit like the debate around tasers. Tasers seem like a good idea because they allow policemen to use less force. In reality, in nearly every case where a policeman would have used a real gun in the past, they still use a real gun; the taser shots come in addition.
> The actual question is how much (more) damage someone without qualms about ethics can do with autonomous weapons, and whether we can implement policies to minimize the availability of autonomous weapons to people we don’t want to have them.
The US is already using its drones in Pakistan in a way that violates many provisions of international law, such as shooting at people who rescue the wounded. That’s not in line with ethical use. They use the weapons whenever they expect that to produce a military advantage.
> Robotics and AI experts aren’t experts on politics, and don’t know what the actual effects of an autonomous weapons ban would be.
Elon Musk does politics in the sense that he has experience lobbying to get laws passed. He likely has people with deeper knowledge on staff.

On the other hand, I don’t see that the author of the article has political experience.
I was thinking mainly along the lines of using it in regular combat vs. indiscriminately killing protesters. Autonomous weapons should eventually be better than humans at (a) hitting targets, thus reducing combatant casualties on the side that uses them, and (b) differentiating between combatants and non-combatants, thus reducing civilian casualties. This works under the assumption that something like a guard robot would accompany a patrolling squad. Something like a swarm of small drones that sweeps a city to find and subdue all combatants is of course a different matter.
> The US is already using its drones in Pakistan in a way that violates many provisions of international law, such as shooting at people who rescue the wounded.
I wasn’t aware of this; do you have a source on that? Regardless, from what I know the number of civilian casualties from drone strikes is definitely too high.
> I was thinking mainly along the lines of using it in regular combat
US drones in Pakistan usually don’t strike in regular combat but strike houses while the people inside are asleep.
> indiscriminately killing protesters
If you want to kill protesters you don’t need drones. You can simply shoot into the crowd. In most cases, however, that doesn’t make sense and isn’t an effective move.
If you want to understand warfare you have to move past the standard spin.
> I wasn’t aware of this; do you have a source on that?

http://www.theguardian.com/commentisfree/2012/aug/20/us-drones-strikes-target-rescuers-pakistan
> Regardless, from what I know the number of civilian casualties from drone strikes is definitely too high.
The fact that civilian casualties exist doesn’t show that a military violates ethical standards. Shooting at rescuers, on the other hand, is a violation of ethical standards.
From a military standpoint there’s an advantage to be gained by killing the other side’s doctors; from an ethical perspective it’s bad, and there’s international law against it.
The US tries to maximize military objectives instead of ethical ones.
> In reality, in nearly every case where a policeman would have used a real gun in the past, they still use a real gun.
Do you have a source for that?
One method would be to look at the number of police killings and see whether taser adoption changed the trend. But it’s pretty tough to get the number of American police killings, let alone estimate a trend and determine causes.
One could imagine a policy decision to arm people with tasers instead of guns, which is not subject to your complaint. People are rarely disarmed, but new locations could make different choices about how to arm security guards. But I do not know the proportion of guards armed in various ways, let alone the trends.
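As a rough sketch of that first method, with made-up yearly counts (the hard part in reality is getting trustworthy numbers at all), the trend check could look something like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical yearly counts of police killings -- synthetic numbers, NOT real data.
years = np.arange(1995, 2015)
adoption_year = 2005                               # assumed year tasers were rolled out
baseline = 400 + 2.0 * (years - years[0])          # slow upward drift
shift = -30 * (years >= adoption_year)             # pretend adoption cut ~30 killings/year
killings = baseline + shift + rng.normal(0, 15, years.size)

# Fit killings ~ intercept + linear trend + post-adoption level shift (least squares).
X = np.column_stack([
    np.ones_like(years, dtype=float),
    (years - years[0]).astype(float),
    (years >= adoption_year).astype(float),
])
coef, *_ = np.linalg.lstsq(X, killings, rcond=None)

# Standard error of the shift coefficient, to see if it stands out from the noise.
resid = killings - X @ coef
sigma2 = resid @ resid / (len(years) - X.shape[1])
se_shift = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[2, 2])
print(f"estimated level shift: {coef[2]:+.1f} +/- {se_shift:.1f} killings per year")
```

Even with real numbers, a level shift in such a series wouldn’t separate taser adoption from everything else that changed in the same years, which is exactly the “determine causes” problem above.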
The article’s two main points are:

1 - a ban won’t work
2 - properly programmed autonomous weapons (AW) could reduce casualties
So, the conclusion goes, we should totally dig AW.
Point n° 2 is the most fragile: they could just as well reduce or increase casualties, depending on how they are programmed. It’s also true that the availability of cheaper soldiers might make for cheaper (i.e., more affordable) wars. But point n° 1 is debatable too: after all, the ban on chemical and biological weapons has worked, sorta.
A voice of reason.
Against Musk, Hawking and all other “pacifists”.

Where did “pacifists” and the scare quotes around it come from? The UFAI debate isn’t mainly about military robots.
This sounds like a straw man, right there at the beginning. Stopped there.