Neither land mines nor pit-traps are “autonomous robotic weapons” of course. But speaking of precedent, there are numerous campaigns to ban land mines (e.g. http://www.icbl.org/), for reasons which are rather similar to those advanced in “The Case against Killer Robots”.
Modern naval mines probably qualify as ‘robotic’ in some sense.
I’m not sure how a heat-seeking missile should be classified; a human decided to fire it, but after that it’s autonomous. How is that different in principle from a robot which is activated by a human and is autonomous afterwards?
And a bullet is out of human control once you’ve fired it. Where do you draw the line?
Probably the most important feature is the extent to which the human activator can predict the actions of the potentially-robotic weapon.
In the case of a gun, you probably know where the bullet will go, and if you don’t, then you probably shouldn’t fire it.
In the case of an autonomous robot, you have no clue what it will do in specific situations, and requiring that you don’t activate it when you can’t predict it means you won’t activate it at all.
Okay, that actually seems like quite a good isolation of the correct empirical cluster. Presumably guided missiles fall under the ‘not allowed’ category there, as you don’t know what path they’ll follow under surprising circumstances.
The proposal under discussion has poor definitions, but “autonomous robotic weapons which can kill a human without an explicit command from a human operator” is a good start.
That’s at least six different grey areas already (autonomous, robotic, weapon, able to kill a human, explicit command, human operator).
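To make that concrete, here is a toy sketch (entirely my own illustration; the `Weapon` class, its field names, and the `banned` predicate are hypothetical, not anything from the proposal) that treats each grey area as a clean boolean, which is exactly what none of them are:

```python
from dataclasses import dataclass

@dataclass
class Weapon:
    """Toy model: one field per grey area in the proposed definition.
    Treating each as a boolean is the point of the illustration; in
    reality every one of these hides a judgment call."""
    autonomous: bool        # a heat-seeking missile? a naval mine?
    robotic: bool           # does a pressure-triggered mine count?
    is_weapon: bool         # dual-use systems blur this
    can_kill_human: bool    # 'can kill' vs. 'designed to kill'
    explicit_command: bool  # does an accidental discharge count?
    human_operator: bool    # who 'operates' a mine laid years ago?

def banned(w: Weapon) -> bool:
    # "autonomous robotic weapons which can kill a human without
    # an explicit command from a human operator"
    return (w.autonomous and w.robotic and w.is_weapon
            and w.can_kill_human
            and not (w.explicit_command and w.human_operator))

# A dropped pistol that discharges gave no explicit command, yet it
# escapes the ban only because 'robotic' happens to be answered 'no'.
dropped_pistol = Weapon(autonomous=False, robotic=False, is_weapon=True,
                        can_kill_human=True, explicit_command=False,
                        human_operator=True)
print(banned(dropped_pistol))  # False
```

Flip any single field and the verdict can change, which is another way of saying the definition delegates all the real work to six unsettled judgment calls.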
My guess is that bullets fired from current-generation conventional firearms aren’t robotic, and also pass the explicit-command test. That is despite the fact that many firearms discharge unintentionally when dropped; a strict reading would have them fail that test.
Finally, the entire legislation could be replaced by legislation banning warfare in general, and it would be equally effective.
Is this true? My impression is that almost all modern firearms are designed to make this extremely unlikely.
Extremely unlikely, with a properly designed, maintained, and controlled firearm. A worn-out machine-pistol knockoff can have a sear that consistently drops out when struck in the right spot. There’s a continuum there, and a strict enough reading of ‘explicit command from a human operator’ would mean that anything that can be fired accidentally crosses the line.
For that matter, runaway fire is a common enough occurrence in belt-fed firearms that learning how to minimize its effects is part of learning to use the weapon. (Heat in the chamber is enough to cause the powder to ignite without the primer being struck by the firing pin; the weapon continues to fire until it runs out of ammunition.)
Exactly.