A lot of the concerns here amount to “smart enemies could fool dumb robots into doing awful things”, or “generals could easily instruct robots to do awful things” … but a few amount to “robots can’t tell if they are doing awful things or not, because they have no sense of ‘awful’. The outcomes that human warriors want from making war are not only military victory, but also not too much awfulness in achieving it; therefore, robots are defective warriors.”
A passage closely relevant to a number of LW-ish ideas:
An even more serious problem is that fully autonomous weapons would not possess human qualities necessary to assess an individual’s intentions, an assessment that is key to distinguishing targets. According to philosopher Marcello Guarini and computer scientist Paul Bello, “[i]n a context where we cannot assume that everyone present is a combatant, then we have to figure out who is a combatant and who is not. This frequently requires the attribution of intention.” One way to determine intention is to understand an individual’s emotional state, something that can only be done if the soldier has emotions. Guarini and Bello continue, “A system without emotion … could not predict the emotions or action of others based on its own states because it has no emotional states.” Roboticist Noel Sharkey echoes this argument: “Humans understand one another in a way that machines cannot. Cues can be very subtle, and there are an infinite number of circumstances where lethal force is inappropriate.” For example, a frightened mother may run after her two children and yell at them to stop playing with toy guns near a soldier. A human soldier could identify with the mother’s fear and the children’s game and thus recognize their intentions as harmless, while a fully autonomous weapon might see only a person running toward it and two armed individuals. The former would hold fire, and the latter might launch an attack. Technological fixes could not give fully autonomous weapons the ability to relate to and understand humans that is needed to pick up on such cues.
(source, it’s from page 138 of Robot Ethics)

Guarini and Bello continue, “A system without emotion … could not predict the emotions or action of others based on its own states because it has no emotional states.”
… so it could predict them using another system! To do arithmetic, humans use their fingers or memorize multiplication tables, but a computer doesn’t need either of those. I don’t see why it would need emotions to predict emotions either.

As a side note, we are getting better at software recognition of emotions.
Similarly, there was that time US soldiers fired on a camera crew, even laughing at them for being incompetent terrorists when they ran around. Or the time they just captured and tortured them, with only a poor explanation offered.

http://www.reuters.com/article/2010/04/06/us-iraq-usa-journalists-idUSTRE6344FW20100406
http://www.guardian.co.uk/media/2004/jan/13/usnews.iraq
“A human soldier could identify with the mother’s fear and the children’s game and thus recognize their intentions as harmless …”
It’s unlikely that a human would actually do this. They would have been trained to react quickly and not to take chances with the lives of their fellow soldiers. Reports from Cold War-era wars support this view. The (possibly unreliable) local reports from current war-torn regions also suggest that humans don’t make these distinctions, or at least don’t make them when things are happening quickly.
You wouldn’t use fully autonomous weapons systems in that situation, for the same reason that you wouldn’t use air-burst flechettes. It’s not the right tool for what you intend to do.

Funny, our enemies don’t seem to have that problem. :P

If you are referring to terrorists, they generally claim that democracy or whatever makes us all complicit, IIRC.
It’s not just terrorists. Stalin, Saddam Hussein, Genghis Khan, and many of the ancient Romans didn’t have that problem either. In the olden days, you didn’t bother trying to tell insurgents and civilians apart; you just massacred the population until there wasn’t anyone left who was willing to fight you.
Telling the difference between combatants and non-combatants only matters if you care whether or not non-combatants are killed.
That’s a serious ethical argument, by the way.
Well, it persuaded them …
It requires some confusion as to the purpose of refraining from killing civilians, I think.