To hear him explain it, it doesn’t even sound like a very hard problem.
Then I’m not sure he understands the problem. How does the robot tell the difference between an enemy soldier and a noncombatant? Between soldiers who are fighting and those who are surrendering, dead, or severely wounded?
The rules of war themselves are fairly algorithmic, but applying them is a different story.
Well there’s a bit of bracketing at work here. Distinguishing between an enemy soldier and a noncombatant isn’t an ethical problem. He does note that determining when a soldier is surrendering is difficult, and points out the places where there really is an ethical difficulty (for example, someone who surrenders and then seems to be aggressive).