Don’t there exist weapons that already exhibit the property of “lethal autonomy”—namely, land mines?
That’s not even comparable. Consider this:
Land mines don’t distinguish between your allies and your enemies.
Land mines don’t move and people can avoid them.
Unless your enemy is extremely small and/or really terrible at strategy, you can’t win a war with land mines. On the other hand, these killer robots could identify targets, could hunt people down by tracking various bits of data (transactions, cell phone signals, etc.), could follow people around using surveillance systems, and could distinguish between enemies and allies. With killer robots, you could conceivably win a war.
Basically, no. Being a trigger that blows up when stepped on isn’t something that can realistically be called autonomy.
::points to exhibit of plucked chicken wearing “I’m a human!” sign::
Well, yeah, it’s a far cry from killer robots, but once a mine is planted, who dies and when is pretty much entirely out of the hands of the person who planted it. And there are indeed political movements to ban the use of land mines, specifically because of this lack of control; land mines have a tendency to go on killing people long after the original conflict is over. So land mines and autonomous killer robots do share at least a few problematic aspects; could a clever lawyer make a case that a ban on “lethal autonomy” should encompass land mines as well?
A less silly argument could also be directed at already-banned biological weapons; pathogens reproduce and kill people all the time without any human intervention at all. Should we say that anthrax bacteria lack the kind of autonomy that we imagine war-fighting robots would have?
Now I’m not sure whether you were (originally) trying to start a discussion about how the term “lethal autonomy” should be used, or if you intended to imply something to the effect of “lethal autonomy isn’t a new threat, therefore we shouldn’t be concerned about it”.
Even if I was wrong in my interpretation of your message, I’m still glad I responded the way I did—this is one of those topics where it’s best if nobody finds excuses to go into denial, default to optimism bias, or otherwise fail to see the risk.
Do you view lethally autonomous robots as a potential threat to freedom and democracy?
I dunno. I’m just a compulsive nitpicker.
Lol. Well thank you for admitting this.
Yes. But I wouldn’t expect it to come up too often as a sincere question.
Or the pit-trap: Lethal autonomy that goes back to the Stone Age :-)
And deliberately set wildfires.