Basically, no. Being a trigger that blows up when stepped on isn’t something that can realistically be called autonomy.
::points to exhibit of plucked chicken wearing “I’m a human!” sign::
Well, yeah, it’s a far cry from killer robots, but once a mine is planted, who dies and when is pretty much entirely out of the hands of the person who planted it. And there are indeed political movements to ban the use of land mines, specifically because of this lack of control; land mines have a tendency to go on killing people long after the original conflict is over. So land mines and autonomous killer robots do share at least a few problematic aspects; could a clever lawyer make a case that a ban on “lethal autonomy” should encompass land mines as well?
A less silly argument could also be directed at already-banned biological weapons; pathogens reproduce and kill people all the time without any human intervention at all. Should we say that anthrax bacteria lack the kind of autonomy that we imagine war-fighting robots would have?
Now I’m not sure whether you were (originally) trying to start a discussion about how the term “lethal autonomy” should be used, or if you intended to imply something to the effect of “lethal autonomy isn’t a new threat, therefore we shouldn’t be concerned about it”.
Even if I was wrong in my interpretation of your message, I’m still glad I responded the way I did—this is one of those topics where it’s best if nobody finds excuses to go into denial, default to optimism bias, or otherwise fail to see the risk.
Do you view lethally autonomous robots as a potential threat to freedom and democracy?
I dunno. I’m just a compulsive nitpicker.
Lol. Well thank you for admitting this.
Yes. But I wouldn’t expect it to come up too often as a sincere question.