[LINKS] Killer Robots and Theories of Truth
Peter at the Conscious Entities blog wrote an essay on the problems with using autonomous robots in combat, attempting to articulate some general principles that would allow them to be used ethically. He says:
In essence I think there are four broad reasons why hypothetically we might think it right to be wary of killer robots: first, because they work well; second because in other ways they don’t work well, third because they open up new scope for crime, and fourth because they might be inherently unethical.
Unpacking this a little: autonomous robots will change the character of war and make it easier for many parties to wage; they can be expected to malfunction in very serious ways in especially complex and open-ended situations; they might be re-purposed for crime; and, for various reasons, they make the ethics surrounding war even more dubious.
He even takes a stab at laying out restrictive principles that would help mitigate some of the danger of using autonomous robots:
P1. Killer Robots should not be produced or used in a way that allows them to fall into the hands of people who will use them unethically.
P2. Killer Robots should not be used for any mission which you would not be prepared to assign to a human soldier if a human soldier were capable of executing it.
P3. Killer Robots should not be used for any mission in unpredictable circumstances or where the application of background understanding may be required.
P4. Killer Robots should not be equipped with capacities that go beyond the immediate mission; they should be subject to built-in time limits and capable of being shut down remotely.
Though he is a non-expert in the field, I (also a non-expert) find his analysis capable and thorough, albeit with some possible flaws. I mention it here at LessWrong because, while we may be decades away from superintelligent AI, work in AI risk and machine ethics is going to become especially important very soon as drones, robots, and other non-human combatants become more prevalent on battlefields all over the world.
Switching gears a bit, Massimo Pigliucci of Rationally Speaking fame lays out some common theories of truth and the problems facing each one. If you've never heard of Charles Sanders Peirce and wouldn't know a verificationist account of truth if it hit you in the face, Massimo's article could be a good place to start getting some familiarity. It seems relevant because there has been some work on epistemology in these parts recently. And, as Massimo says:
...it turns out that it is not exactly straightforward to claim that science makes progress toward the truth about the natural world, because it is not clear that we have a good theory of truth to rely on; moreover, there are different conceptions of truth, some of which likely represent the best we can do to justify our intuitive sense that science does indeed make progress, but others that may constitute a better basis to judge progress (understood in a different fashion) in other fields — such as mathematics, logic, and of course, philosophy.
This matters for anyone who wants to know how things are, but is even more urgent for one who would create a truth-seeking artificial mind.
My purpose with this is not to argue, but to get people to really think about the measures he suggests, because I believe we can take a more realistic view than the one Peter presents at the Conscious Entities blog.
P1. - Restricting killer robot production would come at great cost, would pose risks, and isn't likely to happen.
Great Cost:
To ban killer robots, you would also have to ban:
3-D printers (If they can’t make parts for killer robots now, they’ll probably be able to make them later.)
Personal robots (If they can hold a gun then people could pull some Kevlar over them and make any modifications needed.)
Anything that can be controlled by a computer and also carry a deadly payload (toy and hobby items like airplanes and quadcopters could be fashioned into assassination tools with the addition of something like a spray bottle full of chemicals or a dart shooter).
Computer-controlled vehicles. Do they seem unwieldy or expensive? Consider how many pounds of explosives one can conceal, how far it can travel, how much damage it could do for the price, and the possibility of choosing a cheap used vehicle to offset the cost (the used cars of the future may well be computer-controlled).
The number of technologies that could potentially be used to make lethally autonomous "killer robot" weapons is limited only by our imaginations. Pretty much anything with the ability to see, process visual data, identify targets, and physically move could become deadly with modification. As technology progresses, it would become harder and harder to make anything new without it getting banned due to its potential for lethal autonomy. The number of future technologies we'd have to ban could become ridiculous.
Bans pose risks:
As is said about gun control: “If guns are illegal, only the criminals will have them”—Eliezer agrees with the spirit of this in the context of killer robots.
Consider these possibilities:
If these technologies are banned for ordinary people but allowed for approved companies, people will be able to steal from those companies or bribe them, and organized crime groups like mafias and gangs will be able to use tactics like blackmail and intimidation to get 3-D printers and other restricted technologies. Criminals will therefore still have access to those things.
Anybody who wants to become a bloodthirsty dictator would only have to start the right kind of company. Then they’d have access to all the potential weapons they want, and assuming they could amass enough robots to take on an army (in some country, if not in their own)… they could fulfill that dream.
If we did ban them for the average person but let companies have them, we'd be upgrading those companies to an empowered class of potential warlords. Imagine if the companies of today, the same ones pulling various forms of B.S. (like the banks that helped cause the recession), also had enough firepower to kill you.
Isn’t likely to happen:
I don't think we're likely to ban all 3-D printers, personal robots, computer-controlled cars, computer-controlled toys and electronics, and everything else that could possibly be used as a lethally autonomous weapon. Such widespread bans would have a major impact on economic growth. Consider how much we feel the need to compete with other countries, and that other countries may not have bans. Especially consider the relationship between economic growth and military power: we can't defend ourselves against other countries without funding our military, we can't fund our military without taxes, and without sufficient economic growth we won't be able to collect sufficient taxes. If other countries do not also have such bans, and any of those ban-less countries might one day decide to make war against us, we'd be sitting ducks if we let such bans slow our economic growth.
Even if we did ban possession of these items for the average person (which would itself seriously impact economic growth, since the average person's purchases are a large part of it, and those purchases can be taxed), we'd probably not ban them for manufacturers and other professionals, or else technological progress would be seriously crippled. And if we do not ban them for companies, the risk is not eliminated (see "Bans pose risks" above).
If people realize how these technologies could shift the balance of power, and Daniel Suarez is working on getting them to realize that, they may begin to demand access to 3-D printers, personal robots, and so on as an extension of their right to bear arms. They may realistically need defenses against the gangs, wayward companies, and would-be dictators of the future, and if they're concerned about it, they'll be looking to get hold of those weapons in whatever way possible. If people believe they have a right to, or a need for, 3-D printers and robot bodyguards, then a ban on these technologies would be about as effective as Prohibition.
P2. - Requiring missions that could be assigned to hypothetical human soldiers will not protect democracy.
If sufficient killer robots exist to match or overpower human soldiers, then at that point the government can do what it likes, because nobody will be able to fight back. This means the checks and balances on the government's power are gone. No checks and balances means the government does not even have to follow its own rules; nobody is powerful enough to enforce them. (Imagine the Supreme Court screaming at the executive branch in front of the executive branch's killer robot army. Not practical.) If that happens, you'll be at the mercy of those in power and will just have to cross your fingers that every single president you elect until the end of time (don't forget the one in office at the time) chooses not to become a dictator for life. Game over. We fail.
P3. - Avoiding unpredictable circumstances is not possible.
A. If unpredictable circumstances are a killer robot army's weakness, the enemy of that army will most certainly realize this can be exploited. If unpredictable circumstances are useful at all, the enemy will likely be forced to exploit them in order to survive.
B. Since when is regular life predictable, let alone a situation as chaotic as war? Ensuring predictable circumstances in the event of war is simply not possible.
P4. - Restricting killer robot capabilities may prove anti-strategic and therefore be rejected.
Since war is an unpredictable and chaotic situation in which your enemy, a conscious, thinking entity, will probably get creative and throw at you exactly what you did not plan for, versatility is a must. It may be that failing to arm the robots in every way possible makes them totally ineffective, meaning that if people choose to fight with them at all, they will view arming all killer robots to the teeth as an absolute necessity, and will justify it with "Well, you want to survive the war, don't you?"
P4. - Adding a remote shutdown isn't practical.
A. Imagine how a remote shutdown would actually play out in reality. Your robots are fighting. Oops, there's a bug. There are enemies everywhere. You shut the robots down. The enemy goes "WOOT, FREE KILLER ROBOTS!", takes them to their base, hacks them, and reverse-engineers them. Not only did you just lose your killer robot army, your enemy just gained one, and will be able to make better weapons from now on. When is remote shutdown ever actually going to be used on a killer robot army during combat? I think the answer is: if the person controlling the robots has half a brain, never. They will never use this feature outside of a test environment, and if their computer security expert has half a brain, the remote shutdown feature will be removed from the robots once they leave the test environment (see B).
B. Successful offense is easier than successful defense, and this also applies to the world of computer hacking. This is why so many police stations and government offices do not connect computers with sensitive data to the internet, or don't have internet access at all: they can't be certain of preventing their computers from being hacked. If you put remote shutdown into killer robot armies, that's just a super sweet target for your enemy's hackers. In order to be hacker-proof, the robots would have to be made truly autonomous, meaning no remote control whatsoever, and no special button or voice command or sequence that shuts them down, period. If their computer security expert has half a brain, killer robots will not be made with the remote shutdown "feature". Well, okay, I suppose the government could put in a remote shutdown feature if it only wants to send the robots to shoot at people in developing countries with no hackers, but the feature would be a serious flaw against a technologically advanced enemy. Actually, scratch that. There are plenty of criminal hacking organizations and technology companies that might be interested in hacking the remote shutdown feature in order to usurp their very own robot army. Creating an army of killer robots with a shutdown feature, in a world where multiple parties may be interested in usurping that army, could be an extremely poor decision, even if your original intention was to expose the robot army only to third-world combatants.
Thank you very much for taking time to talk about this issue. I’m very glad to see that people are taking it seriously and are talking about it. I hope you do not take offense at my comment, as my purpose with this is not to make you feel argued with but to encourage people to think realistically about these dangers.
Thanks for your comments; I'm inclined to basically agree with what you've said. Bans are almost never the answer and probably wouldn't work anyway. If that's true, it means machine ethics is even more important, because the only solution is to make these autonomous technologies as safe as we possibly can.
I am glad to know that my comments have made a difference and that they were welcome. I think LessWrong could benefit a lot from The Power of Reinforcement, so I am glad to see someone doing this.
Actually, I don’t think that approach will work in this scenario. When it comes to killer robots, the militaries will make them as dangerous as possible (but controllable, of course). However, the biggest problem isn’t that they’ll shoot innocent people—that’s a problem, but there’s a worse one. The worst one is that we may soon live in an age where anyone can decide to make themselves an army. Making killer robots safe is an oxymoron. There needs to be a solution that’s really out of the box.
Each of the Ps is vulnerable to the same objection: what is special about robots?
Why does this not apply to rifles?
Again, why isn’t this isomorphic to “Human equipped with weapon X” versus “unarmed human”?
Once more: Why are “Killer Robots” different from “machine guns” in this sentence?
s/Killer Robot/military unit/
Killer robots pose a threat to democracy that rifles do not. Please see "Near-Term Risk: Killer Robots a Threat to Freedom and Democracy" and the TED Talk linked therein, "Daniel Suarez: The kill decision shouldn't belong to a robot". You might also like to check out his book "Daemon" and its sequel.
Machine guns are wielded by humans, and humans can make better ethical decisions than robots currently can.
This is not obvious. Many’s the innocent who has been killed by some tense soldier with his finger on the trigger of a loaded weapon, who didn’t make an ethical decision at all. He just reacted to movement in the corner of his eye. If there was an ethical decision made, it was not at the point of killing, but at the point of deploying the soldier, with that armament and training, to that area—and this decision will not be made by the robots themselves, for some time to come.
If you don’t like machine guns, how about minefields? The difference between a killer robot and a minefield seems pretty minuscule to me; one moves around, the other doesn’t.
Your mistake is in identifying pulling the trigger as the ethically important moment.
Here is a small edit:
Every argument for and against industrial robots applies to military robots, except that industrial robots influence more people on an ongoing basis (redundancy through automation) while (aside from a Terminator future) military robots influence fewer people for shorter periods.
Military robots and industrial robots are both capable of going horribly wrong. However, military robots can also go horribly right. They are designed to cause large amounts of damage, which means that it’s more likely for them to cause large amounts of damage in an inconvenient way. Industrial robots can, and occasionally do, cause large amounts of damage, but it’s much less likely.
Also, the argument that military robots can commit atrocities that human soldiers would not has no analogue with industrial robots. Industry is a much less ethically gray area. Companies do things that are somewhat unethical, but not to the point that they can't find people willing to do them.
I don't entirely buy these arguments. In fact, I think military robots would make atrocities less likely. Soldiers are quite capable of committing them, and with robots, at least, everything they do is recorded. My point is simply that there are significant differences between military and industrial robots.