it would allow a small number of people to concentrate a very large amount of power
Possibly a smaller number than with soldiers, but not that small—you still need to deal with logistics, maintenance, programming...
it’s unthinkable today that American soldiers might suddenly decide to follow a tyrannical leader tomorrow whose goal is to have total power and murder all opponents. It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risk of mutiny.
It might be a bit more likely, but it still seems like a very unlikely scenario (0.3% instead of 0.1%?), and still less likely than other disaster scenarios (breakdown of infrastructure/economy leading to food shortages, panic, and riots; a big war starting in one of the less stable parts of the world (ex-Yugoslavia, China/Taiwan, the Middle East) and spilling over; an ideological movement motivating a big part of the population into violent action; UFAI; etc.)
EDIT: to expand a bit on this, I don't think replacing soldiers with drones increases risk much, all else being equal, because the kinds of things soldiers would refuse to do are also the kinds of things the (current) command structure is unlikely to want to do anyway.
Ok, let's get some numbers.

I highly doubt that either one of us would be able to accurately estimate how many employees it would require to make a robot army large enough to take over a population, but looking at some numbers will at least give us some perspective. I'll use the USA as an example.
The USA has 120,022,084 people fit for military service according to Wikipedia. (The current military is much smaller, but if a takeover were in progress, that's the maximum number of hypothetical American soldiers we could have defending the country.)
We’ll say that making a robot army takes as many programmers as Microsoft and as many engineers and factory workers as Boeing:
Microsoft employees: 97,811
Boeing employees: 171,700
That’s 0.22% of the number of soldiers.
I’m not sure how many maintenance people and logistics people it would require, but even if we double that .22%, we still have only .44%.
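To make that back-of-the-envelope arithmetic easy to check, here it is spelled out in a few lines of Python, using the figures quoted above (the doubling for maintenance and logistics is the same rough guess as in the text):

```python
# Rough workforce estimate, using the figures quoted above.
fit_for_service = 120_022_084  # USA, per Wikipedia
microsoft_employees = 97_811   # stand-in for the programmers needed
boeing_employees = 171_700     # stand-in for the engineers and factory workers

builders = microsoft_employees + boeing_employees
print(f"builders as a share of potential soldiers: {builders / fit_for_service:.2%}")
# -> 0.22%
print(f"doubled for maintenance/logistics: {2 * builders / fit_for_service:.2%}")
# -> 0.45% (the .44% above comes from rounding the 0.22% first)
```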
Is it possible that 1 in 200 people or so are crazy enough to build and maintain a robot army for a tyrant?
Number of sociopaths: 1 in 20.
And you wouldn’t even have to be a sociopath to follow a new Hitler.
I like that you brought up the point that it would take a significant number of employees to make a robot army happen, but I'm not convinced that this makes us safe. In particular, they could build military robots that are very close to lethal autonomy but not quite, tell people they're making something else, write the software for basic functions like walking and seeing, and then have a very small number of people modify the hardware and/or software to turn them into autonomous killers.
Of course, once the killer robots are made, then they can just use them to coerce the maintenance and logistics people.
How many employees would have to be aware of their true ambitions? That might be the key question.
The USA has 120,022,084 people fit for military service according to Wikipedia.
(...)
That’s 0.22% of the number of soldiers.
Excuse me? You are taking the number of military-age males and using it as the number of soldiers! The actual US armed forces are a few million. 5% would be a much better estimate. This aside, you are ignoring that “lethal autonomy” is nowhere near the same thing as “operational autonomy”. A Predator drone requires more people to run it—fuelling, arming, polishing the paint—than a fighter aircraft does.
Of course, once the killer robots are made, then they can just use them to coerce the maintenance and logistics people.
How? “Do as I say, or else I’ll order you to fire up the drones on your base and have them shoot you!” And while you might credibly threaten to instead order the people on the next base over to fire up their drones, well, now you’ve started a civil war in your own armed forces. Why will that work better with drones than with rifles?
Again, you are confusing lethal with operational autonomy. A lethally-autonomous robot is just a weapon whose operator is well out of range at the moment of killing. It still has to be pointed in the general direction of the enemy, loaded, fuelled, and launched; and you still have to convince the people doing the work that it needs to be done.
A Predator drone requires more people to run it—fuelling, arming, polishing the paint—than a fighter aircraft does.
It does? I would’ve guessed the exact opposite and that the difference would be by a large margin: drones are smaller, eliminate all the equipment necessary to support a human, don’t have to be man-rated, and are expected to have drastically less performance in terms of going supersonic or executing high-g maneuvers.
Yes. An F-16 requires 100 support personnel; a Predator 168; a Reaper, 180. Source.
It seems like some but not all of the difference is that manned planes have only a single pilot, whereas UAVs not only have multiple pilots, but also perform much more analysis on recorded data and split the job of piloting up into multiple subtasks for different people, since they are not limited by the need to have only 1 or 2 people controlling the plane.
If I had to guess, some of the remaining difference is probably due to the need to maintain the equipment connecting the pilots to the UAV, in addition to the UAV itself; the most high-profile UAV failure thus far was due to a failure in the connection between the pilots and the UAV.
I'm not sure that's an apples-to-apples comparison. From the citation for the Predator figure:
About 168 people are needed to keep a single Predator aloft for 24 hours, according to the Air Force. The larger Global Hawk surveillance drone requires 300 people. In contrast, an F-16 fighter aircraft needs fewer than 100 people per mission.
I’m not sure how long the average mission for an F-16 is, but if it’s less than ~12 hours, then the Predator would seem to have a manpower advantage; and the CRS paper cited also specifically says:
In addition to having lower operating costs per flight hour, specialized unmanned aircraft systems can reduce flight hours for fighter aircraft
The F-16 seems to have a maximum endurance of 3-4 hours, so I’m pretty sure its average mission is less than 12 hours.
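To make the per-flight-hour version of the comparison explicit, here is a quick sketch using the figures quoted above; the 3-hour F-16 mission length is my assumption based on the endurance numbers:

```python
# Personnel per flight-hour, using the quoted figures.
predator_crew, predator_hours = 168, 24  # 168 people keep one Predator aloft for 24 h
f16_crew, f16_mission_hours = 100, 3     # <100 people per mission; ~3 h mission assumed

print(f"Predator: {predator_crew / predator_hours:.1f} people per flight-hour")  # ~7.0
print(f"F-16:     {f16_crew / f16_mission_hours:.1f} people per flight-hour")    # ~33.3
# On a per-flight-hour basis, the Predator comes out far ahead.
```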
My understanding was that Rolf’s argument depended on the ratio personnel:plane, not on the ratio personnel:flight hour; the latter is more relevant for reconnaissance, ground attack against hidden targets, or potentially for strikes at range, whereas the former is more relevant for air superiority or short range strikes.
The actual US armed forces are a few million. 5% would be a much better estimate. This aside, you are ignoring that “lethal autonomy” is nowhere near the same thing as “operational autonomy”. A Predator drone requires more people to run it—fuelling, arming, polishing the paint—than a fighter aircraft does.
I don't think it saves Rolf's point:

If you are getting >6x more flight-hours out of a drone for an increased manpower of <2x, then even if you keep the manpower constant and shrink the size of the fleet to compensate for that <2x manpower penalty, you've still got a new fleet which is somewhere around 3-4x more lethal (>6x the hours divided by <2x the people). Or you could take the tradeoff even further and have an equally lethal fleet with a small fraction of the total manpower, because each drone goes so much further than its equivalent. So a drone fleet of similar lethality does have more operational autonomy!
That’s why per flight hour costs matter—because ultimately, the entire point of having these airplanes is to fly them.
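Here's a minimal sketch of that fleet-level tradeoff, under round numbers consistent with the figures upthread (the >6x and <2x are the assumptions doing all the work):

```python
# Fleet-level effect of switching to drones, holding total manpower fixed.
flight_hours_per_drone = 6.0  # relative to a manned plane (>6x, assumed)
manpower_per_drone = 1.7      # relative to a manned plane (168 vs ~100 people)

# Fixed manpower buys 1/1.7 as many airframes, each flying ~6x the hours:
print(f"total flight-hours multiplier: ~{flight_hours_per_drone / manpower_per_drone:.1f}x")
# -> ~3.5x the flight-hours for the same total manpower
```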
Would you happen to be able to provide these figures:
The ratio of human resources-to-firepower on the current generation of weapons.
The ratio of human resources-to-firepower on the weapons used during eras where oppression was common.
I’d like to compare them.
Hmm, “firepower” is vague. I think the relevant number here would be something along the lines of how many people can be killed or subdued in a conflict situation.
I have no idea; as I said, my expectations are just guesses based on broad principles (slow planes are cheaper than ultra-fast planes; clunky planes are cheaper than ultra-maneuverable ones; machines whose failure does not immediately kill humans are cheaper to make than machines whose failure does entail human death; the cheapest, lightest, and easiest-to-maintain machine parts are the ones that aren't there). You should ask Rolf, since apparently he's knowledgeable on the topic.

Thanks. I will ask Rolf.
Would you happen to be able to provide these figures:
The ratio of human resources-to-firepower on the current generation of weapons.
The ratio of human resources-to-firepower on the weapons used during eras where oppression was common.
I’d like to compare them.
Hmm, “firepower” is vague. I think the relevant number here would be something along the lines of how many people can be killed or subdued in a conflict situation.
Excuse me? You are taking the number of military-age males and using it as the number of soldiers!
Yes!
The actual US armed forces are a few million. 5% would be a much better estimate.
If the question here is “How many people are currently in the military” my figure is wrong. However, that’s not the question. The question is “In the event that a robot army tries to take over the American population, how many American soldiers might there be to defend America?” You’re estimating in a different context than the one in my comment.
This aside, you are ignoring that “lethal autonomy” is nowhere near the same thing as “operational autonomy”
Actually, if you’re defining “operational autonomy” as “how many people it takes to run weapons”, I did address that when I said “I’m not sure how many maintenance people and logistics people it would require, but even if we double that .22%, we still have only .44%.” If you have better estimates, would you share them?
How? “Do as I say, or else I’ll order you to fire up the drones on your base and have them shoot you!”
Method A. They could wait until the country is in turmoil and prey on people’s irrationality like Hitler did.
Method B. They could get those people to operate the drones under the guise of fighting for a good cause. Then they could threaten to use the army to kill anyone who opposes them. This doesn’t have to be sudden—it could happen quite gradually, as a series of small and oppressive steps and rules wrapped in doublespeak that eventually lead up to complete tyranny. If people don’t realize that most other people disagree with the tyrant, they will feel threatened and probably comply in order to survive.
Method C. Check out the Milgram experiment. Those people didn’t even need to be coerced to apply lethal force. It’s a lot easier than you think.
Method D. If they can get just a small group to operate a small number of drones, they can coerce a larger group of people to operate more drones. With the larger group of people operating drones, they can coerce even more people, and so on.
Why will that work better with drones than with rifles?
This all depends on the ratio of the number of people it takes to operate the weapons to the number of people the weapons can subdue. Your perception appears to be that Predator drones require more people to run them than a fighter aircraft does. My perception is that it doesn't matter how many people it takes to operate a Predator drone today, because war technology is likely to be optimized further than it is now; if it is possible to decrease the number of people required to build/maintain/run/etc. the killer robots significantly below the number it would take to get the same amount of firepower otherwise, then of course they can take over a population more easily.
A high firepower-to-human-resources ratio means takeovers would work better.
A lethally-autonomous robot is just a weapon whose operator is well out of range at the moment of killing.
That's not what Suarez says. Even if he's wrong, do you deny that it's likely that technology will advance to the point where people can make robots capable of killing without a human making the decision? That's what this conversation is about. Let's not get all mixed up, as Eliezer warns us about in 37 Ways That Words Can Be Wrong. If we're talking about robots that can kill without a human's decision, those are a threat, and could potentially reduce the human resources-to-firepower ratio enough to threaten democracy. If you want to disagree with me about what words I should use to speak about this, that's great. In that case, though, I'd like to know where your credible sources are, so that I can read authoritative definitions, please.
and you still have to convince the people doing the work that it needs to be done.
What prevents these methods from being used with rifles? What is special about robots in this context?
Even if he's wrong, do you deny that it's likely that technology will advance to the point where people can make robots capable of killing without a human making the decision?
No, we already have those. The decision to kill has nothing to do with it. The decisions of where to put the robot, and its ammunition, and the fuel, and everything else it needs, so that it's in a position to make the decision to kill, are what we cannot yet make programmatically. You're confusing tactics and strategy. You cannot run an army without strategic decisionmakers. Robots are not in a position to do that for, I would guess, at least twenty years.
Hitler.
Milgram experiment.
Number of sociopaths: 1 in 20.
Is rationality taught in school?: No.
Ok, so this being so, how come we don’t already have oppressive societies being run with plain old rifles?
This is implausible. There is no conceivable motive for people to support the hypothetical robot army; there is not a chance in hell that 1.5 million people would voluntarily build a robot army for a tyrant, who doesn't have the many trillions of dollars needed to pay them (since nobody has that much money) [1], who is unable to keep secret the millions of people building illegal weaponry for him, and who has almost no chance of succeeding even with the robot army, since the US military outspends everybody.
[1]: 1⁄200 × US population × average Microsoft salary ≈ 150 billion USD per year. This would require many, many years of work (given how long the military has worked on Predators, probably decades), so it would require trillions of dollars.
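Spelling out the footnote's arithmetic as a sketch (the ~$100k salary and ~310M population are my assumed round figures, not numbers from the thread):

```python
# The footnote's payroll estimate, spelled out with assumed round figures.
us_population = 310_000_000      # assumed, roughly the 2013 figure
workers = us_population / 200    # the 1-in-200 share discussed above
avg_salary = 100_000             # assumed average Microsoft-like salary, USD/year

print(f"annual payroll: ~{workers * avg_salary / 1e9:.0f} billion USD")  # ~155
# Over the decades of development assumed above, that compounds into trillions.
```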
Also, I don’t think you understand sociopathy. The 1⁄20 figure you cited should be 1⁄25, which refers to the DSM’s “antisocial personality disorder;” sociopathy is a deficit in moral reasoning, which is very different from being a person who’s just waiting to become a minion to some dictator.
This is implausible. There is no conceivable motive for people to support the hypothetical robot army; there is not a chance in hell that 1.5 million people would voluntarily build a robot army for a tyrant, who doesn’t have the many trillions of dollars needed to pay them (since nobody has that much money)
For a start, I don’t believe you. People have done comparable things for tyrants in the past (complete albeit probably inefficient dedication of the resources of the given tribe to the objectives of the tyrant—horseback archers and small moustaches spring to mind). But that isn’t the primary problem here.
The primary problem would be a country creating the army in the usual way that a country creates an army, except that, once built, this army would be much easier for an individual (or a few) to control. That makes it easier for such people to become tyrants and, once they are, to retain their power. This kind of thing (a general seizing control by use of his control of the military) is not unusual for humans. Killer robots make it somewhat easier. Controlling many humans is complicated and unreliable.
there is not a chance in hell that 1.5 million people would voluntarily build a robot army for a tyrant
There are so many ways that a tyrant could end up with a robot army. Let's not pretend that that's the only way. Here are a few:
A country is in turmoil and a leader comes along who makes people feel hope. The people are open to “lesser evil” propositions and risk-taking because they are desperate. They make irrational decisions and empower the wrong person. Hitler is a real-life example of this happening.
A leader who is thought of as “good” builds a killer robot army. Then the realization that they have total power over their people corrupts them, and they begin behaving like a tyrant, effectively turning into an oppressive dictator.
Hypothetical scenario: The setting is a country with presidential elections (I choose America for this one). Hypothetically, in this scenario we'll say the technology to do this was completely ready to be exploited. So the government begins to build a killer robot army. Hypothetically, a good president happens to be in office, so people think it's okay. We'll say that president gets a second term. Eight years pass, and a significant killer robot army is created. It's powerful enough to kill every American. Now it's time to change the president. Maybe the American people choose somebody with their best interests in mind. Maybe they choose a wolf in sheep's clothing, or a moron who doesn't understand the dangers. It's not as if we haven't elected morons before, and it isn't as if entire countries full of people have never empowered anyone dangerous.

I think it's reasonable to say that there's at least a 5% chance that each election will yield either a fatally moronic person, an otherwise good person who is susceptible to being seriously corrupted if given too much power, someone with a tyrant's values/personality, or a sociopath. If you're thinking to yourself “how many times in American history have we seen a president go corrupt with power?”, consider that there have been checks and balances in place to prevent presidents from having enough power to be corrupted by. In my opinion, it's likely that most of them would be corrupted by the kind of absolute power that a killer robot army would give them, and 5% is actually quite a low estimate compared with my model of how reality works. But for hypothetical purposes, we'll pretend it's only as high as 5%. We roll the dice on that 5% chance every four years, because we hold elections again. Compounded over the thirteen or so elections in the rest of my life (the chances can't simply be added up), that comes to 1 − 0.95^13 ≈ 49%: roughly even odds that the wrong person will end up having total control over the country I live in.
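A minimal sketch of that compounding, assuming an independent 5% chance per election; note the chances combine as 1 − 0.95^n rather than by simple addition:

```python
# Chance that at least one election installs the wrong person,
# assuming an independent 5% chance per four-year election.
p_bad = 0.05
for n in (5, 10, 13, 20):
    p_any = 1 - (1 - p_bad) ** n
    print(f"{n:>2} elections (~{4 * n} years): {p_any:.0%}")
# ->  5 elections: 23%, 10: 40%, 13: 49%, 20: 64%
```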
Do you see now how a tyrant or other undesirable leader could very conceivably end up heading a killer robot army?
[1]: 1⁄200 US population average microsoft salary = 150 billion USD. This would require many, many years of work- given how long the military has worked on predators, probably decades. So it would require trillions of dollars.
Thank you. I am very glad for these figures. How long do you think it would take for the US government to build 100 million killer robots?
Also, I don’t think you understand sociopathy. The 1⁄20 figure you cited should be 1⁄25, which refers to the DSM’s “antisocial personality disorder;”
Not sure why we have different numbers, but: the statistics for America are different from the statistics for other countries (so, depending on whether your source is aiming for a global or a local figure, this can vary), the statistic probably changes over time, the DSM changes over time, there are multiple sources on this that probably do not agree, and the 1⁄20 figure is based on research I did ten years ago, so something in there probably explains it. A 1% difference in prevalence is irrelevant here, since (in my extremely amateurish, shoot-from-the-hip estimate “just to get some perspective”) if 1 in 200 people are willing to work on the robot army, that's enough—and 1⁄25 is obviously significantly larger.
sociopathy is a deficit in moral reasoning, which is very different from being a person who’s just waiting to become a minion to some dictator.
Ah, but if you want to be a tyrant you don't need minions who have been dreaming of becoming minions. Consider this—most people who are employed at a job didn't dream of becoming what they are. There are a lot of people working as cashiers, grocery baggers, doing boring work in a factory, working in telemarketing, etc., who dislike and even despise their jobs. Why do they do them? Money, obviously.
Why don’t those people turn into drug dealers? They’d make more money that way.
Ethics!
Those people have a sense of right and wrong, or at least are successfully coerced by laws.
People with antisocial personality disorder, the way the DSM defines it, have neither of these properties.
You said yourself above that most people wouldn’t want to build a robot army for a tyrant. I agree. But a sociopath doesn’t give a rat’s behind how they get their money. That is why they are more likely to work for a tyrant—they don’t have a conscience and they don’t care about the law. If they can make more money assembling killer robots than flipping burgers, there’s nothing to stop them from taking the killer robot job.
Taking this into consideration, do you think sociopaths could end up building killer robots?
It seems to me like you’re outlining four different scenarios:
1) The United States, or another major power, converts from manned to unmanned weapons of war. A military coup is impossible today because soldiers won’t be willing to launch one; were soldiers to be replaced by robots, they could be ordered to.
2) Another state develops unmanned weapons systems which enable it to defeat the United States.
3) A private individual develops unmanned weapons systems which enable them to defeat the United States.
4) Another state which is already a dictatorship develops unmanned weapons systems which allow the dictator to remain in power.
My interpretation of your original comment was that you were arguing for #3; that is the only context in which hiring sociopaths would be relevant, as normal weapons development clearly doesn't require hiring a swarm of sociopathic engineers. The claim that dictatorships exclusively or primarily rely on sociopaths is factually wrong: according to data from Reserve Police Battalion 101, 97% of an arbitrary sample of Germans under Hitler were willing to take guns and mow down civilians. Certainly, close to 100% of an arbitrary sample of people would be willing to work on developing robots for either the US or any other state; we can easily see this today.
If you were arguing for #2, then my response would be that the presence of unmanned weapons systems wouldn't make a difference one way or the other: if we're positing another state able to outdevelop, then defeat, the US, it would presumably be able to do so anyways. The only difference would be if it had an enormous GDP but a low population; but such a state would be unlikely to be an aggressive military dictatorship, and, anyways, clearly doesn't exist.
For #4, current dictatorships are too far behind in terms of technological development for unmanned weapons systems to have a significant impact: what we see today is that the most complex weapons systems are produced in a few, mostly stable and democratic, nations, and there's good reason to think that democracy is caused by economic and technological development, such that the states most able to produce unmanned weapons are also the most likely to already be democratic. (More precisely, the most likely to be democratic by the time they build enough unmanned weapons, which seems to be decades off at a minimum.) Worst case, there are 1-3 states (Iran, Russia, China[*]) likely to achieve the capacity to build their own unmanned weapons systems without being democracies; and even then, it's questionable whether unmanned weapons systems would be able to do all that much. (It depends on the exact implementation, of course; but in general, no robots can assure the safety of a dictator, and they cannot stop the only way recent Great Power dictatorships have loosened up, by choice of their leaders.)
[*] This is a list of every country that is a dictatorship or quasi-dictatorship that's built its own fighters, bombers, or tanks, minus Pakistan. I'm very confident that China's government already has enough stability/legitimacy concerns and movement towards democracy that they would implement safeguards. Iran and Russia I give ~50% chances each of doing so.
If you were arguing for #1, then a) the US has well-established procedures for oversight of dangerous weapons (i.e. WMD) which have never failed, b) it would be much easier for the President and a small cabal to gain control using nukes than robots, c) the President doesn't actually have direct control of the military, and the ability to create military plans and respond to circumstances for large groups of military robots almost certainly requires AGI, d) as noted separately, there will never be a point, pre-AGI, where robots are actually independent of people, and e) conspiracies as a general rule are very rarely workable, and this hypothetical conspiracy seems even less workable than most, because it requires many people working together over at least several decades, each person ascending to an elite position.
How long do you think it would take for the US government to build 100 million killer robots?
I don't believe that it ever will. If the US spends 6,000 USD in maintenance per robot, that would eat up the entire US military budget. $6,000 is almost certainly a severe underestimate of the cost of operating them, by roughly 3 orders of magnitude, and anyways, that neglects a huge number of relevant factors: the cost of purchasing an MQ-1, amortized over a 30-year operating period, is roughly the same per year as the operating cost; money also needs to be spent doing R&D; non-robot fixed costs total at least 6% of the US military budget; much of US military spending is on things like carriers or aerial refueling or transports whose robot equivalents wouldn't be 'killer robots'; etc. (The military budget may go up over time, but the cost per plane has risen faster than the military budget since WWII, so if anything this also argues against large numbers of robots.)
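The budget arithmetic behind that claim, as a sketch; the ~$600B annual budget is my assumed round figure for US military spending at the time:

```python
# Why 100 million robots don't fit in the budget, per the estimate above.
n_robots = 100_000_000
maintenance_per_robot = 6_000         # USD/year, called a severe underestimate above
military_budget = 600_000_000_000     # USD/year, assumed round figure

total = n_robots * maintenance_per_robot
print(f"maintenance alone: ${total / 1e9:.0f}B, "
      f"{total / military_budget:.0%} of the budget")
# -> $600B, 100% of the budget, before purchase costs, R&D, and fixed costs
```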
An alternative date: I would expect the USAF to be majority unmanned by 2040 ± 5 years (50% bounds, most uncertainty above); this is roughly one lifecycle of planes forward from today. (Technically it's a fair bit less; but I'd expect development to speed up somewhat.)
I would expect the US Army to deploy unmanned ground combat units in serious numbers by 2035 ± 5 years.
I would expect the USAF to remove humans from the decision-making loop on an individual plane's flights in 2045 ± 5 years; on a squadron, including maintenance, command, etc., 2065 ± 5 years; above that, never.
How long do you think it would take for the US government to build 100 million killer robots?
I don’t believe that it ever will.
Technologies become less expensive over time, and as we progress, our wealth grows. If we don't have the money to produce them at the current cost, that doesn't mean we'll never be able to afford to do it.
If the US spends 6,000 USD in maintenance per robot, that would eat up the entire US military budget. $6,000 is almost certainly a severe underestimate of the cost of operating them, by roughly 3 orders of magnitude
You didn’t specify a time period—should I assume that’s yearly? Also, do they have to pay $6,000 in maintenance costs while the units are in storage?
and anyways, that neglects a huge number of relevant factors: the cost of purchasing an MQ-1, amortized over a 30-year operating period, is roughly the same per year as the operating cost; money also needs to be spent doing R&D; non-robot fixed costs total at least 6% of the US military budget; much of US military spending is on things like carriers or aerial refueling or transports whose robot equivalents wouldn’t be ‘killer robots’; etc. (The military budget may go up over time, but the cost per plane has risen faster than the military budget since WWII, so if anything this also argues against large numbers of robots.)
Okay, so an MQ-1 is really, really expensive. Thank you.
An alternative date: I would expect the USAF to be majority unmanned by 2040 ± 5 years (50% bounds, most uncertainty above); this is roughly one lifecycle of planes forward from today. (Technically it's a fair bit less; but I'd expect development to speed up somewhat.) I would expect the US Army to deploy unmanned ground combat units in serious numbers by 2035 ± 5 years.
What is “serious numbers”?
I would expect the USAF to remove humans from the decision-making loop on an individual plane's flights in 2045 ± 5 years; on a squadron, including maintenance, command, etc., 2065 ± 5 years; above that, never.
What do you mean by “above that, never”?
Sorry I didn’t get to your other points today. I don’t have enough time.
P.S. How did you get these estimates for when unmanned weapons will come out?