Excuse me? You are taking the number of military-age males and using it as the number of soldiers!
Yes!
The actual US armed forces are a few million. About 5% of your figure would be a much better estimate.
If the question here is “How many people are currently in the military” my figure is wrong. However, that’s not the question. The question is “In the event that a robot army tries to take over the American population, how many American soldiers might there be to defend America?” You’re estimating in a different context than the one in my comment.
This aside, you are ignoring that “lethal autonomy” is nowhere near the same thing as “operational autonomy”.
Actually, if you’re defining “operational autonomy” as “how many people it takes to run weapons”, I did address that when I said “I’m not sure how many maintenance people and logistics people it would require, but even if we double that 0.22%, we still have only 0.44%.” If you have better estimates, would you share them?
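To make the arithmetic explicit (the 0.22% operator figure is carried over from my earlier comment, and the doubling for support staff is, as I said, a rough guess rather than an estimate):

\[
2 \times 0.22\% = 0.44\% \ \ll\ 5\%
\]

Even with maintenance and logistics counted, the fraction of the population needed stays more than an order of magnitude below the 5% figure above.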
How? “Do as I say, or else I’ll order you to fire up the drones on your base and have them shoot you!”
Method A. They could wait until the country is in turmoil and prey on people’s irrationality like Hitler did.
Method B. They could get those people to operate the drones under the guise of fighting for a good cause. Then they could threaten to use the army to kill anyone who opposes them. This doesn’t have to be sudden—it could happen quite gradually, as a series of small and oppressive steps and rules wrapped in doublespeak that eventually lead up to complete tyranny. If people don’t realize that most other people disagree with the tyrant, they will feel threatened and probably comply in order to survive.
Method C. Check out the Milgram experiment. Those people didn’t even need to be coerced to apply what they believed was lethal force. It’s a lot easier than you think.
Method D. If they can get just a small group to operate a small number of drones, they can use those drones to coerce a larger group into operating more drones. With the larger group operating drones, they can coerce even more people, and so on (a toy model of this compounding follows below).
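Here is a sketch of that compounding (entirely my own toy model; the starting force, growth factor, and population are made-up illustrative numbers, not estimates):

```python
def rounds_to_control(operators, growth_factor, population):
    """Toy model of Method D: each round, the current drone force
    coerces enough new people to multiply its size by growth_factor.
    Returns the number of rounds until the force matches the population."""
    rounds = 0
    while operators < population:
        operators *= growth_factor
        rounds += 1
    return rounds

# Hypothetical numbers: 100 initial operators, the force triples each
# round, and the target population is 300 million.
print(rounds_to_control(100, 3, 300_000_000))  # -> 14
```

The point is only that compounding makes the size of the initial group almost irrelevant: even a tiny seed group reaches population scale within a couple dozen rounds.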
Why will that work better with drones than with rifles?
This all depends on the ratio of the number of people it takes to operate the weapons to the number of people the weapons can subdue. Your perception appears to be that a Predator drone requires more people to run it than a fighter aircraft does. My perception is that the current head count doesn’t matter, because war technology is likely to be optimized well beyond where it is today. If the number of people required to build, maintain, and run killer robots can be pushed significantly below the number needed to field the same firepower any other way, then of course those robots make it easier to take over a population.
A high firepower-to-human-resources ratio means takeovers would work better.
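To make that ratio precise, here is a minimal formalization (the symbols are mine, introduced for illustration): let $h$ be the number of people needed to build, maintain, and operate one weapon system, $s$ the number of people one system can subdue, and $N$ the population. With $k$ systems a regime recruits $kh$ people and must subdue the remaining $N - kh$, so it needs $ks \ge N - kh$, which gives the required loyal fraction

\[
f = \frac{kh}{N} = \frac{h}{h+s} \approx \frac{h}{s} \qquad (s \gg h).
\]

Driving $h$ down through automation, or $s$ up through more firepower per system, shrinks $f$. That is exactly the high-firepower-to-human-resources worry.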
A lethally autonomous robot is just a weapon whose operator is well out of range at the moment of killing.
That’s not what Suarez says. Even if he’s wrong, do you deny that technology is likely to advance to the point where people can make robots capable of killing without a human making the decision? That’s what this conversation is about. Let’s not get mixed up over words the way Eliezer warns about in 37 ways words can be wrong. If we’re talking about robots that can kill without a human’s decision, those are a threat, and they could potentially reduce the human-resources-to-firepower ratio enough to threaten democracy. If you want to disagree with me about what words I should use to discuss this, that’s fine. In that case, though, please point me to credible sources so that I can read authoritative definitions.
and you still have to convince the people doing the work that it needs to be done.
What prevents these methods from being used with rifles? What is special about robots in this context?
Even if he’s wrong do you deny that it’s likely that technology will advance to the point where people can make robots capable of killing without a human making the decision?
No, we already have those. The decision to kill has nothing to do with it. The decisions about where to put the robot, its ammunition, its fuel, and everything else it needs, so that it’s in a position to make the decision to kill, are what we cannot yet make programmatically. You’re confusing tactics with strategy. You cannot run an army without strategic decision-makers, and robots will not be in a position to fill that role for, I would guess, at least twenty years.
Hitler.
Milgram experiment.
Number of sociopaths: 1 in 20.
Is rationality taught in school? No.
OK, if that’s so, how come we don’t already have oppressive societies being run with plain old rifles?