When killer robots are outlawed, only rogue nations will have massive drone armies.
An ideal outcome here would be if counter-drones have an advantage over drones, but it’s hard to see how this could obtain when counter-counter-drones should be in a symmetrical position over counter-drones. A second-best outcome would be no asymmetrical advantage of guerilla drone warfare, where the wealthiest nation clearly wins via numerical drone superiority combined with excellent enemy drone detection.
...you know, at some point the U.S. military is going to pay someone $10 million to conclude what I just wrote and they’re going to get it half-wrong. Sigh.
When killer robots are outlawed, only rogue nations will have massive drone armies.
That’s not necessarily a huge issue. If all the major powers agree not to have automated killing drones, and a few minor rogue states (say, Iran) ignore that and develop their own killer drones, then (at least in the near term) that probably won’t give them a big enough advantage over semi-autonomous drones controlled by major nations to be a big deal; an Iranian automated drone army probably still isn’t a match for the American military, which has too many other technological advantages.
On the other hand, if one or more major powers start building large numbers of fully autonomous drones, then everyone is going to. That definitely sounds like a scenario we should try to avoid, especially since that kind of arms race is something I could see eventually leading to unfriendly AI.
One issue is how easy it would be to secretly build an army of autonomous drones.
Developing the technology in secret is probably quite possible. Large-scale deployment, though, building a large army of them, would probably be quite hard to hide, especially from modern satellite photography and information technology.
Why? Just build a large number of non-autonomous drones and then upgrade the software at the last minute.
I suppose. Would that really give you enough of an advantage to be worth the diplomatic cost, though? In military terms, the difference between a semi-autonomous Predator drone and a fully autonomous Predator drone doesn’t seem all that significant.
Now, you could make a type of military unit that would really take advantage of being fully autonomous and gain a real edge, like a fully autonomous air-to-air fighter (not really practical with semi-autonomous drones because of the delayed reaction time), but that would seem much harder to hide.
I think that if you used an EMP as a stationary counter-drone, you would have an advantage over drones, in that most drones need some sort of power/control in order to keep flying; such counter-drones would be less portable, but more durable, than drones.
Is there not a way to shield combat drones from EMP weapons? I wouldn’t be surprised if they are already doing that.
Almost certainly, but the point that stationary counter-drones wouldn’t necessarily be in a symmetric situation to counter-counter-drones holds. Just swap in a different attack/defense method.
I see. The existence of the specific example caused me to interpret your post as being about a specific method, not a general strategy.
To the strategy, I say:
I’ve heard that defense is more difficult than offense. If the strategy you have defined is basically:
Original drones are offensive and counter-drones are defensive (to prevent them from attacking, presumably).
Then, if what I heard was correct, this would fail. If not at first, then likely over time, as technology advances and new offensive strategies are used with the drones.
I’m not sure how to check whether what I heard is true, but if defense worked that well, we wouldn’t have war.
This distinction is just flying/not-flying.
Offense has an advantage over defense in that defense needs to cover more possible offensive strategies than offense needs to be able to execute, and offense only needs one undefended plan in order to succeed.
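To make that asymmetry concrete, here is a minimal toy sketch (the independence assumption and all the numbers below are made up purely for illustration): if the defender covers each possible attack vector with some fixed probability, the attacker’s odds of finding at least one uncovered vector climb quickly as the number of vectors grows.

```python
# Toy model of the coverage asymmetry between offense and defense (illustrative only).
# Assumption: the defender independently covers each possible attack vector with
# probability p_cover; the attacker succeeds if at least one vector is left uncovered.

def attacker_success_probability(num_vectors: int, p_cover: float) -> float:
    """Probability that at least one of num_vectors attack vectors is uncovered."""
    return 1.0 - p_cover ** num_vectors

if __name__ == "__main__":
    for n in (1, 3, 10, 30):
        print(n, round(attacker_success_probability(n, p_cover=0.9), 3))
    # With 90% coverage per vector, the attacker's chance of finding a gap rises
    # from 0.1 with one vector to roughly 0.96 with thirty.
```

This obviously ignores attack costs and the defender’s ability to concentrate on the most likely vectors; it is only meant to show why “one undefended plan” matters.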
I suspect that not-flying is a pretty big advantage, even relative to offense/defense. At the very least, moving underground (and doing hydroponics or something for food) makes drones no more offensively helpful than missiles. Something that isn’t flying can also have more energy and matter supplying whatever it is doing than something that is flying, which allows for more exotic sensing and destructive capabilities.
Also, what’s offense and what’s defense? Anti-aircraft artillery (effective against drones? I think current air drones are optimized for use against low-tech enemies with few defenses) is a “defense” against ‘attack from the air’, but ‘heat-seeking AA missiles’, ‘flak guns’, ‘radar-guided AA missiles’ and ‘machine gun turrets’ are all “offenses” against combat aircraft, where the defenses are evasive maneuvers, altitude, armor, and chaff/flare decoys.
In WWI, defenses (machine guns and fortifications) were near-invincible, and killed attackers before they had time to retreat.
I think that current drones are pretty soft and might even be subject to hacking (I seem to remember something about unencrypted video?), but that would change as soon as somebody starts making real countermeasures.
...at some point the U.S. military is going to pay someone $10 million to conclude what I just wrote and they’re going to get it half-wrong.
Gain enough status to make that someone likely to be you.
That is not how government contracts work.
This took effort to parse. I think what you’re saying is:
If we’re going to have killer drones, there needs to be something to check their power. Example: counter-drones.
If we’re going to have counter-drones, we need to check the power of the counter-drones. Example: counter-counter-drones.
If counter-counter-drones can dominate the original drones, then counter-drones probably aren’t strong enough to check and balance the original drones. (Either because the counter-counter-drones will become the new original drones or because the counter-drones would be intentionally less powerful than the original drones so that the counter-counter-drones could counter them, making the counter-drones useless.)
(I want everyone to understand, so I’m writing it all out—let me know if I’m right.)
And you propose “no asymmetrical advantage of guerilla drone warfare… etc” which isn’t clear to me because I can interpret multiple meanings:
Trash the drones vs. counter-drones vs. counter-counter-drones idea?
Make sure drones don’t have an advantage at guerilla drone warfare?
Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?
What did your statement mean?
I think if we’re going to check the power of killing drones, we need to start by defining the sides using a completely different distinction than “drone / counter-drone”. Reading this gave me a different idea for checking and balancing killer robots and advanced weapons. I can see some potential cons to it, but I think it might be better than the alternatives. I’m curious what pros and cons you would think of.
(I want everyone to understand, so I’m writing it all out—let me know if I’m right.)
This isn’t quite what Eliezer said. In particular Eliezer wasn’t considering proposals or ‘what we need’ but instead making observations about scenarios and the implications they could have. The key point is the opening sentence:
When killer robots are outlawed, only rogue nations will have massive drone armies.
This amounts to dismissing Suarez’s proposal to make autonomous killer robots illegal as absurd. Unilaterally disarming oneself without first preventing potential threats from having those same weapons is crazy for all the reasons it usually is. Of course there is the possibility of using the threat of nuclear strike against anyone who creates killer robots but that is best considered a separate proposal and discussed on its own terms.
An ideal outcome here would be if counter-drones have an advantage over drones, but it’s hard to see how this could obtain when counter-counter-drones should be in a symmetrical position over counter-drones.
This isn’t saying we need drones (or counter-drones or counter-counter-drones). Rather, it is saying:
We don’t (yet) know the details of how the relevant technology will develop, or the relative strengths and weaknesses thereof.
It would be great if we discovered that, for some reason, it is easier to create drones that kill drones than drones that hurt people. That would mean that defence has an advantage when it comes to drone wars. That would result in less attacking (with drones), and so the drone risk would be much, much lower. (And a few other desirable implications...)
The above doesn’t seem likely. Bugger.
Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?
This wouldn’t be any form of formal agreement. Instead, people who are certain to lose tend to be less likely to get into fights. It amounts to the same thing.
This amounts to dismissing Suarez’s proposal to make autonomous killer robots illegal as absurd.
Yeah, I got that, and I think that his statement is easy to understand, so I’m not sure why you’re explaining that to me. In case you hadn’t noticed, I wrote out various cons for the legislation idea which were either identical in meaning to his statement or along the same lines as “making them illegal is absurd”. He got several points for that, and his comment was put at the top of the page. I wrote them first and was evidently ignored (by karma clickers if not by you).
This isn’t saying we need drones (or counter-drones or counter-counter-drones). Rather, it is saying:
I didn’t say that he was saying that either.
This wouldn’t be any form of formal agreement. Instead, people who are certain to lose tend to be less likely to get into fights.
I agree that a formal agreement would be meaningless here, but that people will make a cost-benefit analysis when choosing whether to fight is so obvious I didn’t think he was talking about that—it doesn’t seem like a thing that needs saying. Maybe what he meant was not “people will decide whether to fight based on whether it’s likely to succeed” or “people will make formal agreements” but something more like “using killer robots would increase the amount or quality of data we have in a significant way and this will encourage that kind of decision-making”.
What if that’s not the case, though? What if having a proliferation of deadly technologies makes it damned near impossible to figure out who is going to win? That could result in a lot more wars...
Now “the great filter” comes to mind again. :|
Do you know of anyone who has written about:
A. Whether it is likely for technological advancement to make it significantly more difficult to figure out who will win wars.
B. Whether it’s more likely for people to initiate wars when there’s a lot of uncertainty.
We might be lucky—maybe people are far less likely to initiate wars if it isn’t clear who will win… I’d like to read about this topic if there’s information on it.
Yeah, I got that, and I think that his statement is easy to understand so I’m not sure why you’re explaining that to me.
You wrote a comment explaining what Eliezer meant.
You were wrong about what Eliezer meant.
You explicitly asked to be told whether you were right.
I told you you were not right.
I made my own comment explaining what Eliezer’s words mean.
Maybe you already understood the first sentence of Eliezer’s comment and only misunderstood the later sentences. That’s great! By all means ignore the parts of my explanation that are redundant.
Note that when you make comments like this, including the request for feedback, getting a reply like mine is close to the best-case scenario. The alternatives would be finding you difficult to speak to and simply ignoring you, or dismissing what you have to say in the entire thread because this particular comment is a straw man.
The problem that you have with my reply seems to be caused by part of it being redundant for the purpose of facilitating your understanding. But in cases where there are obvious and verifiable failures of communication, a little redundancy is a good thing. I cannot realistically be expected to perfectly model which parts of Eliezer’s comment you interpreted correctly and which parts you did not. After all, that task is (strictly) more difficult than the task of interpreting Eliezer’s comment correctly. The best I can do is explain Eliezer’s comment in my own words, and you can take or leave each part of it.
I wrote them first and was evidently ignored (by karma clickers if not by you).
It is frustrating not being rewarded for one’s contributions when others are.
I didn’t say that he was saying that either.
Let me rephrase. The following quote is not something Eliezer said:
If we’re going to have killer drones, there needs to be something to check their power. Example: counter-drones.
I agree that a formal agreement would be meaningless here, but that people will make a cost-benefit analysis when choosing whether to fight is so obvious I didn’t think he was talking about that—it doesn’t seem like a thing that needs saying.
Eliezer didn’t say it. He assumed it (and/or various loosely related considerations) when he made his claim. I needed to say it because rather than assuming a meaning like this ‘obvious’ one, you assumed that it was a proposal:
Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?
What if that’s not the case, though? What if having a proliferation of deadly technologies makes it damned near impossible to figure out who is going to win? That could result in a lot more wars...
Yes. That would be bad. Eliezer is making the observation that if technology evolves in such a way (and it seems likely), then the outcome would be less desirable than if, for some (somewhat surprising) technical reason, the new dynamic did not facilitate asymmetric warfare.
Now “the great filter” comes to mind again.
Yes. Good point.
Do you know of anyone who has written about:
A. Whether it is likely for technological advancement to make it significantly more difficult to figure out who will win wars. B. Whether it’s more likely for people to initiate wars when there’s a lot of uncertainty.
I do not know, but am interested.
Hmm. I wonder if this situation is comparable to any of the situations we know about.
To clarify my questions:
When humans feel confused about whether they’re likely to win a deadly conflict that they would hypothetically initiate, are they more likely to react to that confusion by acknowledging it and avoiding conflict, or by being overconfident / denying the risk / going irrational and taking the gamble?
If humans are normally more likely to acknowledge the confusion, what circumstances may make them take a gamble on initiating war?
When humans feel confused about whether a competitor has enough power to destroy them, do they react by staying peaceful? The “obvious” answer to this is yes, but it’s not good to feel certain about things immediately before even thinking about them. For example: if animals are backed into a corner by a human, they fight, even despite the obvious size difference. There might be certain situations where a power imbalance triggers the “backed into a corner” instinct. For some ideas about what those situations might be, I’d wonder about situations in which people over-react to confusion by “erring on the side of caution” (deciding that the opponent is a threat) and then initiating war to take advantage of the element of surprise as part of an effort at self-preservation. I would guess that whether people initiate war in this scenario probably has a lot to do with how big the element-of-surprise advantage is and how quickly they can kill their opponent.
Does the imbalance between defense and offense grow over time? If so, would people be more or less likely to initiate conflict if defense essentially didn’t exist?
Now I’m thinking about whether we have data that answers these or similar questions.
I think a more important question than “how likely am I to win this conflict?” is “will my odds increase or decrease by waiting?”
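A minimal sketch of the decision structure these questions point at (all parameters below are hypothetical, and it ignores the “do my odds improve by waiting?” question entirely): an actor initiates a conflict when the expected value looks positive, and an overconfidence bias can flip that decision even when the true odds say to stay home.

```python
# Toy model: initiate a conflict only if the (possibly overconfident) expected value
# is positive. All parameters are hypothetical; this illustrates the decision
# structure under uncertainty, not any claim about real wars.

def expected_value(p_win: float, gain: float, loss: float) -> float:
    """Expected payoff of initiating a conflict with win probability p_win."""
    return p_win * gain - (1.0 - p_win) * loss

def decides_to_fight(true_p_win: float, gain: float, loss: float,
                     overconfidence: float = 0.0) -> bool:
    """An actor whose believed win probability is inflated by `overconfidence`."""
    believed_p_win = min(1.0, true_p_win + overconfidence)
    return expected_value(believed_p_win, gain, loss) > 0

if __name__ == "__main__":
    # A calibrated actor with a 40% real chance of winning stays peaceful here...
    print(decides_to_fight(true_p_win=0.4, gain=100, loss=100))                      # False
    # ...but the same actor with a 0.2 overconfidence bias initiates the war.
    print(decides_to_fight(true_p_win=0.4, gain=100, loss=100, overconfidence=0.2))  # True
```

Whether real decision-makers behave anything like this is exactly the empirical question above.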
But in cases where there are obvious and verifiable failures of communication, a little redundancy is a good thing.
Sorry for not seeing this intention. Thanks for your efforts.
because this particular comment is a straw man
Do you mean to say that I intentionally attacked someone with a misinterpretation (whether intentional or unintentional) of their words? Since my intention with the comment referenced just prior to your statement here was to clarify, and in no way to attack, I am not sure what comment you’re referring to.