This took effort to parse. I think what you’re saying is:
If we’re going to have killer drones, there needs to be something to check their power. Example: counter-drones.
If we’re going to have counter-drones, we need to check the power of the counter-drones. Example: counter-counter-drones.
If counter-counter-drones can dominate the original drones, then counter-drones probably aren’t strong enough to check and balance the original drones. (Either because the counter-counter-drones will become the new original drones or because the counter-drones would be intentionally less powerful than the original drones so that the counter-counter-drones could counter them, making the counter-drones useless.)
(I want everyone to understand, so I’m writing it all out—let me know if I’m right.)
And you propose “no asymmetrical advantage of guerilla drone warfare… etc”, which isn’t clear to me because I can see multiple possible meanings:
Trash the drones vs. counter-drones vs. counter-counter-drones idea?
Make sure drones don’t have an advantage at guerilla drone warfare?
Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?
What did your statement mean?
I think if we’re going to check the power of killing drones, we need to start by defining the sides using a completely different distinction from “drone / counter-drone”. Reading this gave me a different idea for checking and balancing killer robots and advanced weapons. I can see some potential cons to it, but I think it might be better than the alternatives. I’m curious about what pros and cons you would think of.
This isn’t quite what Eliezer said. In particular, Eliezer wasn’t considering proposals or ‘what we need’ but was instead making observations about scenarios and the implications they could have. The key point is the opening sentence:
When killer robots are outlawed, only rogue nations will have massive drone armies.
This amounts to dismissing Suarez’s proposal to make autonomous killer robots illegal as absurd. Unilaterally disarming oneself without first preventing potential threats from having those same weapons is crazy for all the reasons it usually is. Of course there is the possibility of using the threat of nuclear strike against anyone who creates killer robots, but that is best considered a separate proposal and discussed on its own terms.
An ideal outcome here would be if counter-drones have an advantage over drones, but it’s hard to see how this could obtain when counter-counter-drones should be in a symmetrical position over counter-drones.
This isn’t saying we need drones (or counter or counter-counter drones). It is rather saying:
We don’t (yet) know the details of how the relevant technology will develop or the relative strengths and weaknesses thereof.
It would be great if we discovered that for some reason it is easier to create drones that kill drones than drones that hurt people. That would mean that defence has an advantage when it comes to drone wars. That would result in less attacking (with drones), and so the drone risk would be much, much lower. (And a few other desirable implications...)
The above doesn’t seem likely. Bugger.
Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?
This wouldn’t be any form of formal agreement. Instead, people who are certain to lose tend to be less likely to get into fights. It amounts to the same thing.
This amounts to dismissing Suarez’s proposal to make autonomous killer robots illegal as absurd.
Yeah, I got that, and I think that his statement is easy to understand so I’m not sure why you’re explaining that to me. In case you hadn’t noticed, I wrote out various cons for the legislation idea which were either identical in meaning to his statement or along the same lines as “making them illegal is absurd”. He got several points for that, and his comment was put at the top of the page. I wrote them first and was evidently ignored (by karma clickers if not by you).
This isn’t saying we need drones (or counter or counter-counter drones). It is rather saying:
I didn’t say that he was saying that either.
This wouldn’t be any form of formal agreement. Instead, people who are certain to lose tend to be less likely to get into fights.
I agree that a formal agreement would be meaningless here, but that people will make a cost-benefit analysis when choosing whether to fight is so obvious I didn’t think he was talking about that—it doesn’t seem like a thing that needs saying. Maybe what he meant was not “people will decide whether to fight based on whether it’s likely to succeed” or “people will make formal agreements” but something more like “using killer robots would increase the amount or quality of data we have in a significant way and this will encourage that kind of decision-making”.
What if that’s not the case, though? What if having a proliferation of deadly technologies makes it damned near impossible to figure out who is going to win? That could result in a lot more wars...
Now “the great filter” comes to mind again. :|
Do you know of anyone who has written about:
A. Whether it is likely for technological advancement to make it significantly more difficult to figure out who will win wars.
B. Whether it’s more likely for people to initiate wars when there’s a lot of uncertainty.
We might be lucky—maybe people are far less likely to initiate wars if it isn’t clear who will win… I’d like to read about this topic if there’s information on it.
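(As an aside on the cost-benefit point above: one rough way to make it concrete is a toy bargaining sketch. The snippet below is only an illustration under made-up assumptions; the function name, the “stakes worth 1.0” framing, and all the numbers are inventions for the example, not anything Eliezer or anyone in this thread proposed.)

```python
# Toy sketch: each side weighs the expected value of fighting against a
# peaceful split of stakes worth 1.0. With accurate, shared estimates a
# peaceful split always exists (given positive costs); if uncertainty lets
# both sides overestimate their own chances, the peaceful range can vanish.

def peaceful_split_exists(p_a, p_b, cost_a, cost_b):
    """Return True if some division of the stakes leaves both sides at
    least as well off as their own expected value of fighting.

    p_a, p_b       -- each side's *own* estimate of its chance of winning
    cost_a, cost_b -- each side's expected cost of fighting (share of stakes)
    """
    # Side A accepts any share s with s >= p_a - cost_a.
    # Side B accepts any share with (1 - s) >= p_b - cost_b.
    # Both can be satisfied iff (p_a - cost_a) + (p_b - cost_b) <= 1.
    return (p_a - cost_a) + (p_b - cost_b) <= 1.0


# Accurate, shared estimates (they sum to 1): a peaceful split exists.
print(peaceful_split_exists(p_a=0.7, p_b=0.3, cost_a=0.1, cost_b=0.1))  # True

# Mutual overconfidence under uncertainty: no split satisfies both sides.
print(peaceful_split_exists(p_a=0.8, p_b=0.7, cost_a=0.1, cost_b=0.1))  # False
```

On those assumptions, question B above is exactly the interesting case: fighting becomes possible precisely when uncertainty lets both sides be optimistic at once.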
Yeah, I got that, and I think that his statement is easy to understand so I’m not sure why you’re explaining that to me.
You wrote a comment explaining what Eliezer meant.
You were wrong about what Eliezer meant.
You explicitly asked to be told whether you were right.
I told you you were not right.
I made my own comment explaining what Eliezer’s words mean.
Maybe you already understood the first sentence of Eliezer’s comment and only misunderstood the later sentences. That’s great! By all means ignore the parts of my explanation that are redundant.
Note that when you make comments like this, including the request for feedback, getting a reply like mine is close to the best-case scenario. The alternatives would be finding you difficult to speak to and just ignoring you, or dismissing what you have to say in the entire thread because this particular comment is a straw man.
The problem that you have with my reply seems to be caused by part of it being redundant for the purpose of facilitating your understanding. But in cases where there are obvious and verifiable failures of communication, a little redundancy is a good thing. I cannot realistically be expected to perfectly model which parts of Eliezer’s comment you interpreted correctly and which parts you did not. After all, that task is (strictly) more difficult than the task of interpreting Eliezer’s comment correctly. The best I can do is explain Eliezer’s comment in my own words, and you can take or leave each part of it.
I wrote them first and was evidently ignored (by karma clickers if not by you).
It is frustrating not being rewarded for one’s contributions when others are.
I didn’t say that he was saying that either.
Let me rephrase. The following quote is not something Eliezer said:
If we’re going to have killer drones, there needs to be something to check their power. Example: counter-drones.
I agree that a formal agreement would be meaningless here, but that people will make a cost-benefit analysis when choosing whether to fight is so obvious I didn’t think he was talking about that—it doesn’t seem like a thing that needs saying.
Eliezer didn’t say it. He assumed it (and/or various loosely related considerations) when he made his claim. I needed to say it because rather than assuming a meaning like this ‘obvious’ one, you assumed that it was a proposal:
Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?
What if that’s not the case, though? What if having a proliferation of deadly technologies makes it damned near impossible to figure out who is going to win? That could result in a lot more wars...
Yes. That would be bad. Eliezer is making the observation that if technology evolves in such a way (and it seems likely), then that would be less desirable than if, for some somewhat surprising technical reason, the new dynamic did not facilitate asymmetric warfare.
Now “the great filter” comes to mind again.
Yes. Good point.
Do you know of anyone who has written about:
A. Whether it is likely for technological advancement to make it significantly more difficult to figure out who will win wars. B. Whether it’s more likely for people to initiate wars when there’s a lot of uncertainty.
I do not know, but am interested.
Hmm. I wonder if this situation is comparable to any of the situations we know about.
To clarify my questions:
When humans feel confused about whether they’re likely to win a deadly conflict that they would hypothetically initiate, are they more likely to react to that confusion by acknowledging it and avoiding conflict, or by being overconfident / denying the risk / going irrational and taking the gamble?
If humans are normally more likely to acknowledge the confusion, what circumstances may make them take a gamble on initiating war?
When humans feel confused about whether a competitor has enough power to destroy them, do they react by staying peaceful? The “obvious” answer to this is yes, but it’s not good to feel certain about things immediately before even thinking about them. For example: if animals are backed into a corner by a human, they fight, despite the obvious size difference. There might be certain situations where a power imbalance triggers the “backed into a corner” instinct. For some ideas about what those situations might be, I’d look at situations in which people over-react to confusion by “erring on the side of caution” (deciding that the opponent is a threat) and then initiating war to take advantage of the element of surprise as part of an effort at self-preservation. I would guess that whether people initiate war in this scenario has a lot to do with how big the element-of-surprise advantage is and how quickly they can kill their opponent.
Does the imbalance between defense and offense grow over time? If so, would people be more or less likely to initiate conflict if defense essentially didn’t exist?
Now I’m thinking about whether we have data that answers these or similar questions.
But in cases where there are obvious and verifiable failures of communication, a little redundancy is a good thing.
Sorry for not seeing this intention. Thanks for your efforts.
because this particular comment is a straw man
Do you mean to say that I intentionally attacked someone with an (intentional or unintentional) misinterpretation of their words? Since my intention with the comment referenced just prior to your statement here was to clarify, and in no way to attack, I’m not sure what comment you’re referring to.
I think a more important question than “how likely am I to win this conflict?” is “will my odds increase or decrease by waiting?”
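(Purely as an illustration of that question, and of the element-of-surprise calculus raised a few comments up: the sketch below compares a crude “strike now with a surprise bonus” estimate against “strike later, after one’s odds have drifted”. The function name and every number are hypothetical assumptions, not a claim about how anyone actually decides.)

```python
# Illustrative only: all names and numbers here are made-up assumptions.
# The point is just that the decision turns on how a first-strike
# (surprise) bonus compares with how one's odds are expected to drift
# while waiting.

def strike_now_looks_better(base_win_prob, surprise_bonus,
                            drift_per_year, years_waited):
    """Compare a crude win-probability estimate for striking immediately
    (with the surprise bonus) against striking after years_waited years,
    during which one's relative position drifts by drift_per_year."""
    now = min(1.0, base_win_prob + surprise_bonus)
    later = min(1.0, max(0.0, base_win_prob + drift_per_year * years_waited))
    return now > later


# Declining relative power plus a big surprise advantage favours striking now...
print(strike_now_looks_better(0.5, surprise_bonus=0.2,
                              drift_per_year=-0.05, years_waited=3))  # True

# ...while improving odds favour waiting, even given some surprise bonus.
print(strike_now_looks_better(0.5, surprise_bonus=0.05,
                              drift_per_year=0.05, years_waited=3))  # False
```

Under these made-up numbers the direction of the drift does most of the work, which is the intuition behind asking whether one’s odds improve or worsen by waiting.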