I’d give the following announcement: “People of the UK, please vote your government out of office and shut down your nuclear program. If you fail to do so, we will start nuking the following sites in sequence, one per day, starting [some date].” Well, I’d go through some secret diplomacy first, but that would be my endgame if all else failed. Some backward induction should convince the UK government not to start the nuclear program in the first place.
I can think, straight away, of four or five reasons why this would have been very much the wrong thing to do.
You make an enemy of your biggest allies. Nukes or no, the US has never been more powerful than the rest of the world put together.
You don’t react to coming out of one Cold War by initiating another.
This strategy is pointless unless you plan to follow through. The regime that laid down that threat would either be strung up when they launched, or voted straight out when they didn’t.
Mutually assured destruction was what stopped nuclear war happening. Setting one country up as the Guardian of the Nukes is stupid, even if you are that country. I’m not a yank, but I believe this sort of idea is pretty big in the constitution.
Attacking London is a shortcut to getting a pounding. This one’s just conjecture.
Basically he was about ruthlessness for the good of humanity.
Yeah I think the clue is in there. Better to be about the good of humanity, and ruthless if that’s what’s called for. Setting yourself up as ‘the guy who has the balls to make the tough decisions’ usually denotes you as a nutjob. Case in point: von Neumann suggesting launching was the right strategy. I don’t think anyone would argue today that he was right, though back then the decision must have seemed pretty much impossible to make.
Case in point: von Neumann suggesting launching was the right strategy. I don’t think anyone would argue today that he was right, though back then the decision must have seemed pretty much impossible to make.
Survivorship bias. There were some very near misses (Cuban Missile Crisis, Stanislav Petrov, etc.), and it seems reasonable to conclude that a substantial fraction of the Everett branches that came out of our 1946 included a global thermonuclear war.
I’m not willing to conclude that von Neumann was right, but the fact that we avoided nuclear war isn’t clear proof he was wrong.
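A minimal way to make this survivorship-bias point quantitative, purely as an illustration: compound a per-year risk of escalation over the length of the Cold War. The per-year figures below are invented placeholders, not estimates from anyone in this discussion.

```python
# Illustrative sketch only: how modest per-year risks compound over a long
# standoff. The per-year probabilities are hypothetical placeholders.
cold_war_years = 45  # roughly 1946-1991

for p_per_year in (0.005, 0.01, 0.02, 0.05):
    p_no_war = (1 - p_per_year) ** cold_war_years
    print(f"per-year risk {p_per_year:.1%} -> "
          f"chance of at least one exchange over {cold_war_years} years: {1 - p_no_war:.0%}")
```

Even a 1% annual risk compounds to roughly a one-in-three chance over 45 years, which is the sense in which "we survived" is only weak evidence about the strategy.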
If the allies are rational, they should agree that it’s in their interest to establish this strategy. All-out nuclear war is everyone’s enemy.
This strikes me as a variant of the ultimatum game. The allies would have to accept a large asymmetry of power. If even one of them rejects the ultimatum you’re stuck with the prospect of giving up your strategy (having burned most or all of your political capital with other nations), or committing mass murder.
When you add in the inability of governments to make binding commitments, this doesn’t strike me as a viable strategy.
The UK bomb was developed with the express purpose of providing independence from the US. If the US could keep the USSR nuke-free there’d be less need for a UK bomb. Also, it’s possible that the US could tone down its anti-imperialist rhetoric/covert funding so as to not threaten the Empire.
I think that, by the time you’ve reached the point where you’re about to kill millions for the sake of the greater good, you’d do well to consider all the ethical injunctions this would violate. (Especially given all the different ways this could go wrong that UnholySmoke could come up with off the top of his head.)
Kaj, I was discussing a hypothetical nuclear strategy. We can’t discuss any such strategy without involving the possibility of killing millions. Do the ethical injunctions imply that such discussions shouldn’t occur?
Recall that MAD required that the US commit itself to destroy the Soviet Union if it detected that the USSR launched their nuclear missiles. Does MAD also violate ethical injunctions? Should it also not have been discussed? (How many different ways could things have gone wrong with MAD?)
Do the ethical injunctions imply that such discussions shouldn’t occur?
Of course not. I’m not saying the strategy shouldn’t be discussed, I’m saying that you seem to be expressing greater certainty of your proposed approach being correct than would be warranted.
(I wouldn’t object to people discussing math, but I would object if somebody thought 2 + 2 = 5.)
Recall that MAD required that the US commit itself to destroy the Soviet Union if it detected that the USSR launched their nuclear missiles
And the world as we know it is still around because Stanislav Petrov treated that launch detection as a false alarm rather than passing it up the chain, insisting the US couldn’t possibly be stupid enough to actually launch that sort of attack.
I would pray that the US operators were equally sensible, but maybe they just got lucky and never had a technical glitch threaten the existence of humanity.
The entire civilised world (which at this point does not include anyone who is still a member of the US government) is in uproar. Your attempts at secret diplomacy are leaked immediately. The people of the UK make tea in your general direction. Protesters march on the White House.
When do you push the button, and how will you keep order in your own country afterwards?
What I’m really getting at here is that your bland willingness to murder millions of non-combatants of a friendly power in peacetime because they do not accede to your empire-building unfits you for inclusion in the human race.
Also, that it’s easy to win these games in your imagination. You just have to think, I will do this, and then my opponent must rationally do that. You have a completely watertight argument. Then your opponent goes and does something else. It does not matter that you followed the rules of the logical system if the system itself is inconsistent.
So says the man from his comfy perch in an Everett branch that survived the Cold War.
What I’m really getting at here is that [a comment you made on LW] unfits you for inclusion in the human race.
Downvoted for being one of the most awful statements I have ever seen on this site, far and away the most awful to receive so many upvotes. What the fuck, people.
I doubt RichardKennaway believes Wei_Dai is unfit for inclusion in the human race. What he was saying, and what he received upvotes for, is that anyone who’s blandly willing to murder millions of non-combatants of a friendly power in peacetime because they do not accede to empire-building is unfit for inclusion in the human race—and he’s right, that sort of person should not be fit for inclusion in the human race. A comment on LW is not the same as that bland willingness to slaughter, and you do yourself no favours by incorrectly paraphrasing it as such.
anyone who’s blandly willing to murder millions of non-combatants of a friendly power in peacetime because they do not accede to empire-building is unfit for inclusion in the human race
You do realize that the point of my proposed strategy was to prevent the destruction of Earth (from a potential nuclear war between the US and USSR), and not “empire building”?
I don’t understand why Richard and you consider MAD acceptable, but my proposal beyond the pale. Both of you use the words “friendly power in peacetime”, which must be relevant somehow but I don’t see how. Why would it be ok (i.e., fit for inclusion in the human race) to commit to murdering millions of non-combatants of an enemy power in wartime in order to prevent nuclear war, but not ok to commit to murdering millions of non-combatants of a friendly power in peacetime in service of the same goal?
A comment on LW is not the same as that bland willingness to slaughter, and you do yourself no favours by incorrectly paraphrasing it as such.
I also took Richard’s comment personally (he did say “your bland willingness”, emphasis added), which is probably why I didn’t respond to it.
The issue seems to be that nuking a friendly power in peacetime feels to people pretty much like a trolley problem where you need to shove the fat person. In this particular case, since it isn’t a hypothetical, the situation has been made all the more complicated by actual discussion of the historical and current geopolitics surrounding it (which essentially amounts to trying to find a clever solution to the trolley problem, or arguing that the fat person wouldn’t weigh enough). The reaction is against your apparent strong consequentialism, along with the fact that your strategy wouldn’t actually work given the geopolitical situation. It might be interesting to pose an explicitly hypothetical geopolitical situation in which this would work and see how people respond.
Well, this is evidence against using second-person pronouns to avoid “he/she”.
He could easily have said “bland willingness to” rather than “your bland willingness”, so that doesn’t seem to be an example where a pronoun is necessary.
No, it’s an example where using “you” has caused someone to take something personally. Given that the “he/she” problem is that some people take it personally, I haven’t solved the problem, I’ve just shifted it onto a different group of people.
I was commenting on what he said, not guessing at his beliefs.
I don’t think you’ve made a good case (any case) for your assertion concerning who is and is not to be included in our race. And it’s not at all obvious to me that Wei Dai is wrong. I do hope that my lack of conviction on this point doesn’t render me unfit for existence.
Anyone willing to deploy a nuclear weapon has a “bland willingness to slaughter”. Anyone employing MAD has a “bland willingness to destroy the entire human race”.
I suspect that you have no compelling proof that Wei Dai’s hypothetical nuclear strategy is in fact wrong, let alone one compelling enough to justify the type of personal attack leveled by RichardKennaway. Would you also accuse Eliezer of displaying a “bland willingness to torture someone for 50 years” and sentence him to exclusion from humanity?
What I was saying was that “horrendous act” is not the same as “comment advising horrendous act in a hypothetical situation”. You conflated the two in paraphrasing RichardKennaway’s comment as “comment advising horrendous act in hypothetical situation unfits you for inclusion in the human race”, when what he was saying was “horrendous act unfits you for inclusion in the human race”.
I was rather intemperate, and on a different day maybe I would have been less so; or maybe I wouldn’t. I am sorry that I offended Wei Dai.
But then, Wei Dai’s posting was intemperate, as is your comment. I mention this not to excuse mine, just to point out how easily this happens. This may be partly the dynamics of the online medium, but in the present case I think it is also because we are dealing in fantasy here, and fantasy always has to be more extreme than reality, to make up for its own unreality.
You compare the problem to Eliezer’s one of TORTURE vs SPECKS, but there is an important difference between them. TORTURE vs SPECKS is fiction, while Wei Dai spoke of an actual juncture in history in living memory, and actions that actually could have been taken.
What is the TORTURE vs SPECKS problem? The formulation of the problem is at that link, but what sort of thing is this problem? Given the followup posting the very next day, it seems likely to me that the intention was to manifest people’s reactions to the problem. Perhaps it is also a touchstone, to see who has and who has not learned the material on which it stands.

What it is not is a genuine problem which anyone needs to solve as anything but a thought experiment. TORTURE vs SPECKS is not going to happen. Other tradeoffs between great evil to one and small evils to many do happen; this one never will. While 50 years of torture is, regrettably, conceivably possible here and now in the real world, and may be happening to someone, somewhere, right now, there is no possibility of 3^^^3 specks.

Why 3^^^3? Because that is intended to be a number large enough to produce the desired conclusion. Anyone whose objection is that it isn’t a big enough number, besides manifesting a poor grasp of its magnitude, can simply add another uparrow. The problem is a fictional one, and as such exhibits the reverse meta-causality characteristic of fiction: 3^^^3 is in the problem because the point of the problem is for the solution to be TORTURE; that TORTURE is the solution is not caused by an actual possibility of 3^^^3 specks.
In another posting a year later, Eliezer speaks of ethical rules of the sort that you just don’t break, as safety rails on a cliff he didn’t see. This does not sit well with the TORTURE vs SPECKS material, but it doesn’t have to: TORTURE vs SPECKS is fiction and the later posting is about real (though unspecified) actions.
So, the Cold War. Wei Dai would have the US after WWII threatening to nuke any country attempting to develop or test nuclear weapons. To the scenario of later discovering that (for example) the UK has a well-developed covert nuclear program, he responds:
I’d give the following announcement: “People of the UK, please vote your government out of office and shut down your nuclear program. If you fail to do so, we will start nuking the following sites in sequence, one per day, starting [some date].” Well, I’d go through some secret diplomacy first, but that would be my endgame if all else failed. Some backward induction should convince the UK government not to start the nuclear program in the first place.
It should, should it? And that, in Wei’s mind, is adequate justification for pressing the button to kill millions of people for not doing what he told them to do. Is this rationality, or the politics of two-year-olds with nukes?
I seem to be getting intemperate again.
It’s a poor sort of rationality that only works against people rational enough to lose. Or perhaps they can be superrational and precommit to developing their programme regardless of what threats you make? Then rationally, you must see that it would therefore be futile to make such threats. And so on. How’s TDT/UDT with self-modifying agents modelling themselves and each other coming along?
This is fantasy masquerading as rationality. I stand by what I said back then:
[I]t’s easy to win these games in your imagination. You just have to think, I will do this, and then my opponent must rationally do that. You have a completely watertight argument. Then your opponent goes and does something else. It does not matter that you followed the rules of the logical system if the system itself is inconsistent.
To make these threats, you must be willing to actually do what you have said you will do if your enemy does not surrender. The moment you think “but rationally he has to surrender so I won’t have to do this”, you are making an excuse for yourself to not carry it out. Whatever belief you can muster that you would will evaporate like dew in the desert when the time comes.
But then, Wei Dai’s posting was intemperate, as is your comment. I mention this not to excuse mine, just to point out how easily this happens.
Using the word “intemperate” in this way is a remarkable dodge. Wei Dai’s comment was entirely within the scope of the (admittedly extreme) hypothetical under discussion. Your comment contained a paragraph composed solely of vile personal insult and slanted misrepresentation of Wei Dai’s statements. The tone of my response was deliberate and quite restrained relative to how I felt.
This may be partly the dynamics of the online medium, but in the present case I think it is also because we are dealing in fantasy here, and fantasy always has to be more extreme than reality, to make up for its own unreality.
Huh? You’re “not excusing” the extremity of your interpersonal behavior on the grounds that the topic was fictional, and fiction is more extreme than reality? And then go on to explain that you don’t behave similarly toward Eliezer with respect to his position on TORTURE vs SPECKS because that topic is even more fictional?
Is this rationality, or the politics of two-year-olds with nukes?
Is this a constructive point, or just more gesturing?
As for the rest of your comment: Thank you! This is the discussion I wanted to be reading all along. Aside from a general feeling that you’re still not really trying to be fair, my remaining points are mercifully non-meta. To dampen political distractions, I’ll refer to the nuke-holding country as H, and a nuke-developing country as D.
You’re very focused on Wei Dai’s statement about backward induction, but I think you’re missing a key point: His strategy does not depend on D reasoning the way he expects them to, it’s just heavily optimized for this outcome. I believe he’s right to say that backward induction should convince D to comply, in the sense that it is in their own best interest to do so.
Or perhaps they can be superrational and precommit to developing their programme regardless of what threats you make? Then rationally, you must see that it would therefore be futile to make such threats.
Don’t see how this follows. If both countries precommit, D gets bombed until it halts or otherwise cannot continue development. While this is not H’s preferred outcome, H’s entire strategy is predicated on weighing irreversible nuclear proliferation and its consequences more heavily than the millions of lives lost in the event of a suicidal failure to comply. In other words, D doesn’t wield sufficient power in this scenario to affect H’s decision, while H holds sufficient power to skew local incentives toward mutually beneficial outcomes.
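To make the backward-induction claim concrete, here is a minimal sketch of the H/D game under the assumption just stated, that H weighs proliferation as worse for itself than carrying out the threat. The payoff numbers are invented purely for illustration; this is a sketch, not a model of the actual Cold War.

```python
# Minimal backward-induction sketch of the H/D game, under the assumption that
# H weighs proliferation as worse for itself than carrying out the threat.
# All payoff numbers are invented for illustration.

# D moves first: develop or comply. If D develops, H moves: bomb or back down.
payoffs = {
    # (D's move, H's move): (payoff to D, payoff to H)
    ("comply",  None):        (   0,   0),  # status quo, no proliferation
    ("develop", "back down"): (   5, -10),  # proliferation: H's worst outcome, by assumption
    ("develop", "bomb"):      (-100,  -5),  # the threat carried out
}

def h_best_reply():
    # Given that D has developed, H picks whichever reply it prefers.
    return max(["bomb", "back down"], key=lambda h: payoffs[("develop", h)][1])

def d_best_move():
    # D anticipates H's reply (backward induction) and picks its best move.
    h = h_best_reply()
    return "develop" if payoffs[("develop", h)][0] > payoffs[("comply", None)][0] else "comply"

print("H's reply if D develops:", h_best_reply())  # -> bomb, given these payoffs
print("D's best move:", d_best_move())             # -> comply
```

With these assumed payoffs the threat is sequentially rational, so D’s best reply is to comply; the exchange below turns on what happens when that payoff assumption is rejected.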
Speaking of nuclear proliferation and its consequences, you’ve been pretty silent on this topic considering that preventing proliferation is the entire motivation for Wei Dai’s strategy. Talking about “murdering millions” without at least framing it alongside the horror of proliferation is not productive.
How are you going to launch those nukes, anyway?
Practical considerations like this strike me as by far the best arguments against extreme, theory-heavy strategies. Messy real-world noise can easily make a high-stakes gambit more trouble than it’s worth.
Is this rationality, or the politics of two-year-olds with nukes?
Is this a constructive point, or just more gesturing?
It is a gesture concluding a constructive point.
You’re very focused on Wei Dai’s statement about backward induction, but I think you’re missing a key point: His strategy does not depend on D reasoning the way he expects them to, it’s just heavily optimized for this outcome. I believe he’s right to say that backward induction should convince D to comply, in the sense that it is in their own best interest to do so.
This is a distinction without a difference. If H bombs D, H has lost (and D has lost more).
If both countries precommit, D gets bombed until it halts or otherwise cannot continue development.
That depends on who precommits “first”. That’s a problematic concept for rational actors who have plenty of time to model each other’s possible strategies in advance of taking action. If H, without even being informed of it by D, considers this possible precommitment strategy of D, is it still rational for H to persist and threaten D anyway? Or perhaps H can precommit to ignoring such a precommitment by D? Or should D already have anticipated H’s original threat and backed down in advance of the threat ever having been made? I am reminded of the Forbidden Topic. Counterfactual blackmail isn’t just for superintelligences. As I asked before, does the decision theory exist yet to handle self-modifying agents modelling themselves and others, demonstrating how real actions can arise from this seething mass of virtual possibilities?
Then also, in what you dismiss as “messy real-world noise”, there may be a lot of other things D might do, such as fomenting insurrection in H, or sharing their research with every other country besides H (and blaming foreign spies), or assassinating H’s leader, or doing any and all of these while overtly appearing to back down.
The moment H makes that threat, the whole world is H’s enemy. H has declared a war that it hopes to win by the mere possession of overwhelming force.
Speaking of nuclear proliferation and its consequences, you’ve been pretty silent on this topic considering that preventing proliferation is the entire motivation for Wei Dai’s strategy. Talking about “murdering millions” without at least framing it alongside the horror of proliferation is not productive.
I look around at the world since WWII and fail to see this horror. I look at Wei Dai’s strategy and see the horror. loqi remarked about Everett branches, but imagining the measure of the wave function where the Cold War ended with nuclear conflagration fails to convince me of anything.
This is a distinction without a difference. If H bombs D, H has lost
This assumption determines (or at least greatly alters) the debate, and you need to make a better case for it. If H really “loses” by bombing D (meaning H considers this outcome less preferable than proliferation), then H’s threat is not credible, and the strategy breaks down, no exotic decision theory necessary. Looks like a crucial difference to me.
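The same toy game makes this concrete: flip H’s assumed preference so that carrying out the threat is worse for H than proliferation, and the backward-induction result reverses. Payoffs are again invented for illustration only.

```python
# Same toy game as in the earlier sketch, but now H (hypothetically) prefers
# proliferation to carrying out the threat. Payoffs remain invented numbers.
payoffs = {
    ("comply",  None):        (   0,   0),
    ("develop", "back down"): (   5,  -5),   # proliferation is now H's lesser evil
    ("develop", "bomb"):      (-100, -10),   # bombing is now H's worst outcome
}

h_reply = max(["bomb", "back down"], key=lambda h: payoffs[("develop", h)][1])
d_move = "develop" if payoffs[("develop", h_reply)][0] > payoffs[("comply", None)][0] else "comply"

print(h_reply)  # -> back down: the threat is not credible
print(d_move)   # -> develop: D calls the bluff
```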
That depends on who precommits “first”. [...]
This entire paragraph depends on the above assumption. If I grant you that assumption and (artificially) hold constant H’s intent to precommit, then we’ve entered the realm of bluffing, and yes, the game tree gets pathological.
loqi remarked about Everett branches, but imagining the measure of the wave function where the Cold War ended with nuclear conflagration fails to convince me of anything.
My mention of Everett branches was an indirect (and counter-productive) way of accusing you of hindsight bias.
Your talk of “convincing you” is distractingly binary. Do you admit that the severity and number of close calls in the Cold War are relevant to this discussion, and that these are positively correlated with the underlying justification for Wei Dai’s strategy? (Not necessarily its feasibility!)
I look around at the world since WWII and fail to see this horror. I look at Wei Dai’s strategy and see the horror.
Let’s set aside scale and comparisons for a moment, because your position looks suspiciously one-sided. You fail to see the horror of nuclear proliferation? If I may ask, what is your estimate for the probability that a nuclear weapon will be deployed in the next 100 years? Did you even ask yourself this question, or are you just selectively attending to the low-probability horrors of Wei Dai’s strategy?
Then also, in what you dismiss as “messy real-world noise”
Emphasis mine. You are compromised. Please take a deep breath (really!) and re-read my comment. I was not dismissing your point in the slightest, I was in fact stating my belief that it exemplified a class of particularly effective counter-arguments in this context.
You just have to think, I will do this, and then my opponent must rationally do that. You have a completely watertight argument. Then your opponent goes and does something else.
A model of reality that assumes an opponent must be rational is an incorrect model. At best, it is a good approximation that could luckily return a correct answer in some situations.
I think this is a frequent bias for smart people—assuming that (1) my reasoning is flawless, and (2) my opponent is on the same rationality level as me, therefore (3) my opponent must have the same model of the situation as me, therefore (4) if I rationally predict that it is best for my opponent to do X, my opponent will really do X. And then my opponent does non-X, and I am like: WTF?!
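One way to sketch this point numerically: compute H’s expected payoff from issuing the threat as a function of a hypothetical probability q that D actually behaves as the “rational” model predicts and complies. The payoff numbers reuse the invented figures from the earlier sketches; none of them estimate anything real.

```python
# Sketch: H's expected payoff from issuing the threat, as a function of a
# hypothetical probability q that D complies as the "rational" model predicts.
# Payoff numbers reuse the invented figures from the earlier sketches.
STATUS_QUO, BOMBING = 0, -5  # H's payoff if D complies / if H must carry out the threat

def expected_payoff_to_H(q_comply):
    # With probability q, D complies (status quo); otherwise H, having
    # precommitted, carries out the threat.
    return q_comply * STATUS_QUO + (1 - q_comply) * BOMBING

for q in (1.0, 0.9, 0.5):
    print(f"q = {q:.0%}: expected payoff to H = {expected_payoff_to_H(q):+.1f}")
```

At q = 100% the threat looks costless; any error in the model of the opponent shows up directly as an expected cost to H.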
Richard, I’m with Nesov on this one. Don’t attack the person making the argument.