Bayesians vs. Barbarians
Let’s say we have two groups of soldiers. In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy. In group 2, everyone at all levels knows all about tactics and strategy.
Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?
In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.
Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.
Now there’s a certain viewpoint on “rationality” or “rationalism” which would say something like this:
“Obviously, the rationalists will lose. The Barbarians believe in an afterlife where they’ll be rewarded for courage; so they’ll throw themselves into battle without hesitation or remorse. Thanks to their affective death spirals around their Cause and Great Leader Bob, their warriors will obey orders, and their citizens at home will produce enthusiastically and at full capacity for the war; anyone caught skimming or holding back will be burned at the stake in accordance with Barbarian tradition. They’ll believe in each other’s goodness and hate the enemy more strongly than any sane person would, binding themselves into a tight group. Meanwhile, the rationalists will realize that there’s no conceivable reward to be had from dying in battle; they’ll wish that others would fight, but not want to fight themselves. Even if they can find soldiers, their civilians won’t be as cooperative: So long as any one sausage almost certainly doesn’t lead to the collapse of the war effort, they’ll want to keep that sausage for themselves, and so not contribute as much as they could. No matter how refined, elegant, civilized, productive, and nonviolent their culture was to start with, they won’t be able to resist the Barbarian invasion; sane discussion is no match for a frothing lunatic armed with a gun. In the end, the Barbarians will win because they want to fight, they want to hurt the rationalists, they want to conquer and their whole society is united around conquest; they care about that more than any sane person would.”
War is not fun. Many, many people have found this out since the dawn of recorded history; many more found it out before the dawn of recorded history; and some community somewhere is finding it out right now, in some sad little country whose internal agonies don’t even make the front pages any more.
War is not fun. Losing a war is even less fun. And it has been said since ancient times: “If thou wouldst have peace, prepare for war.” Your opponents don’t have to believe that you’ll win, that you’ll conquer; but they do have to believe you’ll put up enough of a fight to make it not worth their while.
You perceive, then, that if it were genuinely the lot of “rationalists” to always lose in war, I could not in good conscience advocate the widespread public adoption of “rationality”.
This is probably the dirtiest topic I’ve discussed or plan to discuss on LW. War is not clean. Current high-tech militaries—by this I mean the US military—are unique in the overwhelmingly superior force they can bring to bear on opponents, which allows for a historically extraordinary degree of concern about enemy casualties and civilian casualties.
Winning in war has not always meant tossing aside all morality. Wars have been won without using torture. The unfunness of war does not imply, say, that questioning the President is unpatriotic. We’re used to “war” being exploited as an excuse for bad behavior, because in recent US history that pretty much is exactly what it’s been used for...
But reversed stupidity is not intelligence. And reversed evil is not intelligence either. It remains true that real wars cannot be won by refined politeness. If “rationalists” can’t prepare themselves for that mental shock, the Barbarians really will win; and the “rationalists”… I don’t want to say, “deserve to lose”. But they will have failed that test of their society’s existence.
Let me start by disposing of the idea that, in principle, ideal rational agents cannot fight a war, because each of them prefers being a civilian to being a soldier.
As has already been discussed at some length, I one-box on Newcomb’s Problem.
Consistently, I do not believe that if an election is settled by 100,000 votes to 99,998, all of the voters were irrational in expending effort to go to the polling place because “my staying home would not have affected the outcome”. (Nor do I believe that if the election came out 100,000 to 99,999, then 100,000 people were all, individually, solely responsible for the outcome.)
Consistently, I also hold that two rational AIs (that use my kind of decision theory), even if they had completely different utility functions and were designed by different creators, will cooperate on the true Prisoner’s Dilemma if they have common knowledge of each other’s source code. (Or even just common knowledge of each other’s rationality in the appropriate sense.)
Consistently, I believe that rational agents are capable of coordinating on group projects whenever the (expected probabilistic) outcome is better than it would be without such coordination. A society of agents that use my kind of decision theory, and have common knowledge of this fact, will end up at Pareto optima instead of Nash equilibria. If all rational agents agree that they are better off fighting than surrendering, they will fight the Barbarians rather than surrender.
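A minimal sketch of that kind of code-conditional cooperation, in Python (the payoff numbers and the literal source-comparison trick are illustrative assumptions, not anything from the post): two copies of the same program each inspect the other’s source and cooperate exactly on a match, landing on the Pareto-optimal outcome rather than the mutual-defection equilibrium.

```python
import inspect

# Standard one-shot Prisoner's Dilemma payoffs, written (mine, theirs);
# the particular numbers are made up but satisfy T > R > P > S.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: the Pareto-optimal outcome
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: the Nash equilibrium
}

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source code is literally my own."""
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

my_source = inspect.getsource(clique_bot)
moves = (clique_bot(my_source), clique_bot(my_source))  # each reads the other
print(PAYOFFS[moves])  # (3, 3): cooperation without altruism or iteration
```

Exact textual match is of course far cruder than “common knowledge of each other’s rationality”, but it shows the mechanism: the conditional makes each agent’s cooperation depend on the other’s decision procedure, so defection is no longer the dominant move.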
Imagine a community of self-modifying AIs who collectively prefer fighting to surrender, but individually prefer being a civilian to fighting. One solution is to run a lottery, unpredictable to any agent, to select warriors. Before the lottery is run, all the AIs change their code, in advance, so that if selected they will fight as a warrior in the most communally efficient possible way—even if it means calmly marching into their own death.
(A reflectively consistent decision theory works the same way, only without the self-modification.)
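Here is a toy numerical version of that lottery (population size and utilities are made-up assumptions, purely to illustrate the ex ante argument): before the draw, every agent computes the same expected utility, endorses the policy, and afterward simply executes it.

```python
import random

N_AGENTS = 100     # size of the community (illustrative)
N_WARRIORS = 10    # how many soldiers the war effort needs (illustrative)

U_CIVILIAN = 1.0   # utility of surviving the war as a civilian
U_WARRIOR = 0.2    # utility of fighting, possibly dying
U_CONQUERED = 0.0  # everyone's utility if no one fights and the Barbarians win

# Ex ante, before anyone knows who is selected, the lottery policy
# dominates the no-army policy for every single agent:
p_selected = N_WARRIORS / N_AGENTS
eu_lottery = p_selected * U_WARRIOR + (1 - p_selected) * U_CIVILIAN
assert eu_lottery > U_CONQUERED  # 0.92 > 0.0

# After the unpredictable draw, each agent executes the policy it
# precommitted to (or, if reflectively consistent, would have).
selected = set(random.sample(range(N_AGENTS), N_WARRIORS))
roles = ["fight" if i in selected else "produce" for i in range(N_AGENTS)]
```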
You reply: “But in the real, human world, agents are not perfectly rational, nor do they have common knowledge of each other’s source code. Cooperation in the Prisoner’s Dilemma requires certain conditions according to your decision theory (which these margins are too small to contain) and these conditions are not met in real life.”
I reply: The pure, true Prisoner’s Dilemma is incredibly rare in real life. In real life you usually have knock-on effects—what you do affects your reputation. In real life most people care to some degree about what happens to other people. And in real life you have an opportunity to set up incentive mechanisms.
And in real life, I do think that a community of human rationalists could manage to produce soldiers willing to die to defend the community. So long as children aren’t told in school that ideal rationalists are supposed to defect against each other in the Prisoner’s Dilemma. Let it be widely believed—and I do believe it, for exactly the same reason I one-box on Newcomb’s Problem—that if people decided as individuals not to be soldiers or if soldiers decided to run away, then that is the same as deciding for the Barbarians to win. By that same theory whereby, if an election is won by 100,000 votes to 99,998 votes, it does not make sense for every voter to say “my vote made no difference”. Let it be said (for it is true) that utility functions don’t need to be solipsistic, and that a rational agent can fight to the death if they care enough about what they’re protecting. Let them not be told that rationalists should expect to lose reasonably.
If this is the culture and the mores of the rationalist society, then, I think, ordinary human beings in that society would volunteer to be soldiers. That also seems to be built into human beings, after all. You only need to ensure that the cultural training does not get in the way.
And if I’m wrong, and that doesn’t get you enough volunteers?
Then, so long as people still prefer, on the whole, fighting to surrender, they have an opportunity to set up incentive mechanisms, and avert the True Prisoner’s Dilemma.
You can have lotteries for who gets selected as a warrior. Sort of like the example above with AIs changing their own code. Except that if “be reflectively consistent; do that which you would precommit to do” is not sufficient motivation for humans to obey the lottery, then...
...well, in advance of the lottery actually running, we can perhaps all agree that it is a good idea to give the selectees drugs that will induce extra courage, and shoot them if they run away. Even considering that we ourselves might be selected in the lottery. Because in advance of the lottery, this is the general policy that gives us the highest expectation of survival.
...like I said: Real wars = not fun, losing wars = less fun.
Let’s be clear, by the way, that I’m not endorsing the draft as practiced nowadays. Those drafts are not collective attempts by a populace to move from a Nash equilibrium to a Pareto optimum. Drafts are a tool of kings playing games in need of toy soldiers. The Vietnam draftees who fled to Canada, I hold to have been in the right. But a society that considers itself too smart for kings, does not have to be too smart to survive. Even if the Barbarian hordes are invading, and the Barbarians do practice the draft.
Will rational soldiers obey orders? What if the commanding officer makes a mistake?
Soldiers march. Everyone’s feet hitting the ground in the same rhythm. Even, perhaps, against their own inclinations, since people left to themselves would walk all at separate paces. Lasers made out of people. That’s marching.
If it’s possible to invent some method of group decisionmaking that is superior to the captain handing down orders, then a company of rational soldiers might implement that procedure. If there is no proven method better than a captain, then a company of rational soldiers will commit to obey the captain, even against their own separate inclinations. And if human beings aren’t that rational… then in advance of the lottery, the general policy that gives you the highest personal expectation of survival is to shoot soldiers who disobey orders. This is not to say that those who fragged their own officers in Vietnam were in the wrong; for they could have consistently held that they preferred no one to participate in the draft lottery.
But an uncoordinated mob gets slaughtered, and so the soldiers need some way of all doing the same thing at the same time in the pursuit of the same goal, even though, left to their own devices, they might march off in all directions. The orders may not come from a captain like a superior tribal chief, but unified orders have to come from somewhere. A society whose soldiers are too clever to obey orders, is a society which is too clever to survive. Just like a society whose people are too clever to be soldiers. That is why I say “clever”, which I often use as a term of opprobrium, rather than “rational”.
(Though I do think it’s an important question as to whether you can come up with a small-group coordination method that really genuinely in practice works better than having a leader. The more people can trust the group decision method—the more they can believe that it really is superior to people going their own way—the more coherently they can behave even in the absence of enforceable penalties for disobedience.)
I say all this, even though I certainly don’t expect rationalists to take over a country any time soon, because I think that what we believe about a society of “people like us” has some reflection on what we think of ourselves. If you believe that a society of people like you would be too reasonable to survive in the long run… that’s one sort of self-image. And it’s a different sort of self-image if you think that a society of people all like you could fight the vicious Evil Barbarians and win—not just by dint of superior technology, but because your people care about each other and about their collective society—and because they can face the realities of war without losing themselves—and because they would calculate the group-rational thing to do and make sure it got done—and because there’s nothing in the rules of probability theory or decision theory that says you can’t sacrifice yourself for a cause—and because if you really are smarter than the Enemy and not just flattering yourself about that, then you should be able to exploit the blind spots that the Enemy does not allow itself to think about—and because no matter how heavily the Enemy hypes itself up before battle, you think that just maybe a coherent mind, undivided within itself, and perhaps practicing something akin to meditation or self-hypnosis, can fight as hard in practice as someone who theoretically believes they’ve got seventy-two virgins waiting for them.
Then you’ll expect more of yourself and people like you operating in groups; and then you can see yourself as something more than a cultural dead end.
So look at it this way: Jeffreyssai probably wouldn’t give up against the Evil Barbarians if he were fighting alone. A whole army of beisutsukai masters ought to be a force that no one would mess with. That’s the motivating vision. The question is how, exactly, that works.
IAWYC, but I think it sidesteps an important issue.
A perfectly rational community will be able to resist the barbarians. But it’s possible, perhaps likely, that as you increase community rationality, there’s a valley somewhere between barbarian and Bayesian where fighting ability decreases until you climb out of it.
I think the most rational societies currently existing are still within that valley. And that a country with the values and rationality level of 21st century Harvard will with high probability be defeated by a country with the values and rationality level of 13th century Mongolia (holding everything else equal).
I don’t know who you’re arguing against, but I bet they are more interested in this problem than in an ideal case with a country of perfect Bayesians.
I agree such a valley is plausible (though far from obvious: more rational societies have better science and better economies; democracies can give guns to working class soldiers whereas aristocracies had to fear arming their peasants; etc.). To speculate about the underlying phenomenon, it seems plausible that across a range of goals (e.g., increasing one’s income; defending one’s society against barbarian hordes):
Slightly above-average amounts of rationality fairly often make things worse, since increased rationality, like any change in one’s mode of decision-making, can move people out of local optima.
Significantly larger amounts of rationality predictably make things better, since, after a while, the person or society actually has enough skill to notice the expected benefits of “doing things the way most people do them” (which are often considerable; cultural action-patterns don’t come from nowhere), to fairly evaluate the expected benefits of potential changes, and to solve the intrapersonal or societal coordination problems necessary to actually implement the action from which the best results are expected.
Though I agree with Yvain’s points elsewhere that we need detailed, concrete, empirical arguments regarding the potential benefits claimed from these larger amounts of rationality.
More-developed societies develop technology; less-developed societies use it without paying the huge costs of development.
It’s not evident which strategy is a win. Historically, it often appears that those who develop tech win. But not always. Japan has for decades been cashing in on American developments in cars, automation, steelmaking, ICs, and other areas.
If American corporations were required to foot the bill for the education needed for technological development, instead of having it paid for by taxpayers and by students, they might choose not to.
If you patent something, you can charge what you like for the license. Were you suggesting that some countries ignore patent law, or that externalities (such as failed R&D projects and education costs) don’t get recompensed? Or something else?
That’s probably unfair. Japan files a lot of patents—more than the US by some measures.
The subject was discussed at Overcoming Bias recently.
I’m no economist, but don’t they already pay for it to a certain extent, in the form of the higher wages educated workers demand?
I think that’s more a function of the rarity of the educated individuals of the needed sort, than of the cost of their education.
IANEY, but when I engage in such lines of reasoning I tend to be arguing against people (including myself playing devil’s advocate) who assert that even ideal Bayesians would lose. Typically they include references to game theory, either the Prisoner’s Dilemma or the tragedy of the commons.
I actually haven’t yet had the chance to argue with someone who I would bet is more interested in the ‘Harvard-equivalent rationalist’ society. Not because that isn’t the most practically important situation to consider, but rather because their motivation for argument is to justify their own way of thinking. There is also plenty of social status to be had in applauding the anti-Spock.
“Jeffreyssai probably wouldn’t give up against the Evil Barbarians if he were fighting alone.”
WWJD, indeed.
But since Jeffreyssai is a fictional creation of Eliezer Yudkowsky, appealing to what we imagine he would do is nothing more than an appeal to Eliezer Yudkowsky’s ideas, in the same way that trying to confirm a newspaper’s claim by picking up another copy of the same edition is just appealing to the newspaper again.
How can we test the newspaper instead of appealing to it?
The reference to Jeffreyssai seems to be an explanatory tool, not an appeal.
Build an approximately rational AI.
Fortunately, this is a case where the least convenient possible world is quite unlike the real world, because modern wars are fought less with infantry and more with money and technology. As technology advances, military robots get cheaper, and larger portions of the military move to greater distances from the battlefield. If current trends continue, wars will be fought entirely between machines, until one side runs out of robots and is forced to surrender (or else fight man-vs-machine, which, in spite of what happens in movies, is probably fruitless suicide).
The problem with this theory is that people in a poor country are a lot cheaper than cutting edge military robots. In a serious war, the U.S. would quickly run out of “smart bombs” and such. Military equipment is a pure consumption item, it produces nothing at all, so there is only going to be limited investment in it in peacetime. And modern high-tech military equipment requires long lead times for building up (unlike the situation in WWII).
Robots get cheaper and stronger over time, while people are a fixed parameter.
Exactly. I guess the rationalist writing this post didn’t do his research on the US military. The landscape is changing. Now you can go to a cube in California and bomb targets all day long, and then go have dinner with your wife and kids at 5pm. Not that it is any less traumatic to the psyche, but a lot less traumatic than losing a war, right? Enjoy: http://emergentfool.com/2009/04/03/military-industrial-complex-redux/
Uh huh. Say, did it ever occur to you that the US military itself isn’t always commanded by sane Presidents?
A couple comments. I think I overall agree, though I admit this is one of those “it gives me the willies, dun want to think about it too much” things for me. (Which, of course, means it’s the sort of thing I especially should think about to see what ways I’m being stupid that I’m not letting myself see...)
Anyways, first, as far as methods of group decisionmaking better than a chain-of-command type thing… I would expect that “better”, in this context, would actually have to meet a stricter requirement than merely “produces a more correct answer”: it would have to “produce a more correct answer QUICKLY”, since the group decision methods we currently know of tend to, well, take more time, right?
Also, as far as precommitting to follow the captain (or other appropriate officer), that should be “up to limits where actually disobeying, even when taking into account Newcomb-type arguments, is actually the Right Thing”. (For example, commitment to obey until “egregious violations of morality” or something.)
Semiformalizing this in this context seems tricky, though. Maybe something like this: for morality related disobediences, the rule is obey until “actually losing this battle/war/skirmish/whatever-the-correct-granularity-is would actually be preferable to obeying.”?
I’m just stumped as far as what a sane rule for “captain is ordering us to do something that’s really really self destructively stupid in a way that absolutely won’t achieve anything useful” is. Maybe the “still obey under those circumstances” rule is the Right Way, if the probability of actually being (not just seeming to be) in such a situation is low enough that it’s far better to precommit to obey unconditionally (up to extreme morality situations, as mentioned above)
You’re right, in principle, about both things. There’s a limit to our willingness to follow orders, based on the raw immorality of the orders. That’s what Nuremberg, My Lai, and Abu Ghraib were about. But we also want to constrain our right to claim that we’re disobeying for morality, so we don’t do it in the heat of action unless we’re right. Tough call for the individual to make, and tough to set up proper incentives for.
But that’s the goal. Follow orders unless…, but don’t abuse your right to invoke the exception.
To pick a two-year-old nit:
That was what Nuremberg and My Lai were about, but that is not what Abu Ghraib was about. At Abu Ghraib, most of the events and acts that were made public, and most of what people are upset about, were done by people who were violating orders—with some exceptions, and from what I can tell most of the exceptions were from non-military organizations.
I’m not going to waste a lot more time going into detail, but the people who went to jail went there for violating orders, and the people who got “retired” got it because they were shitty leaders and didn’t make sure their troops were well behaved.
In an “appeal to authority”: I’ve been briefed several times over the last 20 years on the rules of land warfare, I’ve spent time in that area (in fact, when the original article was posted I was about 30 miles from Abu Ghraib), and a very good friend of mine was called in to help investigate/document what happened there. When his NDA expires I intend to get him drunk and get the real skinny.
This doesn’t change the thrust of your argument—which not only do I agree with, but is part and parcel of military training these days. It is hammered into each soldier, sailor, marine, and airman that you do NOT have to follow illegal orders. Read “Lone Survivor”, a book by Marcus Luttrell about his SEAL team going up against unwinnable odds in the mountains of Afghanistan—because they, as a team, decided not to commit a war crime. Yeah, they voted on it, and it was close. But one of those things was not like the others, and I felt I had to say something.
I’m not completely convinced that all the people who were punished believed they were not doing what their superiors wanted. I understand that that’s the way the adjudication came out, but that’s what I would expect from a system that knows how to protect itself. But I’ll admit I haven’t paid close attention to any of the proceedings.
Is there any good, short, material laying out the evidence that none of the perpetrators heard anything to reinforce the mayhem from their superiors—non-coms etc. included? Your sentence “the people who went to jail went there for violating orders” leaves open the possibility that some of the illegal activity was done by people who thought they were following orders, or at least doing what their superiors wanted.
If you are right, then I’ll agree that Abu Ghraib was orthogonal to the main point. But I’m not completely convinced, and it seems likely to me that it looks exactly like a relevant case to the Arab street. Whether or not there were explicit orders from the top of the institution, it looked to have been pervasive enough to have to count as policy at some level.
Torture and Democracy argues that torture is a craft apprenticeship technique, and develops when superiors say “I want answers and I don’t care how you get them”.
This makes the question of what’s been ordered a little fuzzy.
(This is a reply to both Mr. Hibbert and Ms. Lebovitz)
I’ve got a couple problems here—one is that there wasn’t a single incident at Abu Ghraib; there were a couple of periods of time in which certain classes of things happened. Another is that some military personnel (this is from memory, since it’s just not worth my time right now to google it) from a reservist MP unit, many of whom were prison guards “in real life”, abused some prisoners during one or two shifts after a particularly brutal period (in terms of casualties to American forces from VBIEDs/suicide bombers). These particular abuses (getting detainees naked, piled up, etc.) were not done as part of information gathering, and IIRC many of those prisoners weren’t even considered intelligence sources. Abu Ghraib at the time held both Iraqi criminal and insurgent/terrorist suspects.
I haven’t paid much attention to the debate since, and have not wasted the cycles on reading any other sources. As I indicated, I’ve been in the military and rejoined the armed forces around the time that story broke (or maybe later, I’m having trouble nailing down exactly when the story broke).
One thing that did come out was that during the period of time the military abuses took place (as in the shifts that they happened on) there WERE NO OFFICERS PRESENT. That is basically what got the Brigadier General in charge “retired”. (She later whined about how she was mistreated by the system. I’ve got no sympathy. Her people were poorly trained and CLEARLY poorly led from the top down.)
There were other photographs that surfaced of “fake torture”—a detainee dressed in something that looked like a poncho, with jumper cables on his arms—he believed the jumper cables were attached to a power source and would light him up like a Christmas tree if he stepped down (again IIRC). This was the action of a non-military questioner, and someone who thought he was following the law—after all, he wasn’t doing anything but scaring the guy; there was (absent a weak heart) no risk of injury. It was a really awful-looking photo, though.
Ms. Lebovitz:
I’ve known people (not current military, Vietnam era) who engaged in a variety of rather brutal interrogation techniques. The one I have in mind was raised in a primitive part of the US where violence and poverty were more common than education, and spent a long time fighting an enemy that would do things like chop off the arms of people who had vaccination scars.
His superiors didn’t have to tell him anything. (Note I have never said that “we” haven’t engaged in these sorts of behaviors, only that it didn’t happen on our watch at Abu Ghraib. Some of the stuff that happened before we took over, when it was Saddam’s prison? It’s hard for me to watch, and I have a fairly tough stomach for that sort of thing.)
And this notion that “a person being tortured is likely to say whatever he thinks his captors want to hear, making it one of the poorest methods of gathering reliable information” is pure bullshit.
Yes, if I grab random people off the street and waterboard them, I will get no useful information. If 5 people break into my house and kidnap my daughter, but only 4 get out, he WILL give me the information I want. He will say anything to stop the pain, and that anything happens to be what I want to hear.
This is again orthogonal to what I was discussing with Mr. Hibbert—I was not claiming that torture doesn’t happen (it does), but that most of what the public knows about what happened at Abu Ghraib wasn’t torture or abuse ordered from above, and in some cases it was not even what the perpetrator thought of as abuse.
Well, that’s more “what laws should there be/what sort of enforcement ought there to be?” I was more asking with regards to “what underlying rule is the Rational Way”? :)
I.e., some level of being willing to do what you’re told, even if it’s not optimal, is a consequence of the need to coordinate groups and stuff; i.e., the various Newcomb arguments and so on. I’m just trying to figure out where that breaks down, how stupid the orders have to seem before that implodes, if ever.
The morality one was easier, since the “obey even though you think you know better” thing is based on one-boxing and having the goal of winning that battle/whatever. If losing would actually be preferable to obeying, then the single-iteration PD/Newcomb-problem type stuff doesn’t seem to come up as strongly.
Any idea what an explicit rule a rationalist should follow with regards to this might look like? (Not necessarily “how do we enforce it”, though. Separate question.)
Even an upper-limit criterion would be okay. I.e., something of the form “I don’t know the exact dividing line, but I think I can argue that at least if it gets to this point, then disobeying is rational.”
(which is what I did for the morality one, with the “better to lose than obey” criteria.)
No, I don’t have a boiled down answer. When I try to think about it, rational/right includes not just the outcome of the current engagement, but the incentives and lessons left behind for the next incident.
Okay, here’s one example I’ve used before: torture. It’s somewhat orthogonal to the question of following orders, but it bears on the issue of setting up incentives for how often breaking the rules is acceptable. I think the law and the practice should be that torture is illegal and punished strictly. If some person is convinced that imminent harm will result if information isn’t extracted from a suspect, and that it’s worth going to jail for a long time in order to prevent the harm, then they are able to (which is not the same as authorized) torture. But it’s always at the cost of personal sacrifice. So, if you think a million people will die from a nuke, and you’re convinced you can actually get information out of someone by immoral and prohibited means (which I think is usually the weakest link in the chain) and you’re willing to give up your life or your liberty in order to prevent it, then go for it.
But don’t ever expect a hero’s welcome for your sacrifice. It’s a bad choice that’s (conceivably) sometimes necessary. The idea that any moral society would authorize the use of torture in routine situations makes me sick.
I think people exist who will make the personal sacrifice of going to jail for a long time to prevent the nuke from going off. But I do not think people exist who will also sacrifice a friend. But under American law that is what a person would have to do to consult with a friend on the decision of whether to torture: American law punishes people who have foreknowledge of certain crimes but do not convey their foreknowledge to the authorities. So the person is faced with making what may well be the most important decision of their lives without help from any friend or conspiring somehow to keep the authorities from learning about the friend’s foreknowledge of the crime. Although I believe that lying is sometimes justified, this particular lie must be planned out simultaneously with the deliberations over the important decision—potentially undermining those deliberations if the person is unused to high-stakes lies—and the person probably is unused to high-stakes lies if he is the kind of person seriously considering such a large personal sacrifice.
Any suggestions for the person?
Discuss a hypothetical situation with your friend that happens to match up in all particulars with the real-world situation, which you do not discuss.
It isn’t actually important here that your friend be fooled, the goal is to give your friend plausible deniability to protect her from litigation.
Yes. I am sympathetic to that view of “how to deal with stuff like torture/etc”, but that doesn’t answer “when to do it”.
ie, I wasn’t saying “when should it be ‘officially permitted’?” but rather at what point should a rationalist do so? how convinced does a rationalist need to be, if ever?
Or did I completely misunderstand what you were saying?
No, you understood me. I sidestepped the heart of the question.
This is an example where I believe I know what the right incentives structure of the answer is. But I can’t give any guidance on the root question, since in my example case, (torture) I don’t believe in the efficacy of the immoral act. I don’t think you can procure useful information by torturing someone when time is short. And when time isn’t short, there are better choices.
I guess the big question here is why you do not believe it. Since you (and I!) would prefer to live in a world where torture is not effective, we must be aware that our bias is to believe it is not effective—it makes the world nicer. Hence, we must consciously shift up our belief in the effectiveness of torture from our “gut feeling.” Given that, what evidence have you seen that torture, for the purpose of solving NP-like problems (meaning a problem where a solution is hard to find but easy to verify, like “where is the bomb hidden?”), is not effective? I would say that, for me personally, the amount that my preferences shift in the presence of relatively mild pain (“I prefer not to medicate myself” vs. “Gimme that goddamn pill”) is at least cause to suspect that someone who is an expert at causing vast amounts of pain would be able to make me do things I would normally prefer not to do (like tell them where I hid the bomb) to stop that pain.
Of course, torture used for unverifiable information is completely useless for exactly the same reason—the prisoner will say anything they can get away with to make the pain stop.
Maybe my previous answer would have been cleaner if I had said “I don’t think I can procure useful information by torturing someone when time is short.” It’s a relatively easy choice for me, since I doubt that, even with proper tools, I could gauge the level of pain with the necessary calibration to get detailed information in a few minutes or hours.
When I think about other people who might have more experience, it’s hard to imagine someone who had repeatedly fallen into the situation where they were the right person to perform the torture so they had enough experience to both make the call, and effectively extract information. Do you want to argue that they could have gotten to that point without violating our sense of morality?
Since my question is “What should the law be?”, not “is it ever conceivable that torture could be effective?” I still have to say that the law should forbid torture, and people should expect to be punished if they torture. There may be cases where you or I would agree that in that circumstance it was the necessary thing to do, but I still believe that the system should never condone it.
You talked about two issues that have little to do with each other:
What should the law be? (I didn’t argue with your point here, so re-iterating it is useless?)
A statement that was misleading: apparently you meant that you’re not a good torturer. That is not impossible. I think that, given a short amount of time, with someone who knows something specific (where the bomb is hidden), my best chance (in effective, not moral, ordering) is to torture them. I’m not a professional torturer, and I luckily never had to torture anyone, but like any human, I have an understanding of pain. I’ve watched movies about torture, and I’ve heard about waterboarding. If I decided that this was the ethical thing to do (which we both agree, in some cases, is possible), and I was the only one around, I’d probably try waterboarding. It’s risky, there’s a chance the prisoner might die, but if I have one hour, and 50 million people will die otherwise, I don’t see any better way. So let me ask you flat out—I’m assuming you also read about waterboarding, and that when you need to, you have access to the WP article about waterboarding. What would you do in that situation? Ask nicely?
All that does not go to condone torture. I’m just saying, if a nation of Rationalists is fighting with the Barbarians, then it’s not necessarily in their best interests to decide they will never torture no matter what.
My point wasn’t just that I wouldn’t make a good torturer. It seems to me that ordinary circumstances don’t provide many opportunities for anyone to learn much about torture, (other than from fictional sources). I have little reason to believe that inexperienced torturers would be effective in the time-critical circumstances that seem necessary for any convincing justification of torture. You may believe it, but it’s not convincing to me. So it would be hard to ethically produce trained torturers, and there’s a dearth of evidence on the effectiveness of inexperienced torturers in the circumstances necessary to justify it.
Given that, I think it’s better to take the stance that torture is always unethical. There are conceivable circumstances when it would be the only way to prevent a cataclysm, but they’re neither common, nor easy to prepare for.
And I don’t think I’ve said that it would be ethical, just that individuals would sometimes think it was necessary. I think we are all better off if they have to make that choice without any expectation that we will condone their actions. Otherwise, some will argue that it’s useful to have a course of training in how to perform torture, which would encourage its use even though we don’t have evidence of its usefulness. It seems difficult to produce evidence one way or another on the efficacy of torture without violating the spirit of the Nuremberg Code. I don’t see an ethical way to add to the evidence.
You seem to believe that sufficient evidence exists. Can you point to any?
You wanted an explicit answer to your question. My response is that I would be unhappy that I didn’t have effective tools for finding out the truth. But my unhappiness doesn’t change the facts of the situation. There isn’t always something useful that you can do. When I generalize over all the fictional evidence I’ve been exposed to, it’s too likely that my evidence is wrong as to the identity of the suspect, or he doesn’t have the info I want, or the bomb can’t be disabled anyway. When I try to think of actual circumstances, I don’t come up with examples in which time was short and the information produced was useful. I also can’t imagine myself personally punching, pistol-whipping, pulling fingernails, waterboarding, etc, nor ordering the experienced torturer (who you want me to imagine is under my command) to do so.
Sorry to disappoint you, but I don’t believe the arguments I’ve heard for effectiveness or morality of torture.
Yeah, the “do it, but keep it illegal and be punished for it even if it was needed” is a possible solution given “in principle it may be useful”, which is a whole other question.
But anyways, I was talking about “when should a rationalist soldier be willing to disobey in the name of ‘I think my CO is giving really stupid orders’?”, since I believe I already have a partial solution to the “I think my CO is giving really immoral orders” case (as described above)
As far as when torture would even be plausibly useful (especially plausibly optimal) for obtaining info? I can’t really currently think of any non-contrived situations.
How about this upper limit: when the outcome of (everyone) following orders would be worse than everyone doing their own thing, disobey.
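A toy formalization of that upper limit (the utilities are made-up, and this is a gloss on the comment, not a worked-out decision theory): the comparison is between group policies, not between one soldier’s obedience and defection.

```python
def should_disobey(eu_all_obey: float, eu_all_do_own_thing: float) -> bool:
    # Disobey only when *everyone* following orders is expected to turn
    # out worse than *everyone* doing their own thing; comparing "I defect
    # while the rest obey" would smuggle the coordination problem back in.
    return eu_all_obey < eu_all_do_own_thing

# Example: the orders are bad, but an uncoordinated mob gets slaughtered.
print(should_disobey(eu_all_obey=0.4, eu_all_do_own_thing=0.1))  # False: obey
```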
I’ve set my line of retreat at a much higher extreme. I expect humans trained in rationality, when faced with a situation where they must abandon their rationality in order to win, to abandon their rationality. If the most effective way to produce a winning army is to irreversibly alter the brains of soldiers to become barbarians, the pre-lottery agreement, for me, would include that process (say brain washing, drugging and computer implants), as well as appropriate ways to pacify the army once the war has been completed.
I expect a rational society, when faced with the inevitability of war, would pick the most efficient way to pound the enemy into dust, and go as far as this, if required.
Caveats: I don’t actually expect anything this extreme would be required for winning most wars. I have a nagging doubt, that it may not be possible to form a society of humans which is at the same time both rational, and willing to go to such an extreme.
So basically the Culture-Idiran War version of “when you need soldiers, make people born to be warriors”.
I’m wondering whether the rationalists can effectively use mercenaries. Why doesn’t the US have more mercenaries than US soldiers? In the typically poverty-stricken areas where US forces operate, we could hire and equip 100-1000 locals for the price of a single US soldier (which, when you figure in health-care costs, is so much that we basically can’t afford to fight wars using American soldiers anymore). We might also have less war opposition back at home if Americans weren’t dying.
We do use mercenaries: http://www.newsweek.com/2010/08/10/mercenaries-in-iraq-to-take-over-soldiers-jobs.html
But there might be cheaper options. If we paid Afghan girls $10/day to go to school, would the Taliban collapse?
We could be a little more subtle. Start by offering jobs to do something the Taliban wouldn’t consider threatening—Mechanical Turk work-from-home stuff not requiring literacy, via some kind of specialized radio or satellite link with no access to porn or feminism or anything the Taliban would object to. Every family wants one of those terminals and they can make twice as much money if the girls work (from home) too. Gradually offer higher pay for higher skill levels, starting with nonthreatening stuff like arithmetic but escalating to translating the Koran and then to tasks that would involve reading a wide variety of secular material, analyzing political and judicial systems of different countries (still maybe disguised as a translating job)…
There’s no shortage of Afghan girls who already want to go to school or of parents who want to send them. The problem is that there are people who mutilate girls who attend these schools. In the short run, at least, sticks are often more effective at getting the acquiescence of the population than carrots; when collaborators keep getting killed, it’s hard to get willing collaborators no matter how much money you offer.
See also.
I see how the first part of my post could be read as “we need to motivate girls to go to school”, which wasn’t my intent. More a matter of motivating tradition-bound parents to see educated girls as a major source of income. But I understand that going to school can be risky in Taliban-dominated areas, which is why the second part of my post was all home-based and therefore hard for the Taliban to detect. Even so, I agree that any obvious link to the US government could be a problem.
Voted up because dealing with uncooperative people is a necessary part of the art and war is the extreme of “uncooperative”.
Good post.
Also, historically, evil barbarians regularly fall prey to some irrational doctrine or personal paranoia that wastes their resources (sacrifice to the gods, kill all your Jews, kill everybody in the Ukraine, have a cultural revolution).
We in the US probably have a peculiar attitude on the rationality of war because we’ve never, with the possible exception of the War of 1812, fought in a war that was very rational (in terms of the benefits for us). The Revolutionary war? The war with Mexico? The Civil War? The Spanish-American War? WWI? WWII? Korea? Vietnam? Iraq? None of them make sense in terms of self-interest.
(Disclaimer: I’m a little drunk at the moment.)
We stole an awful lot of land by fighting with the American Indians.
I’m not going to dispute the others, but I kind of had the impression that we did pretty well out of the Mexican and Spanish-American wars; I mean, Texas’s oil alone would seem to’ve paid for the (minimal) costs of those two, right?
In terms of national self-interest, yes. But they weren’t causes that I’d personally risk death for.
I’m being inconsistent; I’m using the “national interest” standard for WW2, and the “personal interests” standard for these wars.
Well presumably most people don’t actually risk their lives for the cause. They risk their lives for the prestige, power, money, or whatever. Fighting in a war is a good (but risky) way to gain respect and influence. Also there are social costs to avoiding the fight.
Consider (think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, “That’s so wrong!”, I really weaken my ability to “pan for the gold”.)
Consider that you are using “we” and “self” as a pointer that jumps from one set to another moment by moment. Here is a list of some sets that may be confounded together here; see how many others you can think of.

These United States (see the Constitution)
the people residing in that set
citizens who vote
citizens with a peculiar attitude
the President
Congress
organizations (corporations, NGOs, political parties, movements, e-communities, etc.)
the wealthy and powerful
the particular wealthy and powerful who see an opportunity to benefit from an invasion
Multiple Edits: trying to get this site to respect line/paragraph breaks, formatting. Does this thing have any formatting codes?
There’s a “Help” link below / next to the comment box, and it respects much of the Markdown standard. To put a single line break at the end of a line, just end the line with two spaces. Paragraph breaks are created by a blank line in between lines of text.
Drunk rationalizing is a serious crime.
So I should try to be irrational when I’m drunk?
Well sure. Otherwise you’re just wasting the alcohol!
On an individual level or at the national level? At the national (central-government) level, the Revolution, Mexican-American, Civil War, Span-Am, WWI, and WWII were all very rational. 1812 may have had potential benefits, but it still wasn’t rational considering how weak and ineffectual our armed forces were. Nothing significant was gained in 1812 by either side. Korea, Vietnam, and Iraq all had selfish justifications (beat the communists, enhance American prestige, demonstrate American force projection, etc.); however, they turned out to be mistakes (except maybe Korea).
This struck me as relevant:
“If we desire to defeat the enemy, we must proportion our efforts to his powers of resistance. This is expressed by the power of two factors which cannot be separated, namely, the sum of available means and the strength of the Will.”
Carl Von Clausewitz, On War, Chapter 1, Section 5. Utmost Exertion of Powers
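Read as a formula (an interpretive gloss on this translation, not Clausewitz’s own notation), the two inseparable factors multiply rather than add:

$$\text{power of resistance} = \text{available means} \times \text{strength of will}$$

so driving either factor toward zero, whether the enemy’s materiel or the enemy’s will, collapses the whole product.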
(I’m still planning on putting together a post on game theory, war, and morality, but I think most of you will be inclined to disagree with my conclusions, so I’m really doing my homework for this one.)
This is true. In order to win a war, you must convince your enemy that he has lost. Otherwise, he will simply rise to fight again, at a time of his own choosing.
Israel has won many battles, but I don’t think it’s won any wars—its enemies are still trying to fight it.
The idea of non-violent civil defence is based entirely on this idea. The first step is to ensure everyone knows that just because the enemy has lots of armed men marching through the streets doesn’t mean you’ve lost. The second step is to be as uncooperative, incompetent, disruptive, and annoying as possible, to destroy the enemy’s will and encourage them to give up and go home.
This will only work against enemies who are unwilling to make atrocities part of their official pacification doctrine. It took killing ~30% of (male?) Afghans to convert them to Islam.
On a slightly less odious level, collective punishment and population dispersal/resettlement work pretty well.
Yes, like all strategies it depends on the economic, geopolitical, and technological situation you find yourself in. If the enemy is willing to depopulate the land so that they can colonise it, then of course you’re not going to be able to win through non-cooperation. But if they need you as workers, then there comes a point where your willingness to sustain losses is so great that, in order to blackmail you into submission, they have to expend so many resources and destroy so much of their potential labour force that it’s not worth doing. That is, unless their goal is directly achieved by committing atrocities, they are only able to win by doing so if their willingness to commit atrocities exceeds your willingness to endure them.

Also, there’s the effect on morale of committing atrocities. Iraqi soldiers described how disturbing Iranian human-wave attacks were, and they were killing (para)military forces who were trying to kill them and invade their homeland. The psychological impact of killing civilians would presumably be much greater. Even if the leaders were willing to do so, the soldiers could lose their will to attack unarmed targets and have to be rotated out, which is expensive and could destroy the invader’s national will to fight. While the Prague invasion was ultimately able to suppress the Czechs (until the late ’80s), the Russians did have a lot of morale problems and needed to rotate their troops out very often.

Population dispersal and resettlement need to be worked out on a case-by-case basis. It may be possible and worthwhile to resist, depending on how able the enemy army is to physically pick up and drag the citizenry to the trains or whatever (or how well your side has prepared its supplies for being starved out). Population dispersal relies on the enemy being able to coerce you to move from one place to another, and can be considered in the same way as anything else the enemy wants to coerce you to do.
I’m not a pacifist, and I’m trying to avoid believing in it to seem wise (“violence doesn’t solve anything”) or be contrary (“Everyone thinks armed defence is necessary, so if I disagree it proves I’m smarter”), but as a non-expert I think it’s a plausible strategy. While it wouldn’t beat the Barbarians (just as standing in front of a trolley won’t stop it, no matter how fat you are), it could beat many real world enemies.
I wonder how well this would have worked on the Mongols? They were certainly willing to slaughter all the inhabitants of a city that resisted—but if you shut up and paid your taxes they usually wouldn’t kill you. I don’t know what they would do with people who were willing to give up their property but not willing to perform labor for them. The Mongols frequently conscripted artisans, engineers, and other skilled workers from conquered peoples into performing supporting roles in their armies—saying “no” was probably a good way to get a sword run through you.
Well, maybe not everyone will innately want to disagree with me… but I still think this will undermine some preconceptions. Wish me luck (I’ll do my damnedest).
Cheers.
Sounds like it should be a fun discussion then—I’ll look forward to it =)
Have you written this since the post was made?
Yes; thank you. At http://www.staresattheworld.com/
Nothing particularly relevant to LW, mind you—and not quite as rigorous as this site would demand—more addressing social/political issues there, with a Reactionary bent. Also YouTubing at: http://www.youtube.com/user/Aurini
I really ought to finish that series on sales/manipulation though.
I’ll check out your stuff.
Edit: A bit off topic, but I found your argument that Democracy being interesting is a red flag very interesting itself.
For seeing someone’s source code to act as a commitment mechanism, you have to be reasonably sure that what they show you really is their source code—and also that their source code is not going to be modified by another agent between when they show it to you, and when they get a chance to defect.
While it’s possible to imagine these conditions being met, it seems non-trivial to imagine a society where they are met very frequently.
If agents face one-shot prisoner’s dilemmas with each other very often, there are other ways to get them to cooperate—assuming that they have a communications channel. They could use public-key crypto to signal to each other that they are brothers—in a way that only a real brother would know how to do.
Signalling brotherhood is how our cells cooperate with each other. Cells can’t use cryptography—so their signals can more easily be faked—but future agents will be in a better position there.
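A minimal sketch of such a challenge-response signal, using Ed25519 signatures from the Python cryptography package (the library choice, the names, and the challenge-response framing here are my own assumptions, not anything specified above):

```python
# Sketch: proving "brotherhood" by signing a fresh challenge.
# Assumes verifiers already hold an authentic copy of the group's public
# key; distributing that key securely is the hard part in practice.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

brotherhood_key = Ed25519PrivateKey.generate()   # shared only by real brothers
public_key = brotherhood_key.public_key()        # known to everyone

def prove(challenge: bytes) -> bytes:
    """Only an agent holding the private key can produce this signature."""
    return brotherhood_key.sign(challenge)

def verify(challenge: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)   # fresh nonce, so old signatures can't be replayed
assert verify(challenge, prove(challenge))
```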
What about just paying them to fight? You can have an auction of sorts to set the price, but in the end they’d select themselves. You could still use the courage-enhancing drugs and shoot those who try to breach the contract.
One might respond “no amount of (positive) money could convince me to fight a war”, but what about at some negative amount? After all, everyone else has to pay for the soldiers.
That “auction of sorts” would be the normal market mechanism, right? There are death rates that vary between professions now, with risks priced into the prevailing market wage for those professions. I don’t see why soldiery should be different.
Well, yeah, nothing special. It’s just that the government doesn’t usually try to use smart mechanisms in deciding what to pay people (soldiers), so unless we’re talking about a private army, you have to specify that you pay them right.
That’s why a contractor in Iraq today makes about $200,000/yr for a job that would pay $70,000/yr in the US. (A soldier makes, I think, a median of something like $40,000/yr.)
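For what it’s worth, you can back the implied price of risk out of numbers like those. A toy calculation, where the fatality rates are placeholders I made up purely for illustration (only the wage figures come from the comment above):

```python
# Hypothetical compensating-wage-differential arithmetic.
wage_iraq, wage_us = 200_000, 70_000    # $/yr, from the comment above
risk_iraq, risk_us = 0.005, 0.0002      # assumed annual death probabilities

premium = wage_iraq - wage_us           # $130,000/yr of extra pay
extra_risk = risk_iraq - risk_us        # 0.48 percentage points of extra risk
print(f"${premium / extra_risk:,.0f}")  # ~$27 million implied value of a life
```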
The problem with this idea is that if I have a very strong expectation that the barbarians are going to kill me, then no amount of money would convince me to fight. Even if you enforce payment from all the non-fighters, I still wouldn’t fight. Better to incur a trillion dollars of debt than to die, right? Especially if everyone else around me also incurs a trillion dollars of debt, such that after the war we all agree that this debt is silly and nullify it.
As a soldier you’re not facing certain death at any of the relevant decision points (a statistically irrelevant number of exceptions exist to this rule). You’re facing some probability of death. When you get into your car or onto your bike you’re facing some probability of death. Why do you do that? Commanders don’t (irrelevant exceptions exist) send troops to certain death, because, rationalist or not, they don’t go. War is not like StarCraft.
Eliezer’s point is that, given a certain decision theory (or, failing that, a certain set of incentives to precommitment), rational soldiers could in fact carry out even suicide missions if the tactical incentives were strong enough for them to precommit to a certain chance of drawing such a mission.
This has actually come up: in World War II (citation in Pinker’s “How the Mind Works”), bomber pilots making runs on Japan had a 1 in 4 chance of survival. Someone realized that the missions could be carried out with half the planes if those planes carried bombs in place of their fuel for the return trip; the pilots could draw straws, and half would survive while the other half went on a suicide mission. Despite the fact that precommitting to this policy would have doubled their chances of survival, the actual pilots were unable to adopt this policy (among other things, because they were suspicious that those so chosen would renege rather than carry out the mission).
I think Eliezer believes that a team of soldiers trained by Jeffreyssai would be able to precommit in this fashion and carry the mission through if selected. I think that, even if humans can’t meet such a high standard by training and will alone, there could exist some form of preparation or institution that could make it a workable strategy.
I’ll need to see that citation, actually; it couldn’t possibly have been a 75% fatality rate per mission. (When my father says a number is bogus, he’s usually right.) Even Doolittle’s raid, in which the planes did not have enough fuel to return from Japan but instead had to land in Japan-occupied China, had a better survival rate than one in four: of the 80 airmen involved, 4 were killed and 8 were captured. (Of the eight who were captured, four died before the war ended.)
Correction: it’s for a pilot’s entire quota of missions, not just one:
Yeah, if it’s for an entire quota of missions, the math doesn’t work out—each pilot normally would fly several missions, making the death rate per flight less than 50%, so it wouldn’t be a good deal.
How did Japan convince pilots to be kamikazes?
Chiefly by a code of death-before-dishonor (and death-after-dishonor) which makes sense for a warring country to precommit to. Though it doesn’t seem there was much conscious reasoning that went into the code’s establishment, just an evolutionary optimization on codes of honor among rival daimyo, which resulted in the entire country having the values of the victorious shoguns instilled.
I’m no history expert, but I remember hearing something about cutting off a finger and promising to kill anyone that shows up missing that finger.
For example, I suspect Jeffreyssai would have no trouble proposing that anyone designated for a suicide mission who reneged would be tortured for a year and then put to death.
Let’s say somebody who flies out with extra bombs instead of fuel has an overall 0.1% chance of making it back alive through some heroic exploit. Under the existing system, with 25% survival, you’re asking every pilot to face two half-lives worth of danger per mission. With extra bombs, that’s half as many missions, but each mission involves ten half-lives worth of danger. Is it really all that rational to put the pilots in general in five times as much danger for the same results? After all, drawing the long straw doesn’t mean you’re off the hook. Everybody’s going to have to fly a mission sooner or later.
Thinking in terms of “half-lives of danger” is your problem here; you’re looking at the reciprocal of the relevant quantity, and you shouldn’t try and treat those linearly. Instead, try and maximize your probability of survival.
It’s the same trap that people fall into with the question “if you want to average 40 mph on a trip, and you averaged 20 mph for the first half of the route, how fast do you have to go on the second half of the route?”
How do you answer this question?
Edit: MBlume kindly explained offsite before the offspring comments were posted. Er, sorry to have wasted more people’s time than I needed.
It’s still an interesting exercise to try to come up with the most intuitive explanation. One way to do it is to start by specifying a distance. Making the problem more concrete can sometimes get you away from the eye-glazing algebra, though of course then you need to go back and check that your solution generalizes.
A good distance to assign is 40 miles for the whole trip. You’ve gone 20 mph for the first half of the trip, which means that you traveled for an hour and traveled 20 miles. In order for your average speed to be 40 mph you need to travel the whole 40 miles in one hour. But you’ve already traveled for an hour! So—it’s too late! You’ve already failed.
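The same arithmetic in code, for anyone who’d rather see it laid out step by step (same numbers as the example above):

```python
total_distance = 40   # miles, for the whole trip
target_average = 40   # mph

time_budget = total_distance / target_average   # 1.0 hour allowed in total
time_spent = (total_distance / 2) / 20          # 1.0 hour used on the first half
print(time_budget - time_spent)                 # 0.0 -- no time left at all
```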
Yes, that’s roughly how MBlume explained it (edited for concision and punctuation):
If that’s an actual chat record, I’m getting old for this world. … okay, on a third read-through, I’m starting to comprehend the rhythm and lingo.
The original had more line breaks and less punctuation, but it’s real—what do you mean?
It felt like I was following, say for analogy, a discussion among Filipinos who were switching back and forth between English and Tagalog. But re-reading it twice I started to get the flow and terms. E.g. “nodnod” was opaque initially.
Nowadays young people are all like
Yes, that movie is a nice example of science fiction which deliberately makes up new words (so I presume) to give the viewer that fish-out-of-water “it’s the future” feeling. Star Trek does something like that which I think is called technobabble, which is also deliberately incomprehensible with a sciency twist. I get much the same feeling when I watch certain popular shows from places that speak English, but not American English, where people combine unknown references, unknown words, and pronunciation that I have to struggle to unravel.
Happily, in all cases the simple act of patiently familiarizing myself by repeated viewing works well to bring me up to speed, though I personally have never gone as far as learning Klingon.
If I remember correctly, it’s a blend of English, Russian, and Latin.
I guess it is rather bizarre. But most of the unusual conventions on IRC and other chat services are in order to make it more like a face to face conversation. They generally either allow you to narrate yourself from a third person perspective, or speed up common interactions that take much longer to type than they do in real life.
Although “nodnod” seems unusually nonsensical, since it takes longer to type than “yes”. I cannot say I have seen that used before.
I think it’s actually pretty close to normal English for a chat log.
I don’t doubt it. That’s why I said that I felt that I was getting old for the world. The unusual, out of place thing is me. I’m assuming that the chat log is typical.
Suppose the total trip is a distance d.
So if your average speed is 40 (mph), your total time is d/40.
You have already travelled half the distance at speed 20 (mph), so that took time (d/2)/20 = d/40. Your time left to complete the trip is your total time minus the time spent so far: d/40 - d/40 = 0. In this time you have to travel the remaining distance d/2, so you would have to travel at speed (d/2)/0 = infinity, which is impossible to actually do.
Let t1 be the time taken to drive the first half of the route.
Let t2 be the time taken to drive the second half.
Let d1 be the distance traveled in the first half.
Let d2 be the distance traveled in the second half.
Let x be what we want to know (namely, the average speed during the second half of the route).
Then the following relations hold:
40 (t1 + t2) = d1 d2.
20 * t1 = d1.
x * t2 = d2.
d1 = d2.
Use algebra to solve for x.
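Or, if you’d rather make a computer do the algebra, here’s one way with sympy (my choice of tool, not part of the original comment). Solving the three relations that don’t involve x forces t2 = 0, so the last relation has no finite solution:

```python
from sympy import Eq, solve, symbols

t1, t2, d1, d2, x = symbols('t1 t2 d1 d2 x')

# The three relations that don't involve x:
sol = solve([Eq(40 * (t1 + t2), d1 + d2),   # average 40 mph overall
             Eq(20 * t1, d1),               # first half at 20 mph
             Eq(d1, d2)],                   # the halves are equal distances
            [t1, t2, d2])
print(sol)   # {t1: d1/20, t2: 0, d2: d1}
# With t2 = 0, the remaining relation x * t2 = d2 becomes 0 = d1:
# impossible for any finite x unless the trip has zero length.
```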
To average 40 mph requires completing the trip in a certain amount of time, and even without doing any algebra, I notice that you will have used all of the available time just completing the first half of the trip, so your speed would have to be infinitely fast during the second half.
I am pretty confident in that conclusion, but a little algebra will increase my confidence, so let us calculate as follows: the time you have to do the trip = t1 + t2 = d1 / 40 + d2 / 40, which (since d1 = d2) equals d1 / 20, but (by equation 2) d1 / 20 equals t1, so t2 must be zero.
I expect a high probability of this explanation being completely useless to someone who professes being bad at math. Their eyes are likely to glaze over before the halfway point, and the second half isn’t infinitely accessible either.
I already had the problem explained to me before I saw the grandparent, but I think you’re right—I might have been able to puzzle it out, but it’d have been work.
Can’t resist the urge to chime in here...
I first started long-distance running when I was 20 years old. Up until that moment I hated everything about running, especially the sharp pain in my lungs whenever I ran more than 50 meters or so. But as soon as I gave it a serious try, the pain went away completely after about a week of morning runs, and from that day it never came back. Now I love running. And sometime later I found out that long-distance running is one of the few areas where humans can beat pretty much every animal out there. Something we’re naturally good at.
Another such area is intelligence.
I know that you are an intelligent person, whatever your other flaws. And I don’t completely understand what can stop an intelligent person from solving trivial puzzles such as orthonormal’s original question. Could your perception of “it’d be work” be the mental equivalent of the lung pain that inexperienced runners have? Something that would just go away forever if you gave it a week of effort?
Well, in the department of actual running, I have some kind of mysterious lung issue that means I need to gasp for air a lot even when I’m sitting still and have been for hours and it only gets worse if I try to do exercise more strenuous than a leisurely walk. (Armchair diagnoses appreciated, incidentally—so far I’ve stumped multiple doctors and new Google keywords are good.)
Here is something like the thought process that goes through my head when I encounter a problem of this approximate type:
I know what all those words mean. I could come up with a toy scenario and see what’s interesting about this problem, that someone bothered to bring it up.
It might be the sort of question where coming up with one toy scenario doesn’t answer it because for some reason it doesn’t generalize. Like it could have to do with the distance. I don’t want to come up with five different distances and work it out for all of them. I’d probably make an arithmetic mistake anyway. I can barely compose a mathematically accurate D&D character, and I’m way more motivated there than here. I’m not interested enough in this to do it in a calculator and then re-read the ticker tape. My eyes are swimming just thinking about it.
And because I’m not good at this, I would be reasonably likely to get it wrong, and then, no matter how much time I’d put into it myself, I would need to ask someone. I could get help if I asked. I am cute and friendly and there are helpful people around. I could get help even if I didn’t work on it myself. That would be faster, and then I’d know the answer, and I have to ask anyway, so why not just ask? Why not save the work, and not risk wasting a lot of time on getting a wrong answer and having to stare at all those numbers?
Record yourself (audio and video) during one of your attacks and I’ll have a much better idea. Right now, it’s extremely hard to tell from your description. Obviously, actually listening to you with a stethoscope and being able to perform a few tests would help me even more, of course.
By “attack” do you mean “one of the hundreds of occasions throughout an average day where I attempt to take an especially deep breath to satisfy my customary air hunger” or do you mean “run around until you collapse, gasping, and record that”?
The latter. But wait, you only have attacks when you run?
Running, biking, walking too fast up a hill, jumping on a trampoline, playing DDR.
Ok, but those are all exercise. Do you ever get attacks from cold, from allergens, from waking up in the middle of the night, from fear, from pain, from eating too much before bedtime?
Cold weather can make getting my deep breaths uncomfortable, but doesn’t seem to make me need more of them. I’m not allergic to anything except mushrooms (and that causes nausea, not breathing problems). I’ve never woken up in the middle of the night (that I know of) with breathing problems. I sometimes breathe oddly when afraid/in pain, but it doesn’t seem related. Eating too much at bedtime doesn’t do anything that eating too much at other times doesn’t.
Ok, so a video with audio of you exercising (or the aftermath) would be helpful. As I said, not as helpful as seeing you in person.
I’ll try to remember to do this next time, and may induce it deliberately if my next doctor visit is disappointing in this regard.
Sorry for deleting my comment. I’ve been doing this a lot lately—I write something and then notice that it’s stupid for one reason or another. (In this case it was the armchair diagnosing/other-optimizing.) Didn’t think you’d react so fast.
It’s okay. (I hope my thought process is interesting anyway.)
Well your last paragraph was interesting in a way. In fact I don’t understand it. The point of a puzzle is to stretch and work out your brain, not arrive at an answer asap. If you have a bus full of hostages whose fate depends on an arithmetical problem, it’s indeed wiser to ask someone else. But such situations don’t occur often. In fact I sometimes explicitly ask other people to avoid giving me any hints because I want to solve the puzzle myself. Asking for help is analogous to taking the bus instead of your morning run :-)
But well, I guess if you don’t enjoy puzzles already, then saying things like “c’mon jump in, the water’s fine” isn’t going to influence you much. Some things you really have to try before you can see the fun contained within. I think most things I enjoy in life fall in this category...
I hate being frustrated. It happens to me very easily. I hate not knowing the endings to stories, I hate not knowing what I’m getting for my birthday, and the only way I can not hate not knowing the answers to math problems is by not giving a flying fuck about them at all—which isn’t conducive to expending effort on solving them. I’ve generalized the “stop giving a fuck” self-defense strategy to other hatreds-of-not-knowing stuff, mostly to discourage people from teasing me with this neurosis. I believe that other people can enjoy various forms of not-knowing-stuff, or fail to hate it enough to override some competing desire to achieve knowledge on their own. But I don’t.
So basically, I looked at that math problem, sort of cared about knowing the answer, and asked. I got an answer (actually, several) which were quick enough to suit me. If the only way I could have learned the answer were to work it out for myself—or sit through ten minutes of algebra lessons or something—then I would have defensively ceased to care, instead.
Once I know the answer—in this case, that after having gone halfway at 20mph, you need to teleport to get to point B in time—then I can tolerate some further discussion of the scenario or the underlying math (although not arbitrary amounts). This is much the same as how, when I know that character X and character Y in some story eventually get together (or find the MacGuffin, or die, or whatever major plot item), I can often put up with extended periods of wondering exactly when and how.
I’ve found in the past that I remember the right answer better if I can guess it first and then get confirmation. It doesn’t help when I guess wrong, but when I guess right it’s a win.
Has the lung issue been a problem for your whole life? Is it better at some times and worse at others?
I don’t have a theory, but this seems like a reasonable starting point.
The lung thing has gone on for several years; I have a memory that doesn’t make sense without it that has to have taken place in fall 2006. I don’t remember exactly when it started but I have not always had it. (I suspect it began sometime after I started taking iron to treat my anemia, since no one ever connected the two; that would’ve been some months after I turned 17, so, late 2005-early 2006).
It does vary day to day and hour to hour, plus with what I’m doing (walking excessively briskly, or jumping around, or otherwise being active, makes it act up—it was outright crippling on one occasion last summer when I tried to bike a few blocks; I had to pull over and sit on the sidewalk for a while and then verrrrry carefully bike back, walking the thing up hills and only riding on levels and downhill.) There is an overall trend of worsening from year to year.
Who is ‘no one’ and which two did they fail to connect? Why do you say ‘since’?
I’m not a doctor. But it sure sounds to me that your blood is just not carrying enough oxygen to support vigorous exercise. Which is by definition ‘anemia’. Which comes in various forms, the most common of which can be treated by iron supplements, but the most serious of which have other causes and treatments. Just from what I read on the web, my guess would be you have ‘pernicious anemia’.
I would strongly advise going to a doctor again, and asking for blood tests. Be sure the doctor is informed about any ways in which your diet is unusual. Good luck.
“No one” is a couple of internists, my dad and my uncle (both doctors), and various random people I mentioned it to. What I mean is that people who know I used to be anemic and that I now have this dyspnea problem have never asked “I wonder if you still have [anemia-related subcondition] and that’s causing the dyspnea?”
The iron pills working was confirmed with a blood test. Before I took iron pills, my readings on some relevant feature of my blood were so low that I ought to have been fainting on a regular basis; after they were in normal range and I’ve been allowed to give blood several times since.
Asthma/reactive airway disease seems like the obvious thing here, so has that been ruled out? Did they have you blow into a thing to measure whether you were breathing a normal volume of air (spirometry)?
I have never had to blow into a thing. Since I have consulted plural doctors about this, I’m not sure why none of them would have thought of asthma if that were consistent with my symptoms. Why might that be?
Asthma usually shows up in childhood, not at 17, and maybe some of the doctors you saw assumed that previous doctors would have checked out the possibility of asthma already. The definitive test for asthma is called a methacholine challenge; basically you inhale a chemical that irritates your lungs, and if you’re asthma-prone then you have trouble breathing (there is no physical activity involved).
It probably isn’t a heart issue if you haven’t always had it...those are usually congenital...but I could be wrong. (Also not a doctor). But it sounds very debilitating, and worth fixing.
That is troubling. Even if it isn’t asthma it is definitely something to do with the lungs that influences breathing. Measurement of breathing capabilities should be one of the first things they do!
Honestly? No idea. Nursing student here, in no way a doctor. But if I were you I’d go to a doctor and describe my symptoms and say “Could I have asthma?” I’m thinking most likely outcome is they give you some meds to try, which could be a good thing, you know? Certainly before you go after any of the zebras you’ll get by Googling around.
My other thought is it could even be a heart issue, if your lungs check out. I’m kind of...surprised...that no one is more concerned about an apparently healthy young person being unable to breathe, so I’m guessing your oxygen saturation isn’t dropping to scary levels or anything.
Well, I’ve never passed out, and I tend not to take much exercise that this is interfering with, and doctors in general like to say that absolutely everything that could possibly be wrong with me will be fixed with exercise, which advice I ignore, which can then go on indefinitely “explaining” my problem. (I went in with little muscle twitches in my legs and eyelids once—I still get those, still don’t know what they are—doctor says, “Consider getting more exercise”. Exercise my eyelids, right? I’m not blinking enough?)
Clearly that is not what they are suggesting. Generalised exercise does clear up some such twitches. It may not in your case and it is even conceivable that you could die trying. But regardless the exercise suggestion in response to that complaint is not absurd or a valid target of mockery.
If this were the only thing doctors bizarrely said I should exercise for, I could shrug and say that maybe there is a mystical eyelid-exercise connection.
But I get told this a lot (mysterious lumps on your back? Exercise! Left foot makes a crunchy noise when you move it like so? Consider the benefits of exercise. You get lots of headaches? Try exercising! Sometimes the palms of your hands swell up for no reason? I recommend more exercise. You keep getting ingrown toenails? That may go away with enough exercise. If you exercise, you can’t breathe? Maybe you should fix that with exercise.)
I’m pretty sure that they’re coming up with this too quickly, for too many things, which means I’m suspicious of each instance of advice to get more exercise (for a specific problem; I’m tired of, but not suspicious of, the advice that it would just be a good idea in general.) I think it is more likely that they are saying this because I’m fat, not because it’s related to the problems I go in with. Often I am not even asked how much I exercise before a health professional says I need to do more of it.
Perhaps more importantly it sounds like they come up with ‘exercise’ then stop. For many things exercise will help, even if only indirectly, but it certainly isn’t the primary treatment. Sure, exercise is great for asthma, but so are steroids—and sometimes the former isn’t enough. Likewise for blood pressure and heart disease. Come to think of it, exercise is great for the symptoms of being a chronic smoker… but if the doctor stopped at ‘exercise more’ without going on to ‘stop smoking, dumbass!’ there would be a problem!
Apart from being interpersonally rude that doesn’t seem to reflect well on their medical competence. There is a relationship between exercise and fat storage but it isn’t all that strong. Doctors aren’t supposed to be going along with popular stereotypes!
I’m reminded of the day I was running my first marathon. I was up to the 30km mark and starting to feel it. Then along beside me comes a guy built like a 44 gallon drum on legs. He was puffing away but looked like he was ready to run another 30km then maybe do it again after lunch. While I was slowing down with fatigue and complete glycogen depletion he was accelerating. That rather completely destroyed any preconceptions I may have had that people with a high bodyfat ratio must not exercise.
It’s scary, but I have no problems believing that. Along similar lines my father’s doctor (well, former doctor) concluded from his high cholesterol on his bloodwork that he clearly eats too much fast food and needs to cut back on the KFC. Where was that doctor when the studies were published regarding just what the limits of the influence of diet on cholesterol are? Even worse is my mother’s (former) doctor insisting that she is a long term alcoholic and needs help, despite her protestation to the contrary. She hasn’t consumed a drop of alcohol in her entire life. The high indicators of liver strain are a known side effect of the rather potent medication she takes for neuropathic pain.
I often wonder just how some of these people go through a decade of training and still end up clueless.
I have a notion that sleep deficiency during that training damages their mental flexibility.
I’ll track it down if you care, but I just read something which claims that twitching eyelids are a minor side effect of stress. This does seem to be true for me.
For some people, exercise lowers their stress level, so the doctor’s advice isn’t entirely crazy.
On the other hand, casual surveying has turned up a shockingly consistent result that only about 20% of doctors listen and think. This implies that you need to be persistent if you need a diagnosis for something that’s even a little weird.
Even fairly easy problems like celiac can take a surprisingly long time to get diagnosed.
I’m not under stress most of the time.
Exercise raises my stress level. (One time, exactly one, in my entire life, I felt sort of high after getting some exercise. I thought it was cool and tried to reproduce it and couldn’t.) Even if I’m just walking, gently enough that my lungs don’t care, it makes my feet hurt and the rest of me too hot until all I want to do is fling myself into a frigid shower and collapse until I’m cold enough to believe that it’s safe to leave the water. (I routinely arrange to be too cold, because definitely being too cold is safer and more comfortable than maybe being too hot.)
Oh, you totally should get more exercise, and they are right to tell you that, although maybe they would have better luck if they came across as more helpful. But I’m a slug too and I can breathe ok; I just can’t run very far. I still vote asthma.
No, I should bloody well not get more exercise, because when I do, I can’t fucking breathe. Understood?
(Also I overheat, really easily.)
Walk. Start slow. Exercise indoors. You know the answers here. Advanced COPD patients still need to get out of bed and do physical therapy.
Sorry I made you mad, of course, but it doesn’t change anything.
Edit: downvote wasn’t me
Alicorn has implied that she isn’t completely sedentary or bedridden—she goes out walking, and sometimes gets seriously out of breath.
Is there any reason to think that more exercise will be good for her?
How much do you actually know about the subject?
More exercise is generally good for most people, which is one reason. Aerobic exercise tends to increase aerobic capacity, and decreases the risk of plenty of chronic diseases.
Alicorn admits she doesn’t get much exercise, so she’s clearly not in the tail of people who already exercise strenuously enough that more exercise would be harmful. Light exercise like I suggested is very unlikely to do any harm, and could very well even help her symptoms by improving her endurance.
By way of clarification, I do not intend to blame Alicorn for her symptoms. I understand why she might feel that way, and I regret that she has been made to feel dismissed in the past. My suggestion that she see a doctor and ask about asthma is much more urgent and important, so I want to reemphasize it rather than get stuck on the question of exercise.
Not exercising is a problem for anybody regardless of whether they have a breathing problem or not. It raises your chances of obesity, which later on can lead to lots of nasty consequences, and I think sedentary people show an increase in certain chronic diseases (heart disease for sure, type 2 diabetes possibly but I don’t remember for sure) regardless of weight. In that sense, more exercise would be good for Alicorn.
That being said, I think it’s misleading to say it would directly fix her mystery respiratory problem. (Even if it turns out to be asthma...exercise doesn’t cure asthma and can even trigger it for certain people). It might improve the symptoms in the long run. It might not. There might be another way to improve the situation enough that she can exercise.
Recent studies have suggested that obesity causes lack of exercise, rather than the other way around, and that’s why we see the correlation.
Got a cite? (Not that I disbelieve you, I find it highly plausible—I’m just curious.)
Here is an abstract for a longitudinal study that suggested childhood obesity leads to lack of exercise and not the reverse.
I feel certain I’ve seen reference to a more recent study involving adults, but I haven’t turned it up yet.
I’m half convinced. However, I keep reading that inactivity is unhealthy regardless of whether it is fattening. Therefore fat people have good reason to try to resist the tendency to inactivity induced in them by their fatness.
So: why are fat people inactive? My only tentative guess is that it is difficult for them to move their bodies, and they respond to the difficulty by moving less. This suggests the following possible remedy: strength training. With stronger muscles, your body feels like less of a burden, and so you are more likely to move around.
Very plausible. Also, fear and embarrassment could be factors. Several of my heavier friends have told me that they don’t like to go to the gym because they feel self-conscious surrounded by fitter people. This is probably also true of, for example, jogging in public; they are afraid of people watching them and judging them (“Look at that fat guy/girl trying to run!”).
Yes, this was the basis for a Jerry Seinfeld comedy routine: “We need to have a pre-gym, a gym-before-the-gym. A place where you can get yourself fit enough to be comfortable going to the regular gym.” (And this actually isn’t far from the reason for the success of the franchise Curves.)
I was strongly voted up a while back for making the above point and then suggesting we have the analogous website for LessWrong—a place where people can learn and discuss this stuff without being intimidated by those who know more.
If only the users of Curves graduated to regular gyms more frequently...
The study (which needs significant followup to create usable results) could have a number of interpretations, including:
* conclusions not fully supported by the data
* obesity leads to less enjoyment of motion
* obesity leads to fewer social opportunities to engage in sports
* low socio-economic status leads to obesity and to inactivity (due to insufficient access to parks, to parents who force you out of the house, etc.)
* people don’t record their activity levels every day, so their estimates are colored more by measurable factors (body weight) than by unmeasurable ones (how much they actually moved)
I’d hesitate to read too much into this study.
Interesting. Thanks for the heads-up. I will research that now.
Whereas I dropped 13kg very quickly not by more exercise, but by a change in diet.
The whole area is a minefield of YMMV …
Valid point. I’ve read that for sedentary people, starting an exercise regime is a poor way to lose weight. I still think it’s an excellent way not to gain weight in the first place...children who are active, who remain active as teenagers and young adults (and don’t grossly overeat), probably won’t put on the weight in the first place. I did an energy-expenditure study that showed I burn nearly 3000 calories a day, mainly because I can maintain an ‘intense’ level of exercise for an hour or more, whereas someone who is unfit and overweight probably can’t, and so wouldn’t burn nearly as many calories. Muscle mass also burns more calories at rest than fat tissue, so someone at a high level of fitness can eat more even on days when they don’t exercise.
The moral of the story: I’m going to put my kids in one physical activity after another (like my parents did with my siblings and myself) until they find one they can stick with, and I’m going to try to keep it a part of family life. After all, it takes far less willpower to maintain a lifelong habit than to start a new regime once you start putting on weight.
That was harsh...
I was in exactly the same situation when I was 15 before I was diagnosed with asthma, probably worse since there were a few days where I could not even walk up stairs because my lungs would seize up instantly. My doctor told me to try exercising more in spite of me having a low BMI, being unusually active, and having asthma, since the drugs which are available for people with asthma mainly treat the symptoms. If you want to avoid needing them in the first place, increasing your stamina is the only fix.
Of course, before you can exercise at all, you need to either find effective medications, or exercises which you can manage without killing yourself, but I don’t understand your reaction to Molybdenumblue.
Tangentially, your symptoms do seem to match asthma well to me. I would recommend asking for tests next time you see a doctor.
I think his comment came across as kind of snarky (“oh you should totally...”) and that might be why.
Oh dang girl, I’ve sent you PM’s about how I use my vagina and you still call me he? I think I just won the least feminine woman on the internet award or something.
You are probably justified in being insulted that I completely forgot the vagina conversation was with you.
Heh, I wasn’t insulted. I got into the habit of keeping my sex on the down low when I was playing WoW (because being conspicuously female gets you a lot of bad/icky attention and is only worth it if you play it for special treatment, which I didn’t want), so I don’t tend to spray a lot of girly text-pheromones around. That’s all I meant when I said I won the least feminine woman on the internet award. Definitely didn’t mean you were unfeminine, or un-feminist, or anything like that.
Whoops. I think more to the point, I don’t always remember who has what username. I’m a bit that way in real life too: I don’t always pay attention to who I’m talking to when I get onto discussing ideas. (I’m not sure if you meant “feminist” or “feminine” but I’m neither so that’s fine.)
I understand both of their arguments, but the emotions involved are incomprehensible...
I suppose I would have said nearly the same thing in Moly’s position, and would not have predicted that I was being offensive. It would be helpful to be able to empathize with people’s emotions, but I am apparently horrible at it.
I was diagnosed with asthma just over a year ago. (The only symptom I’ve ever had is that in winter when I get a cold, I cough for the rest of the year unless I go on steroid inhalers). My lung capacity dropped by 22% when I did the methacholine challenge test (inhaling an irritating chemical) but I barely noticed it. This is probably related to the fact that I started swimming competitively when I was eight, and my lung capacity is already much higher than the average for someone my height and weight. (I don’t know if I could have reached this point if my asthma had started before I began swimming, though. Ironically enough, I’m pretty sure my current asthma is caused by too much chlorine exposure over the years, and I’m considering taking a summer off from lifeguarding to “detox” myself enough that I can test negative on the asthma test.)
Me too. I get cold easily if I stay still, but even just walking briskly makes me start to sweat. I think this is one of those things that you can train with practice; the more you exercise, forcing your body to overheat, the better your body gets at efficiently disposing of the excess. Also getting used to it probably makes it less unpleasant… That being said, I hate exercising indoors on a treadmill or elliptical for precisely this reason. Biking outside is great in the spring and fall months, when my wonky thermostat actually works to my advantage and makes it possible to bike across the city on a −5 C morning.
Also, have you ever tried swimming for exercise? It has the benefits of burning a lot of calories without bringing your heart rate (and out-of-breath-ness) up as high as the same intensity exercise on land. Also if you hate the feeling of being sweaty, which I do, a nice temperature-controlled pool helps a lot. My brother’s asthma has improved drastically since he started swimming competitively...swimming does a lot more for your lung capacity and breath control than other activities of the same intensity.
Swimming in a temp-controlled pool is great on the overheating front and is the only known form of exercise where I am not bothered by sweating. However, pools tend to be either (A) indoors, with stiflingly enclosed humid environments where I can’t breathe comfortably (I sometimes have to stick my head out from behind the shower curtain when I’m in the shower, for reference, and didn’t like being in indoor pool environments even when I was kid and didn’t have clinically significant breathing issues) or (B) outdoors, and open only during the day, such that I have to either wear texturally-obnoxious sunscreen or crisp up like a rasher of bacon. Arranging to swim is also inconvenient—it requires changes to my state of dress, a new venue, etc, twice. Typically it is expensive, in a way that going for a walk is not.
None of these difficulties are individually insurmountable, and if all I had to do was one of living with humidity, or putting on sunscreen, or changing clothes twice, or going to a new location twice, or paying money, I’d get over it. I imagine that I’d swim a lot if I had a pool at my home, which would reduce it to a clothes-changing inconvenience if I swam in the dark. But I do not have a pool at my home.
Incidentally, I took a pulmonary function test a couple weeks ago. The guy who administered it wasn’t technically qualified to say so, but he thought everything looked normal, and if my GP agrees, the next step is probably to assume I have a heart problem.
Thanks for the update.
Agreed that swimming is massively inconvenient, which is one reason I’m trying to start running more...once I move on from working at a pool, it’ll be even less convenient since I won’t already be there in a bathing suit anyway, or be able to swim for free. (There is one interesting thing I’ve noticed about myself...I find it massively inconvenient to take a shower at home, whether before bed or first thing in the morning...for the most part I only shower after teaching swimming lessons or after a workout. This is ok because I’m in the pool nearly every day for some reason or other.)
I wish I had a pool at my home...oh I can dream.
Just a heads-up: pool ventilation varies. I’ve swum at several dozen different pools over my life, and some were awful, with the air hotter than the water and ridiculously humid. Some were excellent. Any big Olympic-size pool that hosts competitions tends to have better ventilation than your local neighborhood pool for kids and old ladies. Saltwater pools tend to have better air quality too, if there are any near your home. And lakes and rivers in summer are my favorite, although I have pretty low squeamishness and I know some people are more bothered by weeds, mud, fish etc.
Best of luck, I hope they figure it out soon.
My experience with swimming is that simply being in water strains your lungs more (as you have to displace water to inhale air). Simply going to the pool and talking to someone while in up to your neck is probably good lung training, but until she can do that comfortably and without attacks I would recommend against trying laps or other sort of exercise.
That’s probably where the long-term benefits of lung capacity come from. Aquafit (where your head doesn’t go underwater) is pretty low-intensity compared to on-land aerobics, and you don’t overheat so much so it might be preferable for her. On the other hand, something like breaststroke is a good way of training yourself to breathe rhythmically while exercising.
Do the twitches go at some harmonic of your heartbeat? I have had something similar, with eyelids, leg, neck, and other places twitching with the blood flow. It’s more likely to happen after prolonged stress, for me. The solution, for me, is to relax my muscles and modify my heart rate, usually with deep breaths.
The twitching is an erratic rhythm. I’ll take my pulse next time one happens and see if it’s even a little connected to my heartbeat, but I suspect not.
Agreed that the doctor telling you to exercise, at this point, is unhelpful and kind of a stupid thing to say. And I don’t think exercise would solve your problem. Not exercising is a risk factor for all sorts of other problems later in life, especially if it’s in combination with a not-very-healthy diet...but that’s a reason to look for solutions to your lung problem, not a reason for doctors to tell you that exercising will solve your lung problem.
Was this an intentional pun, an unintentional pun, or a misspelling of “pleural”?
Unintentional pun, I suppose.
What hypotheses did the doctors check?
Is it ok if I post this thread to my livejournal? A fair number of my readers are smart people with health problems, and they may either have heard of something like what you’ve got or may have information about the reliability of common tests for possible causes.
I don’t know what the hypotheses were. I’ve had a chest x-ray, which I was told revealed nothing interesting.
Post away.
The link.
I’ve gotten a few replies so far. It’s likely that all the replies will arrive within three days.
The Mayo Clinic, from my very limited experience, can be quite thorough. You will at least have many eyes on the problem, and the more the better.
They can offer financial assistance as well if they are not in network for your insurance. http://www.mayohealthsystem.org/mhs/live/locations/LM/pdf/FinancialAssistanceBrochure.pdf
This isn’t a very good example. Making D&D characters that fit the rules can be surprisingly tricky. There’s just a lot of data to keep track of and lots of little corner-case rules.
theonlysheet.com
A mostly solved problem. Although this doesn’t quite handle all possible combinations of those add-on books. Like the one which can be gamed to create what amounts to adamantium nano-bots (which are actually fairly reasonable if you think about what a rational individual would do given the physics, but are nevertheless not quite intended).
Manual arithmetic and rules knowledge would also be required to work out exactly how much damage can be done when using a locate spell to utterly obliterate nearly everything on an entire continent.
I have to agree that a shorter explanation with just words in it would be better for someone with significant aversive math conditioning.
It also doesn’t help the explanation when you make an error. That should be d1 + d2.
Acknowledged.
The probability of drawing the long straw twice in a row is four times as high as the probability of making it back twice in a row given 25% survival.
There is an additional constraint which you are missing here. Training pilots is expensive, and the military wants to recoup that cost by sending everyone they bother to train out on a certain number of missions. They weren’t going to just run two rounds and then decide the war’s over; rather, the plan was to keep sending out bombing raids until they run out of pilots, whether death or retirement, and then train some more pilots so they can continue bombing Japan.
That’s at best an argument against the political viability of iterated straw drawing due to general irrationality, not against the rationality of iterated straw drawing itself. The pilots are definitely worse off when everyone is sent to their death. If the pilots’ opinions don’t matter, sending them all to their death is the best option, since it saves training costs. If a compromise acceptable to the pilots needs to be found, then iterated straw drawing is the best option for everyone, for any possible mission count and any given cadre size, provided the pilots can make the necessary precommitment. Command might possibly reject this compromise because some pilots would appear to be freeloading, but would be acting irrationally in doing so.
Of course doing the straw drawing at an earlier stage and optimizing training for whether they are going to be sent to their death or not would be even more efficient, but that level of precommitment seems less psychologically plausible.
The problem is, pilots aren’t optimizing for overall survival. Somebody who wanted to live to see the end of the war, at all costs, could’ve just faked some medical problem and gotten themselves a desk job. The perceived-reproductive-fitness boost associated with being a member of the flight crew is contingent on actually flying (and making it back alive, of course). In simpler terms, nobody gets laid by drawing the long straw.
That’s your third completely unconnected argument, and this one doesn’t make round-trip missions to Japan rational either, assuming straw drawing is viable. Even if the pilots are rationally maximizing some combination of survival and military glory, that doesn’t mean Japan bombing-and-return missions with most of the load devoted to fuel are an efficient way to gain it. You could have all pilots volunteering to be part of the draw for one-way missions, and those who draw long being reassigned to Europe or wherever they can earn glory more fuel-efficiently.
You’re assuming that straw drawing is viable. I’m trying to show why it wasn’t.
You seem to have a theory, based on that invalid assumption, about what will and will not work to motivate people to take risks. Does that theory make any useful predictions in this case?
Then you are wasting everyone’s time; we already know that it wasn’t viable. It was suggested and rejected. The whole discussion was about a) what would be needed to make it viable (e.g. a sufficiently high rationality level and sufficiently strong precommitment) and b) whether it would be the rational thing to do given those requirements.
No. I was taking your model of what will and will not work to motivate people to take risks and demonstrating that your conclusion did not follow from it.
You’re still not understanding the math here. If we’re looking at this from the military’s perspective, it’s also a win, because an identical quantity of bombs dropped come at the cost of 2 flight crews (and planes) rather than 3.
You had a mistake in your mathematical intuition. That’s OK, it happens to all of us here. The best thing is to admit it. (NB: it’s your argument that’s deeply flawed and not the conclusion, so raising other reasons why this would be a bad policy is irrelevant at the moment.)
I still can’t figure this out. Faced with the choice between leaving, having learned nothing, and continuing to lose karma for every damn post I make, what does your model recommend?
Say you start out with 100 pilots. You have two options:
1. Send all 100 pilots to Japan with full fuel tanks. 25 make it back alive.
2. Send 50 pilots to Japan with twice as many bombs. 0 make it back, but 50 are still alive.
No, the military just wants to accomplish its missions. If more pilots are alive after each mission, it means more pilots available for future missions.
Honest questions are far less likely to be voted down than wrong statements.
When the general case seems confusing, it’s often helpful to work out a specific example.
Let’s say that there are 40 targets that need to be bombed, and each plane can carry 10 bombs normally, or 20 bombs if it doesn’t carry fuel for the return trip. We’ll assume for simplicity that half of the bombs loaded on a plane will find their targets (accounting for misses, duds and planes shot down before they’re finished bombing).
Then, with the normal scheme, it would take eight flights on average to bomb the forty targets, and six of those planes would go down. If instead the planes were loaded with extra bombs instead of return fuel, it would take only four such flights (all of which would of course go down).
If there are eight flight crews to begin with, drawing straws for the doomed flights gives you a 50% chance of surviving, whereas the normal procedure leaves you only a 25% chance. If those are all the missions, it’s clearly rational to prefer the lottery system to the normal one. (If instead the missions are going to continue indefinitely, of course, you’re doomed with probability 1 either way.) And of course the military brass would be happy to achieve a given objective with only 2⁄3 the usual losses.
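A quick script to check those numbers, under the same assumptions (40 targets, 10 or 20 bombs per plane, half the bombs counting, 25% survival on a round trip):

```python
targets, hit_rate, crews = 40, 0.5, 8

# Normal scheme: 10 bombs per plane, 25% of crews survive each sortie.
flights_normal = targets / (10 * hit_rate)   # 8.0 flights
lost_normal = flights_normal * 0.75          # 6.0 crews lost

# Bombs-instead-of-fuel scheme: 20 bombs per plane, every flight one-way.
flights_oneway = targets / (20 * hit_rate)   # 4.0 flights
lost_oneway = flights_oneway                 # 4.0 crews lost

print(1 - lost_normal / crews)     # 0.25 -- survival odds, normal scheme
print(1 - lost_oneway / crews)     # 0.5  -- survival odds, straw lottery
print(lost_oneway / lost_normal)   # 0.67 -- the brass lose 2/3 as many crews
```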
The difficulty with thinking in terms of “half-lives of danger” is that it’s the reciprocal of your probability of survival, so if you try and treat them as simple disutilities, you’ll run into problems. (For instance, if you’re facing a coinflip between dangerous activities A and B, where A consists of one half-life of danger and B consists of five half-lives, your current predicament is not equivalent to the “average value” of three half-lives of danger.)
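Concretely (the numbers here are mine, not from the comment above):

```python
import math

def half_lives(p_survive: float) -> float:
    """Survival probability expressed as 'half-lives of danger'."""
    return -math.log2(p_survive)

# Coinflip between A (one half-life) and B (five half-lives):
p = 0.5 * 0.5 ** 1 + 0.5 * 0.5 ** 5   # expected survival probability
print(p)               # 0.265625
print(half_lives(p))   # ~1.91 half-lives of danger, not the "average" of 3
print(0.5 ** 3)        # 0.125 -- what three half-lives would actually mean
```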
What if there’s a hidden variable? Say, a newly-trained flight crew has skill of 1, 2, or 3 with equal probability. On any given mission, each bomb has a 10% chance, multiplied by the crew’s skill, to hit its target, and then at mission’s end, assuming adequate fuel, the crew has the same chance of returning alive. If they do so, skill increases by one, to a maximum of seven.
Furthermore, let’s say the military has a very long but finite list of targets to bomb, and is mainly concerned with doing so cost-effectively. Building a new plane and training the crew for it costs 10 resources, and then sending them out on a mission costs resources equal to the number of previous missions that specific crew has been sent on, due to medical care, pensions (even if the crew dies, there are certain obligations to any surviving relatives), mechanical repairs and maintenance, etc.
What would the optimal strategy be then?
Further complications aren’t relevant to the main point. Do you understand the theory of the basic example now, or do you not?
Yes, I understand the theory.
OK then. You can of course add additional factors to the basic model, and some of these will mitigate or even overwhelm the original effect. No problem with that. However, your original mathematical intuition about the basic model was mistaken, and that’s what I was talking to you about.
In general: let’s say someone proposes a simple mathematical model X for phenomenon Y, and the model gives you conclusion Z.
It’s always a complicated matter whether X is really a good enough model of Y in the relevant way, and so there’s a lot of leeway granted on whether Z should actually be drawn from Y.
However, it’s a simple mathematical fact whether Z should be drawn from X or not, and so a reply that gets the workings of X wrong is going to receive vigorous criticism.
That’s all I have to say about that. We cool?
It is a longstanding policy of mine to avoid bearing malice toward anyone as a result of strictly theoretical matters. In short, yes, we cool.
Point taken, but I don’t think your math quite works here.
ETA: 1⁄4 survival; got it, sorry. Deleting soon.
Well, that’s exactly the objection I tried to cover with the second half of my comment.
The thing is that you’re assuming that it won’t actually be paid so that there effectively is no debt or pay. Under that assumption of course it won’t work, since you’re not actually doing it.
The debt is not silly, it’s a way of saving your country. If you have good debt collectors, and you owe enough you’ll want to fight. Use your imagination.
In the few cases where the probability of dying is near one instead of near zero, being as productive as possible and keeping just enough money to survive (third-world-type real poor, not the “poor” American kind) might still be better than dying. In these cases you’d basically have to punish people who can’t pay, instead of helping them pay as much as they can.
Then you are suffering strongly from the bystander effect. http://en.wikipedia.org/wiki/Bystander_effect
One could translate this effect as “the warm fuzzy feeling that there are enough people around who will do the job, so one doesn’t need to bother”.
The effect is very strong. So, adjust your thoughts: the barbarians will kill you either way. There aren’t enough people who care, so you yourself have to rise and do something. (That also applies to everyday life: if you want something done, especially in a busy, people-rich environment, do it yourself.)
In group #2, where everybody at all levels understands all tactics and strategy, they would all understand the need for a coordinated, galvanized front, and so would figure out a way to designate who takes orders and who does the ordering, because that is the rational response. The maximally optimal rational response might be a self-organized system where the lines are blurred between those who do the ordering and those who follow the orders, alternating in round-robin fashion or by some other protocol. That boils down to a technical problem in operations research or systems engineering.
On another note, sometimes the most rational response for ‘winning’ will conflict with our morality, or at least our emotions. Von Neumann advocated a first strike against the Soviets early on, and he might have been right. Even if his was the most rational decision, you can see the tangle of problems associated with it. What if winning means losing a part of you that was a huge part of the reason you were fighting in the first place?
Anyone reminded of ‘The World of Null-A’? Rationalists do win the war over barbarians in this case.
Depending on how broadly you look at it, this trope has been done around a thousand different ways in science fiction.
And the other way around, too, don’t forget Arthur Clarke’s “Superiority”. Of course, the losers there weren’t rationalists, just bureaucrats. There are plenty of people though who consider bureaucracy the most rational means of organizing large groups of people.
If that’s the van Vogt I was reminded of, it’s interesting because it posits that rational people will independently agree on what needs to be done in a military situation (iirc, at least in the simple early stages of guerrilla warfare), and not need centralized coordination.
I have no idea whether this is even plausible, but I’m not dead certain it’s wrong either.
As stated, it strikes me as unlikely, but something similar seems plausible.
People who have been trained consistently, and can rely on each other to behave in accordance with that training, find it easier to coordinate bottom-up. (Especially, as you say, in guerrilla situations.)
It’s not precisely “we all independently agree” but “we each have a pretty good intuition about what everyone else will do, and can take that prediction into account when deciding what to do.”
64 such people might independently decide that what’s necessary is to surround a target, realize the other 63 will likely conclude the same thing, select a position on the resulting circle unlikely to be over-selected (if all 64 are starting from the same spot, they can each flip a coin three times to pick an octant and then smooth out any clumpiness when they get there; if they are evenly distributed to start with they can pick the spot nearest to them, etc.), and move there.
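As a toy simulation of that coin-flip protocol (the function name and the uniformity check are mine; assume 64 agents and 8 octants as above):

    import random
    from collections import Counter

    def pick_octant():
        # Three fair coin flips give 3 bits, i.e. one of 8 octants.
        return sum(random.randint(0, 1) << i for i in range(3))

    # 64 agents independently pick an octant around the target.
    counts = Counter(pick_octant() for _ in range(64))

    # Expected occupancy is 64 / 8 = 8 per octant; the actual draw is clumpy,
    # which is what the "smooth out any clumpiness when they get there" step fixes.
    print(sorted(counts.items()))

Running this a few times gives octant counts more like 5 to 11 than a flat 8, which is why the protocol needs the smoothing step at all.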
This is a recurring theme in Dorsai… the soldiers share a very specific and comprehensive training that allows for this kind of coordinated spontaneity. Of course, this is all fictional evidence, but something like this ought to be true in real life. The question is under what circumstances this sort of self-organization does better than centralized strategic planning.
As the OP uses the term, at least, the Dorsai are more rational than their opponents, even though they might not describe themselves that way. We know this, because they consistently make choices that let them win.
From the USMC Warfighting Doctrine manual, pp. 62-63 (PDF pages 64-65):
(Believe it or not, I didn’t add any emphases to the above: the italicized phrases are that way in the original!)
Now, the USMC warfighting doctrine is specifically intended for state-vs-state warfare, so one may take it with a grain of salt as to whether it’s suitable for dealing with a barbarian horde or other guerrillas. But, at least it’s some non-fictional evidence. ;-)
Interesting. It seems to imply, however, that a rationalist would always consider, a priori, his own individual survival to be the highest ultimate goal, and modulate, rationally, from there. This is highly debatable: you could have a rationalist father who considers, a priori, the survival of his children to be more important than his own, a rationalist patriot who considers, a priori, the survival of his political community to be more important than his own, etc.
The moral of Ends Don’t Justify Means (Among Humans) was that even if philosophical thought experiments demonstrate scenarios where ethical rules should be abandoned for the greater good, real-life cases are not as clear-cut, and we should still obey these moral rules, because humans cannot be trusted when they claim that <unethical plan> really does maximize expected utility. We cannot be trusted when we say “this is the only way”, and we cannot be trusted when we say “this is better than the alternative”.
I think this may be the source of the repulsion we all feel toward the idea of selecting soldiers in a lottery and forcing them to fight with drugs and threats of execution. Yes, dying in a war is better than being conquered by the barbarians. I’d rather fight and risk death if the alternative is to get slaughtered anyway together with my loved ones after being tortured, and if the only way to avoid that is to abandon all ethics, then so be it.
But...
Even in a society of rationalists, the leaders are still humans. Not benevolent (“friendly” is not enough here) superintelligent perfect Bayesian AIs. Can we really trust them that this is the only way to win? Can we really trust them to relinquish that power once the war is over? Will living under the barbarians’ rule be worse than living in a (formerly?) rationalist society that resorted to totalitarianism? Are the barbarians really going to invade us in the first place?
Governments lie about such things in order to grab more power. We have ethics for a reason—it is far too dangerous to rationalize that we are too rational to be bound by these ethics.
Edit: lottery won by two votes → election.
I’ve heard you say a handful of times now: as justified by some decision theory (which I won’t talk about yet), I one-box/cooperate. I’m increasingly interested.
Eliezer has yet to disclose his decision theory, but see “Newcomb’s Problem and Regret of Rationality” for the general rationale. Also Wikipedia on Hofstadter’s superrationality.
I agree with every sentence in this post. (And I read it twice to make sure.)
I don’t understand the assumption that each rationalist prefers to be a civilian while someone else risks her life. They can be rational and use a completely altruistic utility function that values all people equally a priori. The strongest rationalist society is the one where everyone has the same terminal values (in an absolute rather than relative sense).
That isn’t an assumption Eliezer is making, it’s an assumption he’s attacking.
It doesn’t look like it:
Eliezer is analyzing the situation as a Prisoner’s Dilemma: different players have different utility functions. This analysis would be completely redundant in a society where everyone has the same utility function (or at least sufficiently similar, non-egocentric utility functions). In such a society there wouldn’t be a need for a lottery: the soldiers would be those most skilled for the job. There would be no need for drugs or for shooting deserters: the soldiers would want to fight, because the choice to fight would carry positive expected utility (even if it means a high likelihood of death).
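As a toy illustration of that point (the payoff numbers are just the standard Prisoner’s Dilemma values, not anything from the post):

    # (my_payoff, their_payoff), indexed by (my_move, their_move).
    C, D = "C", "D"
    payoff = {(C, C): (3, 3), (C, D): (0, 5),
              (D, C): (5, 0), (D, D): (1, 1)}

    def best_response(their_move, utility):
        return max((C, D), key=lambda m: utility(*payoff[(m, their_move)]))

    egoist = lambda mine, theirs: mine           # values only its own payoff
    shared = lambda mine, theirs: mine + theirs  # common, non-egocentric utility

    for their_move in (C, D):
        print(their_move,
              best_response(their_move, egoist),  # D: defection dominates
              best_response(their_move, shared))  # C: cooperation dominates

With the shared utility function, cooperating is the best response whatever the other player does, so the dilemma, and the need for enforcement, disappears.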
Perhaps slightly off topic, but I’m skeptical of the idea that two AIs having access to each other’s source code is in general likely to be a particularly strong commitment mechanism. I find it much easier to imagine how this could be gamed than how it could be trustworthy.
Is it just intended as a rhetorical device to symbolize the idea of a very reliable pre-commitment signal (in which case perhaps there are better choices because it doesn’t succeed at that for me, and I imagine would raise doubts for most people with much programming experience) or is it supposed to be accepted as highly likely to be a very reliable commitment signal (in which case I’d like to see the reasoning expanded upon)?
A life spent on something less valuable than itself is wasted, just as money is squandered on junk. If you want to respect the value of your life, you must spend it on something more valuable to you than you are. If you invest your life into something more valuable than you are, you are not throwing it away, you are ensuring that it is spent wisely.
People sacrifice their best years passing their genes on, knowing that the continuation of the species is more valuable than those years, and they fight in war because freeing themselves and future generations from oppression is more valuable than living a life in slavery.
Most rationalists would see that dying to continue the rational way of life is better than investing their lives into living like a Barbarian after being conquered.
Not to mention the fact that if the rationalists didn’t fight (say they left the area, or surrendered), that would encourage the Barbarians to push them around. After the Barbarians plundered their village, they’d look for a new target and, knowing that rationalists run, the rationalists would be particularly appealing, so they’d be targeted again. Make yourself an easy target and the Barbarians may plunder you so often you can’t survive in any case. Running away from that type of problem does not solve it.
They might be rational egoists. They don’t think anything is more valuable than themselves, since they are all they value.
I found this post very disturbing, so I thought for a bit about why. It reads very much like some kind of SF dystopia, and indeed if it were necessary to agree to this lottery to be part of the hypothetical rationalist community/country, then I wouldn’t wish to be a part of it. One of my core values is liberty: the ability of each individual to make his or her own decisions and live his or her life accordingly (so long as it’s not impeding anyone else’s right to do the same). No government should have the right to compel its citizens to become soldiers, yet that is what the lottery would amount to after the first generation, unless you’re going to exile anyone who reaches adulthood there and then opts out.
Offering financial incentives for becoming a soldier, as has already been discussed in the comments, seems a fairer idea. Consider also that the more objectively evil the Evil Barbarians are, the more people will independently decide that fighting is the better decision. If not enough people support your war, maybe that in itself is a sign that it’s not a good idea. If most of the rationalists would rather lose than fight, that tells you something.
It’s quite difficult to know the right tone of response to take here. The Evil Barbarians are obviously pure thought experiment, but presumably most of us would view a rationalist country as a good thing. Not if it made decisions like this, though. Sacrificing the individual for the collective isn’t always irrational, but it needs to be the individual who makes that choice based on his or her own values, not some perceived social contract. Otherwise you might as well be sacrificed to make more paperclips.
If it was intended as pure metaphor, it’s a disquieting one.
Oh, my first downvote. Interesting. Bad Leisha, you’ve violated some community norm or other. But given that I’m new here and still trying to determine whether or not this community is a good fit for me, I’m curious about the specifics. I wonder what I did wrong.
Necroposting? Disagreeing with the OP? Taking the OP too literally and engaging with the scenario? Talking about my emotional response or personal values? The fact that I do value individual liberty over the collective? Some flaw in my chain of reasoning? (possible, but if so, why not point it out directly so that I can respond to the criticism?)
Note: This post is a concerted rational effort to overcome the cached thought ‘oh no, someone at LW doesn’t like what I wrote :( ’ and should be taken in that spirit.
A single downvote is not an expression of a community norm. It is an expression by a single person that there was something, and it could be pretty much anything, about your post that that one person did not like. I wouldn’t worry until a post gets to −5 or so, and −1 isn’t very predictive that it will.
The “someone at LW doesn’t like what I wrote” part is accurate. You don’t need the “oh no” and “:(” parts. Just because someone disagrees with you, doesn’t mean that you are wrong.
Personally (and I did not vote on your post either way), I don’t think you are quite engaging with the problem posed, which is that each of these hypothetical rationalists would rather win without being in the army themselves than win with being in the army, but would much prefer either of those to losing the war. Straw-man rationality, which Eliezer has spent many words opposing, including these ones, would have each rationalist decline to join up, leaving the dirty work to others. The others do the same, and they all lose the war. The surviving rationalists under occupation by barbarians then get to moan that they were too smart to win. But rationality that consistently loses is not worth the name. It is up to rationalists to find a way to organise collective actions that require a large number of participants for any chance of success, but which everyone would rather leave to everyone else.
Some possible ways look like freely surrendering, for a while, some of one’s freedom. A general principle that Freedom is Good has little to say about such situations.
It’s not just one person, though. Having −1 points also means that nobody else thought the comment deserved more than that, or at least that it wasn’t worth their effort to vote it back up to 0. So if you have reason to think the comment has been read by more than a few people since it was downvoted, even −1 points reflects the community judgement to some extent.
Indeed, my quality threshold to upvote comments at −1 is much lower than my quality threshold to upvote comments at 0.
What function describes your threshold as the negative values go below −1?
Generally, the only types of comments that are below −3 that I upvote are ones which I think add a perspective to the conversation which should be there but should have a different proponent. It’s rare that I find a comment at less than −3 which I would fully endorse (but I have my settings set to display all comments).
Thank you for your response! It does help to be able to discuss these things, even if it seems a little meta.
Point taken.
Sure, I don’t need them. I included them as evidence of the type of flawed thinking I’m trying to get away from (If you’re familiar with Myers-Briggs, I’m an F-type trying to strengthen her T-function. It doesn’t come naturally).
You’re right. I noted that problem, but evaluated it as being less significant than the specifics of the extended example, which struck me as both morally suspect and, in a sense, odd: it didn’t seem to fit with the tone of most of the other posts I’ve read here. See my reply to dbc for more on that.
I agree. I’d add that those actions need to be collectively decided, but I agree with the principle.
A very sensible value in a heterogenous society, I think. But in this hypothetical nation, everyone is a very good rationalist. So they all, when they shut up and multiply, agree that being a soldier and winning the war is preferable to any outcome involving losing the war, and they all agree that the best thing to do as a group is to have a lottery, and so they all precommit to accepting the results.
No point in giving people the liberty to make their own individual decisions when everyone comes to the same decision anyway. Or more accurately, the society is fully respecting everyone’s individual autonomy, but due to the very unlikely nature of the nation, the effect ends up being one of 100% compliance anyway.
How do you feel about desertion?
It’s psychologically understandable, but morally wrong, provided the deserter entered into an uncoerced agreement with the organization he or she is deserting. If you know the terms before you sign up, you shouldn’t renege on them.
In cases of coercion or force (e.g. the draft) desertion is quite justified.
The topic of this article is how rational agents should solve a particular tragedy of the commons. Certainly, a common moral code is one solution to this problem: an army will have no deserters if each soldier morally refuses to desert. I don’t want to put words in your mouth, but you seem to think that common morality is the best, or perhaps only, solution.
I think Eliezer is more interested in situations where this solution is impractical. Perhaps the rationalists are a society composed of people with vastly differing moral codes, but even in this case, they should still be capable of agreeing to coordinate, even if that means giving up things that they individually value.
Yes, I see a common moral framework as a better solution, and I would also assert that a group needs at least a rudimentary version of such a framework in order to maintain cohesion. I assumed that was the case here.
The rational solution to the tragedy of the commons is indeed worth discussing. In this case, however, the principle behind the parable was obscured by its rather objectionable content. I focused on the specifics because they stayed fixed in my mind after reading, more than the underlying principle did. A less controversial example, such as advertising or overgrazing, would have prevented that outcome.
I know that’s a personal preference, though, and it seems to be a habit of Eliezer’s to choose extreme examples on occasion—I ran into the same problem with Three Worlds Collide. It’s an aspect of his otherwise very valuable writing that I find detracts from, rather than illuminates the points he’s making. I recognize that others may disagree.
With that in mind, I’m happy to close this line of discussion on the grounds that it’s veering off-topic for this thread.
In the least convenient possible world, in which winning like this and losing are the only options, does losing the war to barbarian invaders really bring more liberty than being drafted into a war?
Are you assuming that niceness (not torturing people, not killing civilians) is correlated with rationality?
To the extent that we all have common values, rationality should correlate with achieving those values: so if niceness is a general value, a rationalist community should be nice (or gain enough of another value to make up for the loss).
If niceness is not a reasonably universal value, then empirically our understanding of niceness still seems to correlate with rationality.
Reading this all brings a question to my mind. Does your Pareto-good decision theory allow me to get richer than average? Assume decreasing marginal utility of money for everyone.
This is a thoughtful, thorough analysis of some of the inherent problems with organizing rational, self-directing individuals into a communal fighting force. What I don’t understand is why you view it as a special problem that needs a special consideration.
Society is an agreement among a group of people to cooperate in areas of common concern. The society as one body defends the personal safety and livelihood of its component individuals and it furnishes them with certain guarantees of livability and fair play. In exchange, the component individuals pledge to defend the integrity of the society and contribute to it with their labor and ingenuity. This happens and it works because Pareto improvements are best achieved through long-term schemes of cooperation rather than one-off interactions. The obligation to collective defense, then, happens at the moment of social contract and it needs no elaboration. Even glancingly rational people in pseudo-rational societies recognize this on some level, and when society is threatened, they will go to its defense. So, there is no real incentive to defect against society when there is a draft to fight an existential threat because the gains of draft-dodging are greatly outweighed by the risk of the fall of civilization.
I think you go too far in saying that modern drafts are “a tool of kings playing games in need of toy soldiers.” The model of the draft can be abused, as it was in the US during the Vietnam War, where there was no existential threat and draft-dodging was the smart move, but it worked remarkably well during World War II when a truly threatening horde of barbarians did emerge.
Along these lines, why is it that a lottery and chemical courage “is the general policy that gives us the highest expectation of survival?” Why couldn’t we do the job with traditional selective-service optimization for fitness, intelligence, and psychological stability, coupled with the perfectly rational understanding that risking life in combat is better than guaranteeing societal collapse by running from battle?
Reading through your post, especially your suggestions for a coordinated response, I found myself thinking about the absurd spectacle of the Army of Mars in Kurt Vonnegut’s Sirens of Titan. New soldiers could get any kind of ice cream they wanted, right after their memories were wiped and implants were installed to beam the persistent “rent, rent, rented-a-tent” of a snare drum to their mind whenever they were made to march in formation. Somehow I don’t think Vonnegut was suggesting an improvement.
“social contract” [shudders], I don’t remember signing that one.
A “social contract” binding individuals to make self-sacrificing decisions doesn’t seem necessary for a healthy civilization. See David D. Friedman’s Machinery of Freedom for details; for a very (very) brief sketch consider that truck drivers rationally risk death on the roads for pay and that mercenaries face a higher risk of death for more pay—and that merchants will pay both truck drivers and soldiers for their services.
Soldiery doesn’t have to be a special case requiring different rational rules.
What army of free-market mercenaries could seriously hope to drive the modern US Armed Forces, augmented by a draft, to capitulation? Perhaps more relevantly, what army of free-market mercenaries could overcome the fanatical, disciplined mass of barbarians?
What I’m inferring from your comment is that a rational society could defend itself using market mechanisms, not central organization, if the need ever arose. Those market mechanisms might do well at supplying soldiers to meet a demand for defense, but I’m skeptical of the ability of the blind market to plan a grand strategy or defeat the enemy in battle. It’s also very difficult to take your business elsewhere when you’re hiring men with guns to stop an existential threat and they don’t do a good job of it. In order to defend a society, there must first be an understanding that there is a society, and that it’s worth defending.
Plenty of private corporations seem to do quite well at grand strategy and defeating enemies in market competition. It doesn’t seem a huge stretch to imagine them achieving similar success in battle. Much of military success comes down to logistics and I think a reasonable case can be made that private corporations already demonstrate greater competence in that area than most government enterprises.
Big ones.
I’m reminded of the Iain M. Banks Culture in its peaceful and militant modes.
It would be really interesting to brainstorm how to improve a military. The conventional structure is more-or-less an evolved artifact, and it has the usual features of inefficiency (the brainpower of the low ranks is almost entirely wasted) and emergent cleverness (resilience to org-chart damage and exploitation of the quirks of human nature to create more effective soldiers). Intelligent design ought to be able to do better.
Here’s one to get it started: how about copying the nerve structure in humans and having separate, parallel afferent and efferent ranks? That is, a chain of command going down, and a chain of analysis going up.
I think there’s more contribution from the bottom up in a modern well-functioning military than you realize. One of the obstacles the US military’s trainers face in teaching in other countries is getting officers to listen to their subordinates. In small units, successful leaders listen to their troops; in larger units, officers listen to their subordinates.
But in all those cases, there comes a time when the leader is giving orders, and at that point, the subordinates are trained to follow. The system doesn’t work if it doesn’t insist that the leader gets to decide when it is time to give orders.
But effectiveness comes from leaders who listen since, as you said, there are many more sensors at the edges of the org chart. The Culture is good at many things, but Banks doesn’t show small-unit operations in which the leader gains by listening.
Ah, you miss what I was aiming at. The “sensory” ranks don’t give orders. They’re an upward ideas pump. Rank in the two modes is orthogonal. The “motor” ranks command as normal. High ranks in both listen, but to different things. The “motor” leader wants to know where the enemy are and if the men have a bright tactical idea. The “sensory” collator might be more interested in a clever strategic analysis, a way to shorten the supply chain, or a design for better field camouflage.
If I understand you, I think that’s part of what is supposed to happen, though the communication is more lateral than I said at first. In addition to ideas going from the troops to their sergeants and from squad leaders to their commanders, new innovations spread from squad-to-squad.
After D-Day, the tactics required to get through narrow lanes surrounded by hedgerows were developed by individual tank teams, and tank groups picked up successful ideas from each other. In Iraq, methods for detecting ambushes and IEDs weren’t developed at headquarters and promulgated from the top down; they arose as the result of experiment and spread virally.
There may be an advantage to having specialists who are looking for that kind of idea and for ways of spreading it, but I’d go with the modern management practice of empowering everyone and encouraging innovation by everyone who is in contact with the enemy. In business, it’s good for morale, and in most arenas it multiplies the number of brains trying to solve problems and trying to steal good ideas.
Indeed. I wonder what the “expected utility of future selves” crew makes of this.
I think you may have waded into the trees here, before taking stock of the forest. By which I mean that this problem could definitely use some formalization, and may be much more important than we expect it to be. I’ve been studying the ancient Mongols recently; their conquests tested this, and their empire had the potential to be a locked-in dystopia type of apocalypse, but they failed to create a stable internal organizational structure. Thus, a culture that optimizes for both conquest & control, at the expense of everything else, could be an existential threat to the future of humanity. I think that merits rationalist research.
So, in a superfight of Rationalists vs Barbarians, with equal resources, we can steelman the opposition by substituting in that “Barbarians” means a society 100% optimized toward conquering others & taking all their stuff, at the expense of not being optimized toward anything else. In this case, what would the “Rationalist” society be, and how could they win? Or at least not lose?
I am going to go check if there are other articles which have developed this thought experiment already.
I think that’s an understatement of the potential danger of rationality in war. Not for the rationalist, mind, but for the enemy of the rationalist.
Most rationality, as elaborated on this site, isn’t about impassively choosing to be a civilian or a soldier. It’s about becoming less vulnerable to flaws in thinking.
And war isn’t just about being shot or not shot with bullets. It’s about being destroyed or not destroyed, through the exploitation of weaknesses. And a great deal of rationality, on this very site, is about how to not be destroyed by our inherent weaknesses.
A rationalist, aware of these vulnerabilities and wishing to destroy a non-rationalist, can directly apply their rationality to produce weapons that exploit the weaknesses of a non-rationalist. Their propaganda, to a non-rationalist, can be dangerous, and the techniques used to craft it nigh-undetectable to the untrained eye. Weapons the enemy doesn’t even know are weapons, until long after they begin murdering themselves because of those weapons.
An easy example would be to start an underground, pacifistic religion in the Barbarian nation. Since the barbarians shoot everyone discovered to profess it, every effort to propagate the faith is directly equivalent to killing the enemy (not just that, but even efforts to promote paranoia about the faith also weaken enemy capability!). And what defense do they have, save for other non-rationalist techniques that dark side rationality is empowered to destroy through clever arguments, created through superior understanding?
And we don’t have to wait for a Perfect Future Rationalist to get those things either. We have those weapons right now.
I’m sure meditation and self-hypnosis would be great… but I’m voting to include real virgins in the lottery! Given that a rationalist society would probably have tinkered with the gender balance and the universal appeal of a hero in uniform I wouldn’t expect too many complaints!
You didn’t mention in the Newcomb’s Problem article that you’re a one-boxer.
As a die-hard two-boxer, perhaps someone can explain one-boxing to me. Let’s say that Box A contains money to save 3 lives (if Omega thinks you’ll take it only) or nothing, and Box B contains money to save 2 lives. Conditional on this being the only game Omega will ever play with you, why the hell would you take Box A only?
I suspect what all you one-boxers are doing is that you somehow believe that a scenario like this one will actually occur, and you’re trying to broadcast your intent to one-box so Omega will put money in for you.
Imagine Omega’s predictions have a 99.9% success rate, and then work out the expected gain for one-boxers vs two-boxers.
By stepping back from the issue and ignoring the ‘can’t change the contents now’ issue, you can see that one-boxers do much better than two-boxers, so as we want to maximise our expected payoff, we should become one-boxers.
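Worked out with an assumed 99.9% accuracy and the 3-lives/2-lives stakes from the grandparent:

    p = 0.999  # assumed accuracy of Omega's prediction

    # One-box: Box A holds the 3-lives money iff Omega predicted one-boxing.
    ev_one_box = p * 3 + (1 - p) * 0        # = 2.997 lives

    # Two-box: Box B's 2 lives always; Box A's 3 only if Omega mispredicted.
    ev_two_box = p * 2 + (1 - p) * (3 + 2)  # = 2.003 lives

    print(ev_one_box, ev_two_box)

With these stakes the break-even accuracy is p = 5⁄6; anywhere above that, one-boxing wins in expectation.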
Not sure if I find this convincing.
I posted that comment four or five months ago. I’m a one-boxer now, haha. Figure that you can either choose to always one-box or choose to pretend like you’re going to one-box but actually two-box. Omega is assumed to be able to tell the difference, so the first option makes more sense.
I can choose, through the composition of my mind, to save 3 lives by being willing to refuse the money that saves 2 lives. Or I can choose to save the 2 lives and thus not get the 3. Why the hell would I take both boxes?
I guess that makes sense. If you have the option of choosing what the composition of your mind is.
“Composition of my mind” is a bad phrase for it, but what I mean is that I have a collection of neurons that say “I’m a one-boxer” or similar.
You can find several long discussions of this on Overcoming Bias, and in earlier posts on Less Wrong.
The example of Athens vs. Sparta is our best datapoint: It pitted the ancient world’s most rational society against the ancient world’s greatest warrior society, well-controlled in terms of wealth, technology, geography, and genetics. Their war was evenly matched, but Sparta won in the end.
Sparta was 1⁄3 the size of Athens+Attica (100,000 vs. 300,000), with only 1⁄5 as many citizens (8,000 vs 40,000).
Not a very good data point. Athens at that time was not a community of rationalists. Xenophon’s March of the Ten Thousand and Thucydides’ History of the Peloponnesian War are both fairly readable classical sources for the extreme stupidity (even by modern democratic standards) of the Athenian democratic process. And their army voted for autocratic under-commanders who then had life-and-death power over the troops. The distant Athenian democracy voted for autocratic over-commanders.
I didn’t say it was very good. I said it was our best. The world has never had a country of rationalists.
If the point of the original post was that a society of inhumanly-rational people can win wars, then it’s of limited applicability at present. I’m assuming that we’re talking about IQ 100 Bayesians. (Which may be an empty set.)
I know this post is long, long dead but:
Isn’t this a logical impossibility? To have knowledge is to contain it in your source code, so A is contained in B, and B is contained in A...
Alternatively, I’m considering all the strategies I could use, based on looking at my opponent’s strategy, and one of them is “Cooperate only if the opponent, when playing against himself, would defect.”
“Common knowledge of each other’s rationality” doesn’t seem to help. Knowing I use TDT doesn’t give someone the ability to make the same computation I do, and so engage TDT. They have to actually look into my brain, which means they need a bigger brain, which means I can’t look into theirs. If I meet one of your perfectly rational agents who cooperates on the true prisoner’s dilemma, I’m going to defect. And win. Rationalists should win.
It is possible to predict the output of a system without emulating the system. We can use the idea of “emulating their behavior” as an intuition pump if that helps, but to assume that emulation is required is a mistake.
Why on earth would I cooperate with you? You just told me you were going to defect!
(But I do respect your grappling with the problem. It is NOT trivial. Well, I should say it is trivial but it is hard to get your head around it, particularly with our existing intuitions.)
A = “Preceded by its own quotation with A’s and B’s swapped is B’s source code” preceded by its own quotation with A’s and B’s swapped is B’s source code. B = “Preceded by its own quotation with B’s and A’s swapped is A’s source code” preceded by its own quotation with B’s and A’s swapped is A’s source code.
A and B each now contain the other’s source code.
Edit: I used “followed” when it should have been “preceded”.
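A rough Python rendering of that construction, if it helps (the swap function and variable names are mine; the asserts just check that applying the stated rule to A yields B and vice versa):

    def swap(s):
        # Exchange every 'A' and 'B' in s.
        return s.translate(str.maketrans("AB", "BA"))

    phrase = "preceded by its own quotation, with A's and B's swapped, is B's source"
    A = repr(phrase) + " " + phrase  # quotation of the phrase, then the phrase
    B = swap(A)                      # applying A's own rule to A yields B

    # B has the same quotation-plus-phrase form with the roles reversed,
    # and applying the rule to B yields A again:
    assert B == repr(swap(phrase)) + " " + swap(phrase)
    assert swap(B) == A

So each string does, in a readable sense, contain the other’s “source code” without infinite regress.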
No. If you know all the relevant data yourself, you don’t have to know it again just because B knows it. That is just a naive, inefficient way to implement the “source code”. Call the code “DRY”, for example. Or consider it an instruction to do a “shallow copy” and a “memory free” after getting a positive result from a “deep compare”.
The idea is that A and B are passed each other’s source code as input (and know their own source code thanks to that theorem that guarantees that Turing machines have access to their own source code WLOG, which I think DanielLC’s comment proves). There’s no reason you can’t do this, although you won’t be able to deduce whether your opponent halts and so forth.
Your opponent might not halt when given himself as input.
The problem with your plan is that TDT agents don’t always cooperate. I will only cooperate if I have reason to believe that you and I are similar enough that we will decide to do the same thing for the same reasons. I hate to burst your bubble, but you are not the first person in all of recorded history to think of this. Other people are allowed to be smart too. If you come up with a clever reason to defect when playing against me, it is very possible (perhaps even likely, although I don’t know you all that well) that I will think of it too.
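One crude way to cash out “similar enough” is exact source matching, sometimes nicknamed CliqueBot in later discussions of this problem; a sketch, not a rendering of TDT:

    import inspect

    def cliquebot(opponent_source):
        # Cooperate iff the opponent is textually identical to me; else defect.
        my_source = inspect.getsource(cliquebot)
        return "C" if opponent_source == my_source else "D"

    # Two copies of cliquebot cooperate with each other; anything textually
    # different (including a would-be clever defector) gets defected against.
    print(cliquebot(inspect.getsource(cliquebot)))      # C
    print(cliquebot("def defectbot(src): return 'D'"))  # D

It is brittle, since even a renamed variable breaks the clique, which is exactly why “similar enough that we will decide the same thing for the same reasons” is the hard part.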
I’m wondering why you wrote this article. The (gensym) you’re describing and assigning the name “war” has virtually nothing to do with any real-world “war” situation, so you could just as well describe it as a thought experiment, or use some less loaded metaphor.
Too high connotation to denotation ratio for me.
Why do you say that?
I think the point of this post is simply to confront head-on a scenario that’s taken to be a reductio ad absurdum of “rationalism shouldn’t lose to irrationalism”. I don’t see it as intended for practical purposes under any conceivable circumstances.
No. Just No.
A society of rational agents ought to reach the conclusion that they should WIN, and do so by any means necessary, yes? Then why not just nuke ’em? *
*replace ‘nuke’ with whatever technology is available; if our rationalist society has nanobots, we could modify them into something less harmful than barbarians.
Offer amnesty to barbarians willing to abandon their ways; make it as easy as we can for individual barbarians to defect to our side; but above all, make sure the threat is removed. That’s what constitutes winning.
Turning individual lottery-selected rationalists into “courageous soldiers” is not the way to do that. That’s just another way of losing.
Furthermore, the process of selecting soldiers by lottery is a laughably bad heuristic. An army of random individuals, no matter how much courage they have, is going to be utterly slaughtered by an army whose members are young, strong, fast, healthy, and all those other attributes. If the lottery is not random but instead gives higher weight to the individuals best fit to fight, then it is no different from the draft decried above.
This is a terrible post, the first one so awful that I felt moved to step out of the lurkersphere and comment on LW.
Don’t assume the rationalists have super powerful technology.
Fictional beisutsukai would invent it soon enough.
Yeah, that’s a more complex issue—coordination among agents with different risk-bearing efficiencies. If you have an agent known to be fair or sufficiently rigorous rules of reasoning that you can verify fairness, then it’s possible for everyone to know that they’re taking “equal risk” in the sense of being at-risk for being recruited as a teenager. (But is that the same sort of equal risk as being recruited if you have genes for combat effectiveness?)
A society of rationalists would work it out, but it might be more complicated. And as Lawliet observes, you shouldn’t assume you’ve got nukes and the Soviets don’t.
Perhaps there is a reason that America (and other nuclear powers, but America most recently) doesn’t just nuke its enemies. If the enemy group were truly a barbarian horde, with no sympathy generated from the remainder of the world, then perhaps rationalists would find it easier to nuke them. But in any other circumstance (which is to say, the Least Convenient Possible World), the things you described above would be useful (amnesty etc.). We only nuke ’em when that produces the best long-term outcome, including the repercussions of the use itself—such as the willingness of other countries to use such weapons for less defensive purposes.
The draft is objectionable not because it selects for the best soldiers but because it is overused, if I read the original post correctly. Proper use of the lottery/draft is only for directly defending the security of the original state, rather than projecting the whims of kings onto the world.
My reaction is similar to Nanani’s, this is awful.
How are wars, armies, soldiers, and all the trappings relevant to actually practiced rationality? Smart thinking trapped in a stupid metaphor is still stupid, right?
How does one classify these two armies? What IQ, measure of rationality, or other characteristic separates the two sides?
I’d suggest the works of Gene Sharp (http://en.wikipedia.org/wiki/Gene_Sharp) at http://www.aeinstein.org/ as he and his associates are behind some of the few successful (i.e. towards democracy) revolutions since the Cold War.
It is far from clear that the “colour revolutions” resulted in more democracy in respective countries. See e.g. http://en.wikipedia.org/wiki/Saakashvili#Criticism
To be an ally of the West is not the same as to be a democratic country. Similarly, the elections are not automatically rigged if communists win.
So, according to Freedom House, countries with nonviolent revolutions since the late 1990s are improving. There’s not a lot of data beforehand. You named the exception: Georgia’s gotten a little worse since the overthrow of the “rigged” election there. Look at the data: http://www.freedomhouse.org/template.cfm?page=42&year=2008
I’m willing to admit I might have some Western bias, but I try to catch it. The general consensus does seem to be that the elections were rigged, but I don’t know enough to say with much confidence either way.
In my original post, I was referring to the period of actual revolution, not everything since. I know it’s not all sunshine and rainbows. Reality is gritty, nasty stuff. Nonviolent struggle strategy and tactics do not guarantee success nor democracy—but neither do violent methods.
If we’re discussing strategies and tactics, most nonviolent movements do not plan much past overthrow. That’s bad, but again, no worse than violent overthrow.
These are big and fuzzy concepts, for sure. When does a revolution actually end? If a less or equally undemocratic leader is elected, is that a failure of nonviolent struggle, a failure of planning, a failure of the people, or what? Are Freedom House’s metrics valid or consistent? I don’t have good answers.
If you were to wager on whether strategic nonviolent or strategic violent struggles in the modern day were more likely to lead toward a successful overthrow, how would you bet? What about leading toward more democratic overthrows (i.e. elections)?
http://en.wikipedia.org/wiki/The_Last_Article
Interestingly, Denmark used nonviolent resistance very effectively against the Nazis while being a nominal ally of Germany. (If they weren’t distracted by fighting a war, it probably wouldn’t have been nearly as effective, though—the Nazis simply couldn’t spare the manpower to effectively impose martial law, although they did attempt to do so.)
It’s an unfair example. Danes were fellow Aryans, and so were objects of empathy in a sense that folks in India wouldn’t have been.
There were Indians fighting along with Germans:
https://en.wikipedia.org/wiki/Indian_Legion
Agreed.
http://en.wikipedia.org/wiki/The_Great_Explosion
http://www.abelard.org/e-f-russell.php
That story is so totally cheating. The evil empire the author uses is a toothless, law-bound caricature. Face the Gands with a less congenial interstellar empire, say the Mexica from “Wasteland of Flint”, and they just die, in very large numbers, and the Mexica enslave the preschool kids and re-colonize the planet. Game over, player two wins.
dclayh, Yes, that came to mind for me too. The small-town Gandhian libertarianism of Russell’s story is entertaining, and just as silly. Yet, you didn’t receive any karma points, and Eliezer received several, so either someone out there thinks a fictional short story is a reasonable rebuttal, or people are scoring for support of a side or entertainment.
Eliezer, I don’t see how Russell or Turtledove even belong as anything more than footnotes, unless the discussion is about fiction writers creating alternate universe just-so stories that tend to align with their ideologies. I didn’t think Less Wrong, of all places, would be where I’d have to insist that short story fiction is not adequate or reasonable evidence, or any sort of rebuttal, against real world claims or case studies.
Please try actually reading Sharp. He’s not Gandhi. Neither is Robert Helvey—he’s actually a retired US colonel.
Having had to explain to other sci-fi lovers in the past why using fiction as a counterargument is so silly, I googled to see if people had written about why it’s silly. GUESS WHAT I FOUND?
The Logical Fallacy of Generalization from Fictional Evidence, by Eliezer Yudkowsky: http://www.overcomingbias.com/2007/10/fictional-evide.html
Yeah, my jaw dropped when I found that. I’m sure you won’t respond to this, as the mass of LW moves on to the most current post, but really? Was this a self-aware joke? Eliezer 2009 is that much less rational than Eliezer 2007?
It’s not evidence, but it is a pointer to an argument from existing knowledge: “you know, X probably would actually result from Y”. (Well, to a bounded rationalist that is a kind of evidence, but a kind that is not nearly as problematic to get from fiction.)
I think E. was trying to swat the old idea “smart people lose to unreasonable people” specifically in war, and perhaps also metaphorically in general competition.