I find it ironic that you use a military example to illustrate how we can achieve collective action at the civilization level.
Isn’t the fact that the Spartans were willing to “come back with their shields—or on them” the epitome of our kind not being able to cooperate?
I always interpreted “our kind” as the whole of humanity, so for me one subset of humanity banding together to destroy another subset (or die trying) isn’t a good example of civilization-level cooperation, or the kind of meme that would be useful to spread.
I always interpreted “our kind” as the whole of humanity,
Did you read the linked article? In it Eliezer is contrasting rationalist and religious institutions. You may also want to read this to get a sense of the problem James Miller is trying to address. Here is a relevant quote:
Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.
Now there’s a certain viewpoint on “rationality” or “rationalism” which would say something like this:
“Obviously, the rationalists will lose. The Barbarians believe in an afterlife where they’ll be rewarded for courage; so they’ll throw themselves into battle without hesitation or remorse. Thanks to their affective death spirals around their Cause and Great Leader Bob, their warriors will obey orders, and their citizens at home will produce enthusiastically and at full capacity for the war; anyone caught skimming or holding back will be burned at the stake in accordance with Barbarian tradition. They’ll believe in each other’s goodness and hate the enemy more strongly than any sane person would, binding themselves into a tight group. Meanwhile, the rationalists will realize that there’s no conceivable reward to be had from dying in battle; they’ll wish that others would fight, but not want to fight themselves. Even if they can find soldiers, their civilians won’t be as cooperative: So long as any one sausage almost certainly doesn’t lead to the collapse of the war effort, they’ll want to keep that sausage for themselves, and so not contribute as much as they could. No matter how refined, elegant, civilized, productive, and nonviolent their culture was to start with, they won’t be able to resist the Barbarian invasion; sane discussion is no match for a frothing lunatic armed with a gun. In the end, the Barbarians will win because they want to fight, they want to hurt the rationalists, they want to conquer and their whole society is united around conquest; they care about that more than any sane person would.”
And that’s assuming the rationalists don’t simply surrender without a fight on the grounds that “war is a zero sum game”.
I didn’t read the linked article—it certainly seems to frame the issue as rationalists vs. barbarians, not humanity vs. the environment (and the flaws of humanity), so thanks for pointing that out.
I do think fundamentalists/extremists/terrorists have an asymmetrical advantage in the short term in that it’s always easier to cause damage/disorder than improvement/order. This quote above seems to be a particular example of this phenomenon.
However, I have to agree with Jiro’s comment. Extremists may be able to destroy things and kill people, but I wouldn’t say they’ve been able to conquer anything. To me, “conquer” implies taking control of a country, making its economy work for you, dominating the native population, building a palace, etc. Modern extremists commit suicide and then their mastermind hides silently for a decade until helicopters fly in and soldiers kill him.
When referring to actual barbarians, the description of the barbarians seems to lie by omission—even if all the things described above are mostly true, the barbarians have wrecked their economy because central planning doesn’t work no matter how many orders they give, burning people at the stake is bad for investment, their belief in an afterlife is associated with other beliefs that prevent them from making or even efficiently using scientific advances, and their inability to have sane discussion means they can’t make tactical decisions or really plan anything well at all. (Etc.) That sort of thing is pretty much the reason that the West hasn’t been conquered by Muslim fundamentalists yet.
Also, barbarism doesn’t arise at random. Some social structures are more conducive to barbarism than others and they may have inherent flaws which reduce the efficiency of conquest even as their encouragement of barbarism increases it.
the barbarians have wrecked their economy because central planning doesn’t work no matter how many orders they give
Not all barbarians do that. The communists did that, but they also considered themselves rationalists and were considered such by many people at the time. Muslim fundamentalists generally don’t.
burning people at the stake is bad for investment
Depends on who’s being burned and why. Having the highest per capita rate of capital punishment doesn’t seem to have hurt Singapore’s ability to get investment.
their belief in an afterlife is associated with other beliefs that prevent them from making or even efficiently using scientific advances
I don’t think so. It might hurt their ability to make scientific advancements, but they’re perfectly capable of using them once someone else makes them.
Also, barbarism doesn’t arise at random. Some social structures are more conducive to barbarism than others and they may have inherent flaws which reduce the efficiency of conquest even as their encouragement of barbarism increases it.
‘Rationalist’ societies can also have inherent flaws, such as trouble solving the collective action problems associated with wars.
This feels to me like a just world fallacy, or perhaps choosing the most convenient world. Yes, if the barbarians are completely stupid, they are probably not so much of a danger these days. If they are completely anti-science, we probably have better guns.
Now imagine somewhat smarter barbarians, who by themselves are unable to do sophisticated science, but have no problem kidnapping a few scientists and telling them to produce a lot of weapons for them, otherwise their families will be burned at the stake. (Even if their religion prevents them from doing science, they may compartmentalize and believe it is okay to use the devil’s tools against the devil himself.) Suddenly, the barbarians have good guns, too.
Maybe the reason why the West hasn’t been conquered by Muslim fundamentalists yet is that Muslims don’t have an equivalent of Genghis Khan. Someone who would have the courage to conquer the nearest territory, horribly kill everyone who opposed them, let live those who didn’t (and make this fact publicly known), take some men and weapons from the conquered territory and use them to attack the next territory immediately, et cetera, spreading like a wildfire. First attacking some smaller but civilized countries to get better weapons for attacking the next ones. With multiple leaders, so that dropping a bomb won’t stop the war. (Maybe one Osama hiding in secret, giving commands to a dozen wannabe Genghis Khans who don’t mind getting to paradise too soon.)
Jiro’s fallacy is not in saying that the world is or has been just in this respect, but rather in implicitly saying it must be. I don’t think it’s a coincidence that liberal/secular/enlightenment nations are the most powerful today, but that fact doesn’t negate the point of the barbarian hypothetical.
I seriously doubt the viability of your Genghis Khan plan for modern fundamentalist Islam, seeing as that same M.O. was tried recently except starting with one of the world’s top industrial and scientific powers. But that’s a fact about our world, and the point of the barbarian example is more universal than that.
If the country of rationalists is attacked by a country of barbarians who are perfectly optimized for conquest, the rationalists will get conquered.
But there’s no way to get from here to there except by Omega coming down and constructing exactly the race of barbarians necessary for the hypothetical to work. And if you’re going to say that, there’s no point in referring to them as barbarians and describing their actions in terms like “believes in an afterlife” and “obeys orders” that bring to mind real-life human cultures; you may as well say that Omega is just manipulating each individual barbarian like a player micromanaging a video game and causing him to act in exactly the way necessary for the conquest to work best.
Except of course that if you say “rationalists could be conquered by a set of drones micromanaged by Omega”, without pretending that you’re discussing a real-world situation, most people (assuming they know what you’re talking about) would reply “so what?”
If the country of rationalists is attacked by a country of barbarians who are perfectly optimized for conquest, the rationalists will get conquered.
This is not inconsistent with the claim that, if the country of rationalists is attacked by a country of barbarians who are imperfectly optimized for conquest, the rationalists might get conquered, with the risk depending on how optimized the barbarians are. And, for that matter, the rationalist nation probably isn’t theoretically optimal either...
On balance, believing true things is an advantage, but there are other kinds of advantages which don’t automatically favor the rationalist side. Sheer numbers, for example.
This is not inconsistent with the claim that, if the country of rationalists is attacked by a country of barbarians who are imperfectly optimized for conquest, the rationalists might get conquered, with the risk depending on how optimized the barbarians are.
How imperfectly optimized, though? Imperfectly optimized like Omega controlling each barbarian but occasionally rolling the barbarian’s morale check, which fails on a 1 on a D100? Or imperfectly optimized like real life barbarians?
What about the Bolsheviks? Or the WW2-era Japanese?
Try the following obviously-unrealistic yet not-obviously-uninteresting hypothetical:
There are two approximately equal-strength warring tribes of barbarians, Tribe A and Tribe B. One day Omega sprinkles magic rationality dust on Tribe A, turning all of its members into rationalists. Tribe B is on the move towards their camp and will arrive in a few days. This is not enough time for Tribe A to achieve any useful scientific or economic advances, nor to accumulate a significant population advantage from non-stake-burning.
Can you see, in that hypothetical, how Eliezer’s points in the linked posts are important?
Or another approach: the quote about the rationalists losing says “Barbarians have advantages A, B, and C over rationalists.” Your response is “But rationalists have larger advantages X, Y, and Z over barbarians, so who cares?” Eliezer’s response is “Screw that; if barbarians have any advantages over rationalists, the ‘rationalists’ aren’t rational enough.” My hypothetical’s purpose is to try to control for X, Y, and Z so we have to think about A, B, and C.
My hypothetical’s purpose is to try to control for X, Y, and Z so we have to think about A, B, and C.
Advantages are usually advantages under a specific set of circumstances. If you “control” for X, Y, and Z by postulating a set of circumstances where they have no effect, then of course A, B, and C are better. The rationalists have a set of ideals that works better in a large range of circumstances and in more realistic circumstances. They will not, of course, work better in a situation which is contrived so that they lose, and that’s fairly uninteresting—it’s impossible to have ideals that work under absolutely all circumstances.
Think of being rationalist like wearing a seatbelt. Asking what if the rationalists’ advantages over the barbarians just happen not to apply is like asking what if not having a seatbelt would let you be thrown out of the car onto something soft but wearing a seatbelt gets you killed. I would not conclude that there is something wrong with seatbelts just because there are specific unlikely situations where wearing one might get you killed and not wearing one lets you survive.
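To make the seatbelt arithmetic explicit, here is a toy expected-harm comparison; every number in it is invented purely for illustration:

```python
# Toy expected-harm comparison for the seatbelt analogy.
# Every number here is invented purely for illustration.

p_ordinary_crash = 0.99   # crashes where a seatbelt helps
p_freak_crash = 0.01      # contrived crashes where a seatbelt hurts

harm_belted = {"ordinary": 10, "freak": 100}     # arbitrary harm units
harm_unbelted = {"ordinary": 100, "freak": 10}

expected_harm_belted = (p_ordinary_crash * harm_belted["ordinary"]
                        + p_freak_crash * harm_belted["freak"])
expected_harm_unbelted = (p_ordinary_crash * harm_unbelted["ordinary"]
                          + p_freak_crash * harm_unbelted["freak"])

print(expected_harm_belted)    # 10.9
print(expected_harm_unbelted)  # 99.1
# The belt loses in the freak case but wins in expectation, so the freak
# case is no indictment of seatbelts.
```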
“It’s impossible to have ideals that work under absolutely all circumstances.”
This is essentially the proposition Eliezer wrote those posts to refute.
A seat belt is a dumb tool which is extremely beneficial in real-world situations, but we can easily contrive unrealistic cases where it causes harm. The point of those posts is that rationality is ‘smart’; it’s like a seat belt that can analyze your trajectory and disengage itself iff you would come to less harm that way, so that even in the contrived case it doesn’t hurt to wear one.
I don’t read it that way (at least not the long quote, taken from the second article). The way I read it, he is trying to discredit what he thinks is fake rationalism, and is giving barbarians as an example of a major failure which proves that this rationalism is fake. (Pay attention to the use of scare quotes.) I believe my response—that everything fails in unrealistic situations and what he is describing is an unrealistic situation—is on point.
The intention is to discredit a ‘fake rationalism’ and illustrate ‘real rationalism’ in its place.
I think you’re right that if we’re talking about the USA fighting a conventional war against the Visigoths, the barbarians’ advantages aren’t even a rounding error. But there are other types of conflicts where that may not be the case. Some possible examples include warfare in other eras or settings, economic competition, democratic politics, and social movements.
Or even if it’s not a conflict—maybe we’d like to be able to have traditional rationalist virtues like empirically testing beliefs, and also be a group that cooperates on prisoners’ dilemmas? Barbarians are a dramatization to elevate these problems to an existential level, but the ultimate point is that rationalists shouldn’t have this kind of disadvantage even if they do have offsetting advantages that make them stronger than any group they’re likely to come into direct conflict with.
but the ultimate point is that rationalists shouldn’t have this kind of disadvantage
Why shouldn’t they? If the “should” means “I would be happier if”, maybe, but it’s not a law of the universe that rationalists must always have an advantage in all situations.
(If nothing else, what if the enemy just hates rationalists? I can even think of real-life enemies who do.)
Rationalists shouldn’t have those disadvantages, because there are a bunch of ways to mitigate them, which the post goes on to enumerate.
Part of Eliezer’s project is to enshrine a definition of ‘rational’ such that a decision that predictably puts you at a disadvantage is not rational.
Are you arguing that Eliezer-rationality is a poor fit for the word’s historic usage and you’d rather cultivate a different kind of rationality that doesn’t allow the kind of unpleasant anti-barbarian measures described in the post? One that forbids them, not conditionally on barbarians not being a major threat, but absolutely?
It’s pretty well established here that a phrase like ‘predictably puts you at a disadvantage’ is a probabilistic term; essentially it means ‘has a negative impact on your expected utility.’
If the definition was actually what you describe, then whether barbarians demonstrate that that kind of rationality “predictably puts you at a disadvantage” would partly depend on how likely you were to be attacked by barbarians of those types. (Because low probability events contribute less to your expected utility.)
In other words, if that’s what he meant, then optimized barbarians don’t count as an example. He’d have to either use realistic barbarians, or argue that his optimized barbarians are realistic enough that they make substantial contributions to the expected utility.
(Does “I assume he meant something different because if he meant that, his example is useless” count as steelmanning?)
He’s writing about what the rationalists should do conditional on facing that kind of barbarian. The point of the post is not that rationalist communities should all implement military drafts, but rather that they should be capable of such measures if circumstances require them.
He’s writing about what the rationalists should do conditional on facing that kind of barbarian.
That makes a bit more sense, but it still has flaws. The first flaw that comes to mind is that the rationalists may have precommitted to support human rights, and that the harm this precommitment causes them in the optimized-barbarian scenario is more than balanced by the benefit it provides by making them unwilling to violate human rights in other scenarios, where they think they are being attacked by sufficiently optimized barbarians but are not. Whether this precommitment is rational depends on the probability of the optimized-barbarian scenario, and the fact that it is undeniably harmful conditional on that scenario doesn’t mean it’s not rational.
(Edit: It is possible to phrase this in a non-precommitment way: “I know that I am running on corrupted hardware and a threat that seems to be optimized barbarians probably isn’t. The expected utility is calculated as P(false belief in optimized barbarians) × (benefit from not drafting everyone) − P(true belief in optimized barbarians) × (loss from being killed by barbarians). The conclusion is the same: just because the decision is bad for you in the case where the barbarians really are optimized doesn’t make the choice irrational.”)
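A minimal sketch of that calculation, with placeholder probabilities and utilities (none of these numbers come from the thread):

```python
# Sketch of the corrupted-hardware expected-utility argument above.
# All quantities are placeholders, not anything from the discussion.

p_true_belief = 0.01    # the apparent optimized barbarians are real
p_false_belief = 0.99   # the threat only looks like optimized barbarians

benefit_not_drafting = 10       # gained when the threat is illusory
loss_if_barbarians_real = 500   # lost if real optimized barbarians win

eu_refuse_draft = (p_false_belief * benefit_not_drafting
                   - p_true_belief * loss_if_barbarians_real)
print(eu_refuse_draft)  # ~4.9 > 0: refusing the draft maximizes expected
                        # utility even though it is disastrous in the
                        # unlikely branch where the barbarians are real.
```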
Right, but it’s a consequentialist argument, not a deontological one. “This measure is more likely to harm than to help” is a good argument; “On the outside view this measure is more likely to harm than to help, and we should go with the outside view even though the inside view seems really really convincing” can be a good argument; but you should never say “This measure would help, but we can’t take it because it’s ‘irrational’”.
In a sense, just saying “you shouldn’t be the kind of ‘rational’ that leads you to be killed by optimized barbarians” is already a consequentialist argument.
you should never say “This measure would help, but we can’t take it because it’s ‘irrational’”.
Yes, you should, if you have precommitted to not take that measure and you are then unlucky enough to be in the situation where the measure would help. Precommitment means that you can’t change your mind at the last minute and say “now that I know the measure would help, I’ll do it after all”.
The example I give above is not the best illustration, because there you precommit precisely because you can’t distinguish between the two scenarios. So imagine a variation: you can always identify optimized barbarians, but if you precommit to not drafting people when the optimized barbarians come, your government will be trusted more in the branches of the world where the optimized barbarians don’t come. Again, the measure of worlds with optimized barbarians is small. In that case, you should precommit, and then if the optimized barbarians do come, you say “drafting people would help, but I can’t break a precommitment.” If you have made the precommitment by self-modifying so as to be unwilling to take the measure, the unwillingness looks like a failure of rationality (“If only you weren’t unwilling, you’d survive, but since you are, you’ll die!”), when it really isn’t.
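The same back-of-the-envelope comparison works for this variation; again, every quantity is a placeholder:

```python
# Sketch of the precommitment variant; placeholder numbers again.
# Only the terms where the two policies differ are counted.

p_barbarians = 0.01      # measure of worlds with optimized barbarians

trust_benefit = 10       # extra trust earned in non-barbarian worlds by a
                         # government known to be bound not to draft
draft_value = 500        # value of retaining the option to draft in the
                         # worlds where the optimized barbarians do come

eu_precommit = (1 - p_barbarians) * trust_benefit   # 9.9
eu_keep_option = p_barbarians * draft_value         # 5.0

# Precommitting wins in expectation (9.9 > 5.0), so honoring it when the
# barbarians actually arrive is not a failure of rationality, even though
# it is disastrous in that branch.
```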
Precommitment is also a potentially good reason. I’m not sure what we disagree about anymore.
Is your objection to the Barbarians post that you fear it will be used to justify actually implementing unsavory measures like those it describes for use against barbarians?
If there is precommitment, it may be true that doing X would benefit you, you refuse to do X, and your refusal to do X is rational.
Eliezer said that if doing X would benefit you and you refuse to do X, that is not really rational.
Furthermore, whether X is rational depends on what the probability of the scenario was before it happened—even if the scenario is happening now. Eliezer as interpreted by you believes that if the scenario is happening now, the past probability of the scenario doesn’t affect whether X is rational. (That’s why he could use optimized barbarians as an example in the first place, despite its low probability.)
Also, I happen to think that many cases of people “irrationally” acting on principle can be modelled as a type of precommitment. Precommitment is just how we formalize “I’m going to shop at the store with the lower price, even if I have to spend more on gas to get there” or “we should allow free speech/free press/etc. and I don’t care how many terrorists that helps”.
TDT/UDT and the outside view are how we formalize precommitment.
It seems like the fastest way to get modded down is to disagree with Eliezer.
Part of Eliezer’s project is to enshrine a definition of ‘rational’ such that a decision that predictably puts you at a disadvantage is not rational.
Assuming that “predictably puts you at a disadvantage” means “there is at least one situation X such that I can predict that if X occurs, I would be at a disadvantage”, then I don’t agree with this definition. (For instance, if Biblical literalism is true, pretty much every rationalist would be at a disadvantage. Does that mean that every definition of “rational” is bad?)
It’s pretty well established here that a phrase like ‘predictably puts you at a disadvantage’ is a probabilistic term; essentially it means ‘has a negative impact on your expected utility.’
By the definition you assumed I was using, it would be true to say that buying a lottery ticket predictably increases your wealth. That is not a reasonable way to use words.
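A quick expected-value check, with made-up ticket price, odds, and prize, shows why that reading fails:

```python
# Why "predictably increases your wealth" has to be read in expectation.
# Ticket price, odds, and prize are made up for illustration.

ticket_price = 1.0
p_win = 1e-7
prize = 1_000_000.0

expected_change = p_win * prize - ticket_price
print(expected_change)  # ~-0.9: negative, so the purchase predictably
                        # *decreases* wealth in the expected-utility sense.

# Under the "at least one situation" reading, the winning branch alone
# would make the ticket "predictably" profitable, which is absurd.
```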
(Also, you’ve been disagreeing with Eliezer this whole thread, and only that last post has downvotes)
If you accept the local definition of rationality as winning (and not, say, as “thinking logically and deeply”) then, well, losing means you weren’t sufficiently rational :-/
I seriously doubt the viability of your Genghis Khan plan for modern fundamentalist Islam, seeing as that same M.O. was tried recently except starting with one of the world’s top industrial and scientific powers.
I’m not sure. The liberal world seems to have gotten “softer” since then. Compare the general reaction in the US to the death toll in Iraq (maybe one or two US soldiers a day) with the death toll in WWII.
Maybe the reason why the West hasn’t been conquered by Muslim fundamentalists yet is that Muslims don’t have an equivalent of Genghis Khan. Someone who would have the courage to conquer the nearest territory, horribly kill everyone who opposed them, let live those who didn’t (and make this fact publicly known), take some men and weapons from the conquered territory and use them to attack the next territory immediately, et cetera, spreading like a wildfire.
“Caliph” Abu Bakr al-Baghdadi and his group ISIS have been behaving exactly like this. The group is still quite young, but doesn’t yet appear able to take on a Western military.
This feels to me like a just world fallacy, or perhaps choosing the most convenient world.
And yet, by definition, a group who are better at rationality win more often. We ought to expect that rational civilizations can beat irrational ones, because rationality is systematized cross-domain winning.
by definition, a group who are better at rationality win more often
Well, there is this “valley of bad rationality”, where being more rational about one part of the problem but not yet more rational about another part can make people win less.
Sometimes I feel we are there at the societal level. We have smart individuals, we have science, we fly to the moon, etc. However, superstition and blind hate can be an efficient tool for coordinating a group to fight against another group. We don’t use this tool much (because it doesn’t fit well with rationality and science), but we don’t have an equally strong replacement. Also, only a few people in our civilization do the rationality and science. So even if there is a rationality-based defense, most of our society is too stupid to use it efficiently. On the scale from “barbarians” to “Bayesians”, most of our society is somewhere in the middle: not barbaric enough, but still far from rational.
A group that is better at rationality will win more often, but winning more often is not the same thing as “winning in a superset of the situations in which the irrational win”.
By the same reasoning which says that fundamentalists could do better with more efficient methods of conquest, they could do better with more efficient methods of making peace, too. They won’t do as well as with conquest, but they’ll do better than they are doing now. Yet they don’t.
Barbarism is not optimized for conquest. It’s optimized for supporting a set of social structures. Those social structures make barbarians more dangerous as conquerors than the average society, but they’re still not optimized for conquest; there are things which would make conquest more efficient which they would not do.
(To use just one example, for a country to embark on conquest and use the men from the conquered country to continue conquering more countries, they’d have to grant equal rights to conquered people who agreed to work with them. Rome did that except in a few rare but famous cases. So did the Mongols. But Muslim fundamentalists can’t give non-Muslims or rival Muslims equal rights without no longer being Muslim fundamentalists.)
To use just one example, for a country to embark on conquest and use the men from the conquered country to continue conquering more countries, they’d have to grant equal rights to conquered people who agreed to work with them.
Not necessarily. Muslims, in particular, have a history of using slave soldiers to good effect.
But Muslim fundamentalists can’t give non-Muslims or rival Muslims equal rights without no longer being Muslim fundamentalists.
You do realize it’s possible to convert to fundamentalist Islam?
Muslims, in particular, have a history of using slave soldiers to good effect.
I seem to recall, and a glance over the Wikipedia articles suggests, that the Mamluk and Janissary systems involved raising (enslaved) boys into a military environment from a fairly young age. These boys might come from subjugated territories, but they’d in effect have been part of the dominant culture for much of their lives: it’s not a system that could be used to quickly convert conquered territories into additional manpower.
That said, it hasn’t been unusual for empires, modern and otherwise, to make substantial use of auxiliary forces drawn from client states. The Roman military probably relied on them as much as they did on the legions, or more in the late empire.
The late Roman Empire wasn’t exactly successful at conquering anything, or even at keeping the Empire from falling apart.
You do realize it’s possible to convert to fundamentalist Islam?
Yes, but requiring that soldiers do so makes the process of conquest less optimized, since it’s easier for obvious reasons to get soldiers without this requirement than with it. (The same goes for using slaves.)
Yes, but requiring that soldiers do so makes the process of conquest less optimized, since it’s easier for obvious reasons to get soldiers without this requirement than with it.
You seem to be focusing solely on cost; the difference between benefit and cost is what matters, and the benefits of a fighting force with shared values (particularly shared religious ones) are many and obvious.
By that reasoning, it’s the Romans and the Mongols who are un-optimized for conquest.
The Mongols had the advantage of recruiting from a pool of steppe nomads with similar values.
The Roman Republic conquered the Mediterranean basin with an army consisting of Italians who were required to adopt Roman values before joining. Later the Roman legion adopted the looser system you described. Subsequently, Roman legions would spend nearly as much effort fighting other Roman legions in civil wars as fighting Rome’s enemies.
For a counterpoint, look at the speed and magnitude of the original spread of Islam in the VII-VIII centuries.
Also there is Iran.
I don’t think the spread of Islam many centuries ago counts. Fanaticism isn’t as much of a disadvantage when fighting medieval societies as it is when fighting modern ones.
Why is that so?
To summarize: Fanaticism keeps the culture from escaping the dark ages. If everyone is in the dark ages anyway, not being able to escape the dark ages isn’t much of a disadvantage.
That looks to me like one of those sentences which sound pretty but don’t actually mean much.
In your comment upthread you listed things which make a barbarian society “uncompetitive”. They apply to medieval societies as well. Essentially, you would expect the non-fanatic society to be richer, have better technology, and be governed more effectively. That holds in any epoch (as long as we don’t get too far into the Stone Age :-/).
When Islam erupted out of the Arabian Peninsula, the “fanatics” easily took over huge—amazingly huge—territories. And it wasn’t just pillage-and-burn, they conquered the lands and established their own rule.
Essentially, you would expect the non-fanatic society to be richer, have better technology, and be governed more effectively.
Why would I expect this when the society existed hundreds of years ago? The point is that back then, everyone lacked many of the things that fanaticism would cause a society to lack. The fanatics are not at such a disadvantage under such circumstances. The loss in efficiency from it taking weeks to communicate between distant parts of your empire is going to make the loss in efficiency from having a theocracy look like noise. The disadvantage of not getting investors in your country won’t matter when there’s no international investment anyway. The disadvantage of having little in the way of science and engineering won’t matter if there’s hardly any science yet and engineering is at the stage of building bridges rather than launching satellites.
The point is that back then, everyone lacked many of the things that fanaticism would cause a society to lack.
Really? Consider trade—a major factor in a society’s wealth and survival for the last several thousand years. The fanatic barbarians wouldn’t trade, would they?
You don’t think technology mattered before the Industrial Revolution? Oh, but it did. From bronze weapons to early firearms, an army with a technological edge had a big advantage.
Governance didn’t matter in ancient and medieval societies? Do you actually believe that?
Technology mattered before the Industrial Revolution. The kind of technology that fanatics are bad at did not matter before the Industrial Revolution, however, because nobody had it, fanatic or not.
That sort of thing is pretty much the reason that the West hasn’t been conquered by Muslim fundamentalists yet.
Another reason: many members of our military do have the courage of the Spartans. U.S. soldiers don’t put on suicide vests to kill children, but they do fall on grenades and hold hopeless positions under fire so their friends can escape death.
I see competition among different groups of people, with those able to overcome their collective action problems gaining power and resources.
I see what you mean, but in a military conflict it seems that any gain in power or resources is the result of another group losing power or resources (a zero-sum game). I guess that trade/commerce might be a positive-sum example where competition is still involved but on the whole there is societal benefit.