If the country of rationalists is attacked by a country of barbarians who are perfectly optimized for conquest, the rationalists will get conquered.
But there’s no way to get from here to there except by Omega coming down and constructing exactly the race of barbarians necessary for the hypothetical to work. And if you’re going to say that, there’s no point in referring to them as barbarians and describing their actions in terms like “believes in an afterlife” and “obeys orders” that bring to mind real-life human cultures; you may as well say that Omega is just manipulating each individual barbarian like a player micromanaging a video game and causing him to act in exactly the way necessary for the conquest to work best.
Except of course that if you say “rationalists could be conquered by a set of drones micromanaged by Omega”, without pretending that you’re discussing a real-world situation, most people (assuming they know what you’re talking about) would reply “so what?”
If the country of rationalists is attacked by a country of barbarians who are perfectly optimized for conquest, the rationalists will get conquered.
This is not inconsistent with the claim that, if the country of rationalists is attacked by a country of barbarians who are imperfectly optimized for conquest, the rationalists might get conquered, with the risk depending on how optimized the barbarians are. And, for that matter, the rationalist nation probably isn’t theoretically optimal either...
On balance, believing true things is an advantage, but there are other kinds of advantages which don’t automatically favor the rationalist side. Sheer numbers, for example.
This is not inconsistent with the claim that, if the country of rationalists is attacked by a country of barbarians who are imperfectly optimized for conquest, the rationalists might get conquered, with the risk depending on how optimized the barbarians are.
How imperfectly optimized, though? Imperfectly optimized like Omega controlling each barbarian but occasionally rolling the barbarian’s morale check, which fails on a 1 on a D100? Or imperfectly optimized like real-life barbarians?
What about the Bolsheviks? Or the WW2-era Japanese?
Try the following obviously-unrealistic yet not-obviously-uninteresting hypothetical:
There are two approximately equal-strength warring tribes of barbarians, Tribe A and Tribe B. One day Omega sprinkles magic rationality dust on Tribe A, turning all of its members into rationalists. Tribe B is on the move towards their camp and will arrive in a few days. This is not enough time for Tribe A to achieve any useful scientific or economic advances, nor to accumulate a significant population advantage from non-stake-burning.
Can you see, in that hypothetical, how Eliezer’s points in the linked posts are important?
Or another approach: the quote about the rationalists losing says “Barbarians have advantages A, B, and C over rationalists.” Your response is “But rationalists have larger advantages X, Y, and Z over barbarians, so who cares?” Eliezer’s response is “screw that, if barbarians have any advantages over rationalists, the ‘rationalists’ aren’t rational enough”. My hypothetical’s purpose is to try to control for X, Y, and Z so we have to think about A, B, and C.
My hypothetical’s purpose is to try to control for X, Y, and Z so we have to think about A, B, and C.
Advantages are usually advantages under a specific set of circumstances. If you “control” for X, Y, and Z by postulating a set of circumstances where they have no effect, then of course A, B, and C are better. The rationalists have a set of ideals that works better across a large range of circumstances, and in more realistic ones. Those ideals will not, of course, work better in a situation which is contrived so that they lose, and that’s fairly uninteresting—it’s impossible to have ideals that work under absolutely all circumstances.
Think of being rationalist like wearing a seatbelt. Asking what if the rationalists’ advantages over the barbarians just happen not to apply is like asking what if not wearing a seatbelt would let you be thrown out of the car onto something soft, while wearing one gets you killed. I would not conclude that there is something wrong with seatbelts just because there are specific unlikely situations where wearing one might get you killed and not wearing one lets you survive.
“It’s impossible to have ideals that work under absolutely all circumstances.”
This is essentially the proposition Eliezer wrote those posts to refute.
A seat belt is a dumb tool which is extremely beneficial in real-world situations, but we can easily contrive unrealistic cases where it causes harm. The point of those posts is that rationality is ‘smart’; it’s like a seat belt that can analyze your trajectory and disengage itself iff you would come to less harm that way, so that even in the contrived case it doesn’t hurt to wear one.
I don’t read it that way (at least not the long quote, taken from the second article). The way I read it, he is trying to discredit what he thinks is fake rationalism, and is giving barbarians as an example of a major failure which proves that this rationalism is fake. (Pay attention to the use of scare quotes.) I believe my response—that everything fails in unrealistic situations and what he is describing is an unrealistic situation—is on point.
The intention is to discredit a ‘fake rationalism’ and illustrate ‘real rationalism’ in its place.
I think you’re right that if we’re talking about the USA fighting a conventional war against the Visigoths, the barbarians’ advantages aren’t even a rounding error. But there are other types of conflicts where that may not be the case. Some possible examples include warfare in other eras or settings, economic competition, democratic politics, and social movements.
Or even if it’s not a conflict—maybe we’d like to be able to have traditional rationalist virtues like empirically testing beliefs, and also be a group that cooperates on prisoners’ dilemmas? Barbarians are a dramatization to elevate these problems to an existential level, but the ultimate point is that rationalists shouldn’t have this kind of disadvantage even if they do have offsetting advantages that make them stronger than any group they’re likely to come into direct conflict with.
but the ultimate point is that rationalists shouldn’t have this kind of disadvantage
Why shouldn’t they? If the “should” means “I would be happier if”, maybe, but it’s not a law of the universe that rationalists must always have an advantage in all situations.
(If nothing else, what if the enemy just hates rationalists? I can even think of real-life enemies who do.)
Rationalists shouldn’t have those disadvantages, because there are a bunch of ways to mitigate them, which the post goes on to enumerate.
Part of Eliezer’s project is to enshrine a definition of ‘rational’ such that a decision that predictably puts you at a disadvantage is not rational.
Are you arguing that Eliezer-rationality is a poor fit for the word’s historic usage and you’d rather cultivate a different kind of rationality that doesn’t allow the kind of unpleasant anti-barbarian measures described in the post? One that forbids them, not conditionally on barbarians not being a major threat, but absolutely?
It’s pretty well established here that a phrase like ‘predictably puts you at a disadvantage’ is a probabilistic term; essentially it means ‘has a negative impact on your expected utility.’
If the definition were actually what you describe, then whether barbarians demonstrate that that kind of rationality “predictably puts you at a disadvantage” would partly depend on how likely you were to be attacked by barbarians of those types. (Because low-probability events contribute less to your expected utility.)
In other words, if that’s what he meant, then optimized barbarians don’t count as an example. He’d have to either use realistic barbarians, or argue that his optimized barbarians are realistic enough that they make substantial contributions to the expected utility.
(Does “I assume he meant something different because if he meant that, his example is useless” count as steelmanning?)
He’s writing about what the rationalists should do conditional on facing that kind of barbarian. The point of the post is not that rationalist communities should all implement military drafts, but rather that they should be capable of such measures if circumstances require them.
He’s writing about what the rationalists should do conditional on facing that kind of barbarian.
That makes a bit more sense, but it still has flaws. The first flaw that comes to mind: the rationalists may have precommitted to supporting human rights, and the harm this precommitment causes them in the optimized-barbarian scenario may be more than balanced by the benefit it produces by making them unwilling to violate human rights in other scenarios, where they think they are being attacked by sufficiently optimized barbarians but are not. Whether this precommitment is rational depends on the probability of the optimized-barbarian scenario, and the fact that it is undeniably harmful conditional on that scenario doesn’t mean it’s not rational.
(Edit: It is possible to phrase this in a non-precommitment way: “I know that I am running on corrupted hardware, and a threat that seems to be optimized barbarians probably isn’t. The expected utility is calculated from P(false belief in optimized barbarians) × (benefit from not drafting everyone) − P(true belief in optimized barbarians) × (loss from being killed by barbarians).” The conclusion is the same: just because the decision is bad for you in the case where the barbarians really are optimized doesn’t make the choice irrational.)
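To make that calculation concrete, here is a minimal numeric sketch in Python. The probabilities and payoffs are invented purely for illustration and aren’t taken from the posts; the only point is that when P(true belief in optimized barbarians) is small, the term it multiplies barely moves the total.

```python
# A minimal numeric sketch of the expected-utility comparison above.
# All probabilities and payoffs are invented for illustration only.

p_true_belief = 0.001                # P(the apparent optimized barbarians are real)
p_false_belief = 1 - p_true_belief   # P(it only looks that way)

benefit_not_drafting = 10.0          # utility gained in the (common) false-alarm case
loss_if_conquered = 1000.0           # utility lost in the (rare) real-threat case

# Expected utility of refusing the draft, relative to drafting everyone:
eu_refuse = p_false_belief * benefit_not_drafting - p_true_belief * loss_if_conquered
print(eu_refuse)  # roughly 8.99 > 0: refusing the draft maximizes expected utility,
                  # even though it is disastrous in the rare world where the threat is real.
```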
Right, but it’s a consequentialist argument, not a deontological one. “This measure is more likely to harm than to help” is a good argument; “On the outside view this measure is more likely to harm than to help, and we should go with the outside view even though the inside view seems really really convincing” can be a good argument; but you should never say “This measure would help, but we can’t take it because it’s ‘irrational’”.
In a sense, just saying “you shouldn’t be the kind of ‘rational’ that leads you to be killed by optimized barbarians” is already a consequentialist argument.
you should never say “This measure would help, but we can’t take it because it’s ‘irrational’”.
Yes, you should, if you have precommitted to not take that measure and you are then unlucky enough to be in the situation where the measure would help. Precommitment means that you can’t change your mind at the last minute and say “now that I know the measure would help, I’ll do it after all”.
The example I give above is not the best illustration, since there the precommitment exists because you can’t distinguish between the two scenarios, so imagine a variation: you can always identify optimized barbarians, but if you precommit to not drafting people when the optimized barbarians come, your government will be trusted more in branches of the world where the optimized barbarians don’t come. Again, the measure of worlds with optimized barbarians is small. In that case, you should precommit, and then if the optimized barbarians come, you say “drafting people would help, but I can’t break a precommitment”. If you have made the precommitment by self-modifying so as to be unwilling to take the measure, the unwillingness looks like a failure of rationality (“If only you weren’t unwilling, you’d survive, but since you are, you’ll die!”), when it really isn’t.
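Here is the same kind of toy sketch for this variation, again with made-up numbers, comparing the expected utility of precommitting against not precommitting across barbarian and no-barbarian branches.

```python
# A toy expected-utility comparison for the precommitment variation above.
# All numbers are invented for illustration only.

p_barbarians = 0.001               # measure of worlds where optimized barbarians actually come
p_no_barbarians = 1 - p_barbarians

trust_bonus = 5.0                  # extra utility from being trusted, earned in every no-barbarian world
loss_from_no_draft = 1000.0        # extra loss in a barbarian world because you cannot draft anyone

# Baseline (no precommitment): no trust bonus, but full flexibility if barbarians come.
eu_no_precommit = 0.0
# Precommitting: gain trust in ordinary worlds, eat the loss in the rare barbarian worlds.
eu_precommit = p_no_barbarians * trust_bonus - p_barbarians * loss_from_no_draft

print(eu_precommit, eu_no_precommit)  # roughly 3.995 vs 0.0
# Precommitting wins in expectation, even though, conditional on the barbarians
# actually showing up, the precommitted agent visibly does worse.
```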
Precommitment is also a potentially good reason. I’m not sure what we disagree about anymore.
Is your objection to the Barbarians post that you fear it will be used to justify actually implementing unsavory measures like those it describes for use against barbarians?
If there is precommitment, it may be true that doing X would benefit you, you refuse to do X, and your refusal to do X is rational.
Eliezer said that if doing X would benefit you and you refuse to do X, that is not really rational.
Furthermore, whether X is rational depends on what the probability of the scenario was before it happened—even if the scenario is happening now. Eliezer as interpreted by you believes that if the scenario is happening now, the past probability of the scenario doesn’t affect whether X is rational. (That’s why he could use optimized barbarians as an example in the first place, despite its low probability.)
Also, I happen to think that many cases of people “irrationally” acting on principle can be modelled as a type of precommitment. Precommitment is just how we formalize “I’m going to shop at the store with the lower price, even if I have to spend more on gas to get there” or “we should allow free speech/free press/etc. and I don’t care how many terrorists that helps”.
TDT/UDT and the outside view are how we formalize precommitment.
It seems like the fastest way to get modded down is to disagree with Eliezer.
Part of Eliezer’s project is to enshrine a definition of ‘rational’ such that a decision that predictably puts you at a disadvantage is not rational.
Assuming that “predictably puts you at a disadvantage” means “there is at least one situation X such that I can predict that if X occurs, I would be at a disadvantage”, then I don’t agree with this definition. (For instance, if Biblical literalism is true, pretty much every rationalist would be at a disadvantage. Does that mean that every definition of “rational” is bad?)
It’s pretty well established here that a phrase like ‘predictably puts you at a disadvantage’ is a probabilistic term; essentially it means ‘has a negative impact on your expected utility.’
By the definition you assumed I was using, it would be true to say that buying a lottery ticket predictably increases your wealth. That is not a reasonable way to use words.
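To spell that out with toy numbers (the ticket price, jackpot, and odds below are all made up):

```python
# Why "predictably increases your wealth" has to be read in expectation:
# a toy lottery with invented numbers.

ticket_price = 2.0
jackpot = 1_000_000.0
p_win = 1 / 10_000_000

expected_change_in_wealth = p_win * jackpot - ticket_price
print(expected_change_in_wealth)  # roughly -1.9: buying the ticket predictably loses money,
# even though there exists a (low-probability) situation in which it makes you rich.
```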
(Also, you’ve been disagreeing with Eliezer this whole thread, and only that last post has downvotes)
If you accept the local definition of rationality as winning (and not, say, as “thinking logically and deeply”) then, well, losing means you weren’t sufficiently rational :-/