Weird. What did we ever do to him? Aside from compliment him on his effectiveness, I mean?

I don’t feel like digging up the whole sordid backstory (though this would be a good starting point), but I get the impression he’s upset that we’re not a vector for his politics.
That whole “mindkiller” thing really rubs some people the wrong way; for such a person, politics are so bound up with ideals of rationality that staying away from them looks not just ignorant but willfully and maliciously so. (Compare the “reality-based community” on the left, or Eric Raymond’s “anti-idiotarianism” on the right. Not that we’re entirely innocent of this sort of thinking ourselves.) Combine that with the absurdity heuristic and our bad habit of parochialism in some areas, and you’ve got most of the ingredients for a hatchet job.
I don’t feel like digging up the whole sordid backstory (though this would be a good starting point), but I get the impression he’s upset that we’re not a vector for his politics.
More specifically, he’s upset that we’re willing to tolerate people who point out that many of his ideology’s claims are in fact falsifiable and false.
That whole “mindkiller” thing really rubs some people the wrong way; for such a person, politics are so bound up with ideals of rationality that staying away from them looks not just ignorant but willfully and maliciously so. (Compare the “reality-based community” on the left, or Eric Raymond’s “anti-idiotarianism” on the right. Not that we’re entirely innocent of this sort of thinking ourselves.) Combine that with the absurdity heuristic and our bad habit of parochialism in some areas, and you’ve got most of the ingredients for a hatchet job.
I think it’s more that the things Arthur is responding to are, in fact, very racist.
Reading just his comments (I control-F’d ‘arthur chu’ on the second linked post), it seems that he clearly understands the dynamics of racial privilege, the concept of minstrelsy, and how those relate to contemporary social justice struggles in macro and micro (American belly dancers), but seemingly none of the people he is communicating with do.
This is a quote I can identify highly with:
That post [the one debunking false rape statistics] is exactly my problem with Scott. He seems to honestly think that it’s a worthwhile use of his time, energy and mental effort to download evil people’s evil worldviews into his mind and try to analytically debate them with statistics and cost-benefit analyses.
He gets mad at people whom he detachedly intellectually agrees with but who are willing to back up their beliefs with war and fire rather than pussyfooting around with debate-team nonsense.
There is a saying in the anarchist community, dating back to the Spanish Civil War: No Parasan. It means “No platform” and maps to “no platform for fascists.” It’s why groups like the “national anarchists” and other third-positionist fascist groups get kicked out (in a literal sense, their propaganda destroyed and their bodies thrown out) of anarchist events.
No Parasan actually has a very compelling social and cognitive explanation: The more you detachedly argue with fascists, the more normal and accepted their politics become. The BNP is a great example of this. And as it becomes viewed as an acceptable alternative, fascism gains support (which is why the BNP could succeed as a novel fascist organization in the UK).
When discussing existing oppressive social structures such as patriarchy, white supremacy, colonialism, and capitalism, attempting to dispassionately argue causes this phenomenon at best, and at worst actively silences the people victimized by such structures.
Scott Alexander is not an ally to rape survivors because he has normalized their oppression. He has tried to debate with a monster instead of annihilating it.
I think the real reason Arthur Chu is so pissed off at LW is that LW has all the mental tools available to recognize that discussing certain things can be harmful when the topic is something rich white men care about: typically AI risk. They get the idea that you only tell a particular spell to wizards strong enough to handle it, but then they disregard it. In a word, Less Wrong is blind to its privilege.
This isn’t surprising, because rich white men in general are blind to their privilege, and even relatively well-off women or people of color can be blind to their oppression if they’re able to buy their way out of it, but that doesn’t mean those concepts don’t exist.
Less Wrong is right that politics is the mind killer. Less Wrong just had its mind killed early on, when it decided it would only ever accept the politics of the status quo. If Less Wrong had been around in the 1820s it would have supported slavery. If it had been around in the 1940s it would have supported Jim Crow. The fact that Less Wrong can’t collectively see this is infuriating to some people, including, to an extent, myself.
I’m not personally about to say Less Wrong is a religion or a robot cult or whatever; it sometimes has hilarious views that are pretty normal for any internet community. But as someone who identifies problems with the contemporary political order and wants to change them, Less Wrong cannot be my comrade in this fight.
As an aside, unrelated to anything I’ve said previously, anyone should agree that LW habitually loses even though it should win. CFAR is a good attempt and the most winning thing I think I’ve seen come out of Less Wrong, but it’s far from what I feel LW could produce. Maybe the people who could produce those things aren’t on Less Wrong because they’re producing them, but if they knew so much of their own success came from knowing basic rationality, why wouldn’t they recruit here?
Edit: As another aside:
In other words, if a fight is important to you, fight nasty. If that means lying, lie. If that means insults, insult. If that means silencing people, silence.
Holy shit yes! If you have anything to protect use all of your available strength to protect it! Shut up and multiply, think for at least five minutes about the problem, apply every ounce of your technique and then win. I truly and sincerely hope that every last person working for MIRI would kill and die to bring about friendly AI. I hope that if I had the choice of sacrificing myself so that all of humanity could live forever I would take it. If a fight is so important that you must win it, you must win it. You can win by the long sword, or you can win by the short sword (to quote Musashi) but you must win.
Any rationalist should see this trivially. It is a failure of Scott’s that he hasn’t, though I suppose he could be appealing publicly to a widely-held principle in order to win this particular debate.
There are some interesting ideas in what you wrote, but unfortunately, the whole comment is written in a mindkilling way. Yeah, that’s probably your point, so… uhm...
he clearly understands the dynamics of racial privilege, the concept of minstrelsy, and how those relate to contemporary social justice struggles in macro and micro (American belly dancers), but seemingly none of the people he is communicating with do.
Well, one way to deal with people who don’t understand what you are trying to tell them is to explain. It’s not the only way—for example, you could also bully them into submission—but it is the way that most LW readers probably prefer. So, if this cause is so important to you, why don’t you write an article here, explaining what Arthur Chu gets and we don’t? And by explaining, I mean… explaining.
If Less Wrong had been around in the 1820′s it would have supported slavery.
More likely, it would discourage object-level debates about slavery (both for and against), so that Americans from both North and South could debate about something else: rationality, etc.
By the way, libertarians are not exactly supporters of the status quo. (By which I am not suggesting that libertarians are most frequent here; but this is what LW is frequently accused of.)
When discussing existing oppressive social structures such as patriarchy, white supremacy, colonialism, and capitalism, attempting to dispassionately argue causes this phenomenon at best, and at worst actively silences the people victimized by such structures.
How about other oppressive social structures?
Let me give you an example. Every time I go to a LW meetup in nearby Vienna, I cross a line that 25 years ago would have gotten me killed. And I usually remind myself of that fact, and of how happy I am to be able to go to Vienna like it’s no big deal, when so many people got killed for trying.
There is a memorial to all those killed people, on the border between Slovakia and Austria. I happen to visit it about once a month; not for political reasons, it just happens to be on my favorite walking path in nature. You know, countries usually protect their borders to prevent other people from getting in. But socialist countries protected their borders to prevent their own people from running away. In socialism, people were considered property of their country. When they tried to escape their masters, that was a similar kind of crime as when a black person tried to run away from their master. And if they succeeded in running away, their families were punished instead. To legally leave a socialist country, e.g. on a vacation, you had to leave hostages at home. It happened when I was a child; it happened in a place where I still live. The second most frequent cause of death on the borders of socialist countries was allegedly suicide among soldiers who could no longer bear the moral burden of having to kill all those innocent people.
So, according to your arguments, what exactly is it that I am supposed to do about it? How exactly am I supposed to react to you? From my point of view, you are a blind and evil person. Should I scream “Freedom!”, try to accuse you of random bad things, say you should be banned from LW, say that LW is a horrible website if it does not ban you immediately? Should I even use lies to support my case, because the most important thing is to win, and to destroy all those murderous socialism-sympathisers? Because otherwise I am dishonoring the memory of the millions who were tortured and murdered in the name of… things you defend, kind of. Is that the right thing to do?
The thing is, I understand this is not how it “feels from inside” to you. Which makes things a lot more complicated. Welcome to the real world, where the good things are not achieved by sorting people into the “good” ones and the “evil” ones, and then attacking the “evil” ones by whatever means available.
More likely, it would discourage object-level debates about slavery (both for and against), so that Americans from both North and South could debate about something else: rationality, etc.
Notice your confusion. Either your model is false or the data is wrong. You’ve decided the data (what I told you) was wrong.
But could your model be wrong?
How would such a policy support slavery? Why do I think that? Pretend that I am as intelligent as you and try to determine what would make you believe that.
Should I even use lies to support my case, because the most important thing is to win, and to destroy all those murderous socialism-sympathisers? Because otherwise I am dishonoring the memory of the millions who were tortured and murdered in the name of… things you defend, kind of. Is that the right thing to do?
Yes. You should. You are a rationalist and you should win. Never deceive yourself that losing is appropriate! It is only ever appropriate to win. It is only ever good to win. Losing is never good.
If you find this too complicated, think about it in the simplest possible terms. The truth is the truth and to win is to win.
If you truly oppose me to the same extent Arthur Chu opposes casual racism on the Internet and I oppose the concept of capitalism you should do whatever you can to win. If you in your full art as a rationalist decide that is the path to winning you must take that path.
But I don’t think you do have that level of commitment in you. There’s a very large difference between identifying a social suboptimality and truly having something to protect. And I think that even in the face of all the things you said, all the very true and very real horrors of Marxism, you could not even summon the internal strength to protect yourself against that.
This is a sort of resolve that Less Wrong does not teach. It’s only found in true adversity, in situations where you have something to protect and you must fight to protect it.
I do not think you have fought that fight. Very few people have.
Yudkowsky says the Art cannot be for itself alone, or it will lapse into a wastefulness. This is what has happened to Less Wrong.
You provided data of your imagination, I provided data of mine… there is no way to determine the outcome experimentally… even if we asked Eliezer, he couldn’t know for sure what exactly Eliezer1820 would do… is there a meaningful way to settle this? I don’t see any.
I’m sorry, are you aware of the reasons why I think what I do? Have you thought about this for even one minute?
If you’re truly incapable of reconstructing that then maybe there isn’t anything we can do. But I don’t believe you’re incapable.
I think the scenario you describe is exactly what would happen with 1820′s LW. I also think that provides material support for slavery. I also think that when slavery was brought up, probably it would be similarly treated to discussions of racism now.
Informed by that, think about it for five minutes, and PM me your answer. We can go from there.
Yes, you’re part of a movement that believes lying is justified for its cause and as a result has started to believe its own lies.
There is a saying in the anarchist community, dating back to the Spanish Civil War: No Parasan. It means “No platform” and maps to “no platform for fascists.”
/facepalm
The saying that goes back to the Spanish Civil War is No Pasaran, which means “They shall not pass”. It was used by Dolores Ibarruri in her famous speech and it is still popular in some anti-fascist circles. See e.g. here.
Holy shit yes! If you have anything to protect use all of your available strength to protect it! Shut up and multiply, think for at least five minutes about the problem, apply every ounce of your technique and then win. I truly and sincerely hope that every last person working for MIRI would kill and die to bring about friendly AI.
So, did you get your gun and bullets yet? How goes your list of people who will be the first against the wall?
72c9d439eaa864ff4f68583cfa6e80f0ee5e60b66596cad18d6e9eb0dbfd6f0aa9e33cc92629e55b5fd5dfa7eeeabbbde95ee383df3175a69ee701d9a45c0117
And what am I supposed to do with these 64 bytes?
Verify his authorship of a posthumously published rant after he goes Kaczynski?
The list of enemies? X-D 512 bits is too short for a reasonable public key and too long for a symmetric key. Just right for a standard hash, though. Hashes don’t verify authorship, of course...
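For what it’s worth, the point about hashes and authorship is easy to make concrete. Below is a minimal Python sketch (the text and key are hypothetical placeholders, not anything actually posted in this thread): a bare SHA-512 digest is exactly those 64 bytes and only commits you to a piece of text, since anyone holding the text can compute it; proving authorship needs a secret, e.g. a keyed MAC or a real signature.

```python
import hashlib
import hmac

# A bare SHA-512 digest is 64 bytes (512 bits). Anyone who has the exact
# text can compute it, so publishing the digest commits you to the text
# without proving who wrote it.
document = b"hypothetical placeholder for the committed-to text"
digest = hashlib.sha512(document).hexdigest()
print(len(bytes.fromhex(digest)))  # 64

# To open the commitment later, you reveal the text and anyone can re-hash
# it and compare against the digest published earlier.
assert hashlib.sha512(document).hexdigest() == digest

# Tying the commitment to a particular person requires a secret only they
# hold: a keyed MAC as below, or better, a public-key signature scheme
# (RSA, Ed25519, ...), which is what actually verifies authorship.
secret_key = b"hypothetical secret known only to the author"
tag = hmac.new(secret_key, document, hashlib.sha512).hexdigest()
print(tag[:16], "...")
```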
If everyone lies for their preferred cause, those who see through the lies trust no one, and those who don’t see through them act on false information.
If everyone believes enemies of their preferred cause should be driven out of society, as many societies as causes arise, and none can so much as trade with another.
If everyone believes their opponents must be purged, everyone purges everyone else.
If everyone decides they must win by the sword, the Hobbesian state of nature results.
(Oh, hell, first I realize Kant was not an idiot, and now I realize Hobbes was not an idiot. Of course the state of nature is ahistorical—that’s part of the point!)
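As a toy illustration of the dynamic described in the four sentences above, here is a small iterated prisoner’s dilemma simulation in Python, using the standard textbook payoffs rather than anything from this thread: when everyone keeps the cooperative norm, both sides do well; once everyone adopts “always defect”, everyone ends up locked into the bad equilibrium.

```python
# Toy iterated prisoner's dilemma: compare a population that keeps the
# cooperative norm (tit-for-tat) with one where everyone fights nasty
# (always defect). Standard payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each side reacts to the other's past moves
        move_b = strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): the cooperative norm holds
print(play(always_defect, always_defect))  # (100, 100): everyone purges everyone
```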
Breaking down the Schelling fence around the social norms built up over the state of nature is an effective way to gain power, but once you gain power, you have to make sure that the fence is restored—and that’s hard to do. It’s easier to destroy than to build. You can’t weigh winning by the sword against the status quo; you have to weigh one action with p probability of winning hard enough to restore the fence (and q probability of having your burning of the accumulated arrangements / store of knowledge be net-beneficial for whatever definition of ‘net-beneficial’ you’re using, vs. 1-q probability of having them not be) and 1-p probability of just breaking the fence.
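To put the weighing in the previous paragraph into numbers, here is a minimal sketch of the expected-value comparison it gestures at; the probabilities and utilities below are made-up placeholders, not estimates of anything real.

```python
# Expected-utility comparison sketched above: with probability p you win hard
# enough to attempt restoring the fence; conditional on that, the burning of
# the accumulated arrangements is net-beneficial with probability q; with
# probability 1-p you have simply broken the fence.
def expected_utility_of_fighting(p, q, u_restored_good, u_restored_bad, u_fence_broken):
    win_branch = q * u_restored_good + (1 - q) * u_restored_bad
    return p * win_branch + (1 - p) * u_fence_broken

u_status_quo = 0.0  # baseline: leave the fence standing
ev = expected_utility_of_fighting(p=0.3, q=0.5,
                                  u_restored_good=10.0,
                                  u_restored_bad=-5.0,
                                  u_fence_broken=-20.0)
print(ev, "vs status quo", u_status_quo)  # -13.25: with these numbers the gamble loses
```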
In reality, of course, fence-wreckers understand that the opposite side wants to preserve the fence, and use that to their advantage. (The rain it raineth on the just / And also on the unjust fella / But chiefly on the just, because / The unjust hath the just’s umbrella.) Alinsky understood this: you appeal to moral principles when you’re out of power, but if you get power, you crush the people who appeal to moral principles—even the ones you espoused before you got power. If you have enough power to crush your opponents, you have enough power to crush your opponents—but this represents… not quite a burning of capital, but a sick sort of investment which may or may not pay off. You may be able to crush your opponents, but it doesn’t necessarily follow that your opponents aren’t able to crush you. And if you crush your opponents, you can’t use them, their skills, knowledge, etc.
This is the part where I attempt to avoid performing an amygdala hijack by using the phrase ‘amygdala hijack’, and reference Atlas Shrugged: the moochers crush the capitalists, so the capitalists leave, and the moochers don’t have access to the benefits of their talents anymore so their society falls apart. It’s not a perfect analogy—it’s been a while since I read it, but I don’t think the moochers saw themselves as aligned against the capitalists. But it’s close enough; if it helps, imagine they were Communists.
There ought to be a term for the difference between considering an action in and of itself and considering an action along with its game-theoretic effects, potential slippery slopes, and so on. Perhaps there already is. There also ought to be a term for the seemingly-irrational-and-actually-irrational-in-the-context-of-considering-an-action-in-and-of-itself cooperation-norms that you’re so strongly arguing for defecting from.
In the consequentialist ethics family, there’s act consequentialism, rule consequentialism, and a concept whose name I cannot recall (linked here, or possibly written here long ago) that I will call winning consequentialism. It dictates that you consider every action according to every possible consequentialism and pick the one with the best consequences.
I think it was called plus-consequentialism in the post, or maybe n-consequentialism, but it seems to capture this.
But your failure lies in assuming that winning consequentialism will always result in this sort of clean outcome. Less Wrong attempts to change the world not by the sword, or by emotional appeals, not even base electoralism, but by comments on the Internet. Is it really the case that this is always the winning outcome?
An experiment: Suppose you find yourself engaged in a struggle (any struggle) where you correctly apply winning consequentialism considering all contexts and cooperation norms and find that you should crush your enemy. What do you then do?
Your consequentialism sounds suspiciously like the opposite and I wonder how deeply you are committed to it.
I think it’s more that the things Arthur is responding to are, in fact, very racist.
Care to taboo what you mean by “racist”? In particular, is it “racist” to believe that traits like intelligence correlate with where someone’s ancestors came from? Does it matter if there is evidence for the belief in question? Does it matter if the belief is true?
Also, why is “racism” so uniquely awful? If you look at the history of the 20th century, far more people have been killed in the name of egalitarian ideologies (specifically communism) than in the name of ideologies generally considered “racist”.
When discussing existing oppressive social structures such as patriarchy, white supremacy, colonialism, and capitalism,
Taboo “oppressive”. Judging by how you’re calling capitalism oppressive, it appears that improving the living standard of most of the world is “oppression”. If so, we could probably use more of it.
In other words, if a fight is important to you, fight nasty. If that means lying, lie.
If you find yourself needing to lie for your cause, what you’re effectively admitting is that the truth doesn’t support it. You may want to consider updating on that fact when deciding whether you should really be supporting said cause.
Also, as I explain here, Yvain’s reason for not lying for your cause is not the best one he could give. The biggest problem is that it will fill your cause with people who believe said lies.
If you find yourself needing to lie for your cause, what you’re effectively admitting is that the truth doesn’t support it.
Not necessarily. You may be dealing with irrational people, who will not be moved by truth. Or the inferential distances can be long, and you only have a very short time to convince people before something irreversible happens—although in this case, you are creating problems in the long run.
(I generally agree with what you said. This is just an example of how this generalization is also leaky. And of course, because we run on corrupted hardware, every situation will likely seem to be the one where the generalization does not apply.)
In other words, if a fight is important to you, fight nasty. If that means lying, lie. If that means insults, insult. If that means silencing people, silence.
Holy shit yes! If you have anything to protect use all of your available strength to protect it! Shut up and multiply, think for at least five minutes about the problem, apply every ounce of your technique and then win.
Whatever you happened to believe, the winningest answer would be “No, never lie”. Because now that you’ve claimed your political position is likely to be based on lies, I’ve updated to consider arguments from that position as having zero evidential weight.
I would have thought that The Boy Who Cried Wolf was an adequate explanation in childhood of the selfish reasons to be honest.
I don’t think I have claimed that.
If the Butlerian Jihad is at your door looking for the FAI researchers in your floorboards, you lie and tell them you’re a loyal luddite. If you need a little more funding to finish your FAI and you can get it by pretending to be working on the next Snapchat clone to get VC money, lie to your VCs.
Like literally everything in life, lying has risks. But if you, in your art as a rationalist, decide those risks are acceptable, only base dogmatism dictates you be honest and turn over your friends to be executed or allow humans to continue to die.
The moral of The Boy Who Cried Wolf is not to be honest; it’s to not get caught lying.
(As an entire aside, do you really think that any political position, or even any fact anywhere that has touched human minds, is not at least partially based on lies? You might as well update the world to have zero evidential weight and spend all your time on a webforum arguing about ethics instead of going into the real world and effecting your goals.)