Well, as for me, reading half the Sequences changed my attitude a lot by simply convincing me to dare to be rational, that it is not socially disapproved, at least here. I would not call these norms, since I understand “norms” as “do this or else”. And it is not the specific techniques in the Sequences, but the attitudes: not trying to be too clever, not showing off, not using arguments as soldiers, not trying to score points, not being tribal. These are things I always liked, but on Reddit, for example, there was considerable pressure to do the opposite.
So it is not that these things are norms; it is simply that they are allowed.
A good parallel: throughout my life I have seen a lot of tough-guy posturing in high school, playgrounds, bars, locker rooms, and so on. Yet when I went to learn some boxing, paradoxically, that was the place where it felt most acceptable to be weak or timid. The attitude there is that we are all here to develop, and therefore being underdeveloped for now is OK.

One way to look at it is that most people out in life see human characteristics as fixed: you are smart or dumb, tough or puny, and you are just that, with no change and no development. Put differently, it is a testing, exam-taking attitude rather than a learning attitude: on a test you are supposed to prove you already have whatever virtue is valued there, and it is too late to say you are working on it. But in a boxing gym, where everybody is there to get tougher, there is no such testing attitude. You can be upfront about your weakness or timidity, and as long as you are working on it you get respect, because the learning attitude kills the testing attitude: in learning circumstances nobody considers such traits too innate.

Similarly, on LW the rationality-learning attitude kills the rationality-testing attitude, and with it the smarter-than-thou posturing and point-scoring, because showing off inborn IQ is less important than learning the optimal use of whatever amount of IQ there is. Thus there is no shame in admitting ignorance or flawed reasoning, as long as there is an effort to improve.
I think this is why. And this has little to do with topics and little to do with enforced norms.
I like your example and “learning environment” vs “testing environment”.
However, I am afraid that LW also attracts people who, instead of improving their rationality, want to do other things, such as winning yet another website for their political faction. Some people use the word “rationality” simply as a slogan meaning “my tribe is better than your tribe”.
There were a few situations when people wrote (on their blogs) something like: “first I liked LW because they are so rational, but then I was disappointed to find out they don’t fully support my political faction, which proves they are actually evil”. (I am exaggerating to make a point here.) And that’s the better case. The worse case is people participating in LW debates and abusing the voting system to downvote comments not because those comments are bad from the epistemic rationality point of view, but because they were written by people who disagree (or are merely suspected of disagreeing) with their political tribe.
This is all fine, but what is missing for me is the reasoning behind something like “… and this is bad enough to taboo it completely and forfeit all potential benefits, instead of taking these risks”, at least if I understand you right. The potential benefit is coming up with ways to seriously improve the world. The potential risk, if I get it right, is that some people will behave irrationally and that will make some other people angry.
Idea: let’s try to convince the webmaster to make a third “quarantine” tab, to the right of the Discussion tab, visible only to logged-in users. That would cut down on negative reflections from blogs, and downvotes could also be turned off there.
An alternative without programming changes would be biweekly “incisive open threads”, similar to Ozy’s race-and-gender open threads, with downvoting customarily tabooed in them. Try at least one?
Feel free to start a “political thread”. Worst case: the thread gets downvoted.
However, there were already such threads in the past. Maybe you should google them, look at the debates, and see what happened back then, because it is likely to happen again.
Not downvoting also has its own problems: genuinely stupid arguments remain visible (or can even get upvotes from their faction), and people can try to win the debate by flooding their opponent with many replies.
Another danger is that political debates will attract users like Eugine Nier / Azathoth123.
Okay, I do not know how to write this diplomatically, so I will be very blunt here to make it obvious what I mean: the current largest threat to political debate on LW is a group called “neoreactionaries”. They are something like “reinventing Nazis for clever contrarians”; kind of a cult around Michael Anissimov, who formerly worked at MIRI. (You can recognize them by their quoting of Moldbug and slogans like “Cthulhu always swims left”.) They do not give a fuck about politics being the mindkiller, but they like posting on LessWrong, because they like the company of clever people here, and they were recruited here, so they probably expect to recruit more people here. Also, LessWrong is pretty much the only debate forum on the whole internet that will not delete them immediately. If you start a political debate, you will find them all there; and they will not be there to learn anything, but to write about how “Cthulhu always swims left” and to try to recruit some LW readers.

Eugine Nier was one of them. He systematically downvoted all comments, including completely innocent comments outside of any political debate, by people who had dared to disagree with him once somewhere. This meant that if a new user happened to disagree with him once, they usually soon found themselves with negative karma and left LessWrong. No one knows how many potential users we may have lost this way.
I am afraid that if you start a political thread, you will get many comments about how “Cthulhu always swims left”, and anyone who reacts negatively will be accused of being a “progressive” (which in their language means: not a neoreactionary). If you ask for further explanation, you will either receive none or get a link to some long and obscurely written article by Moldbug. If you downvote them, they will create sockpuppets and upvote their comments back; if you disagree with them in debate, expect your total karma to magically drop by 100 points overnight.
Therefore I would prefer simply not doing this. But if you have to do it, give it a try and see for yourself. Just please read the older political threads first.
I upvoted for this:
And, to further drive home the point, I’ll link to the ones I could easily find: Jan 2012, Aug 2012, Dec 2012, Jan 2013, Feb 2013, more Feb 2013, Oct 2013, Jun 2014, Nov 2014.
Just out of curiosity, I looked at the latest politics thread in Vaniver’s list. Despite being explicitly about NRx, it contains only two references to “Cthulhu”, both by people arguing against NRx.
Rather anyone who isn’t sufficiently progressive gets called a neoreactionary.
Y’know, you do sound mindkilled about NRx…
Viliam_Bur is the person who gets messages asking him to deal with mass downvotes, so I am sympathetic to him not wanting us to attract more mass downvoters.
Not anymore, but yeah, this is where my frustration is coming from. Also, for every obvious example of voting manipulation, there are more examples of “something seems fishy, but there is no clear definition of ‘voting manipulation’ and if I go down this slippery slope, I might end up punishing people for genuine votes that I just don’t agree with, so I am letting it go”. But most of these voting games seem to come from one faction of LW users, which according to the surveys is just a tiny minority.
(When the “progressives” try to push their political agenda on LW—and I don’t remember them doing this recently—at least they do it by writing accusatory articles, and by complaining about LW and rationality on other websites, not by playing voting games. So their disruptions do not require moderator oversight.)
I don’t understand this word “was”—I just lost another 9+ karma paperclips to Eugine Nier.
Not to put too fine a point on it, but this seems less like a problem with political threads and more like a problem with someone driving most of the world’s population (especially the educated western population) away from existential risk prevention in general and FAI theory in particular.