You may consider them fools and come up with a string of reasons not to take them seriously (and I am likely to agree), but that doesn’t get any typos fixed, and I’m assuming that’s the actual aim.
Off-topic, but I have run across this line of thinking recently, with regard to Wikileaks for example. Some expressed the view that since people on average are not good at dealing with information (they jump to conclusions, cherry-pick to support hating or loving conclusion x, etc.), revealing all this information actually leads people to make worse decisions.
I couldn’t articulate it at the time, but I feel like the reason people aren’t good with information is because we don’t give them enough, and that if we gave them this amount of information regularly, they would develop the skills to use it properly.
Something similar for blunt communication. People aren’t good with it, so don’t do it vs people aren’t good with it, so do it to let them learn to be good with it.
But people are used to blunt communication: it’s what rude people do when attempting to dominate them by showing that social rules don’t apply to the rude people.
And a list of reasons why blunt communication is more efficient does not mean that this effect does not happen.
And as I’ve noted, I’ve seen over and over people say they prefer unvarnished communication but mean they want to be free to send it; when they receive it, particularly when they receive it back, they tend to lash out.
My rough heuristic: A certain amount of wrapping is required for your message to be received. No-one said this would be easy, but it’s still not optional. Hang out on a community for a bit before diving in. If you’re a newbie, use a bit more politeness wrapper than you would when comfortable, because n00b questions get closer inspection. Etc., etc. Try not to be a dick, even if that’s quite difficult.
(This reads back to me a bit like platitudes, but is actually bitterly-won heuristics.)
But people are used to blunt communication: it’s what rude people do when attempting to dominate them by showing that social rules don’t apply to the rude people.
Yeah, unfortunately. That reads like “people are used to information: it’s what officials manipulate and deceive you with when they attempt to dominate you.” It’s the wrong understanding. Of course, as much as I’d like to justify my desire to be blunt, simply being blunt isn’t going to solve this problem.
I think that when one finds oneself writing this sentence, it is time to take a step back and think pretty hard about what one is saying.
We’re not talking about a mathematical fact that can be proven or disproven as correct; we’re not talking about people having the “wrong understanding” of, say, how Bayes’s Theorem works. What we are doing is describing a culture in which behaving in x way signals y, to wit, being blunt and direct signals rudeness. This is hard to stomach for people who are part of a subculture where that is not the case, but being part of that subculture and having that preference does not make that particular meme in the larger culture “wrong.” It’s not even meaningful to give it a value; it’s just an observation of the way it is. You can play along and be accepted/effective in that culture, or not. It’s your choice.
It’s the prescriptive/descriptive divide. When I say it’s the wrong understanding, I mean that if I were to prescribe what understandings people ought to have of communication protocols, I would be in error if I prescribed this one. This understanding is worse than another understanding they could have. There doesn’t seem to be any point being purely descriptive about anything.
You can play along and be accepted/effective in that culture, or not. It’s your choice.
False dilemma. I can agitate for change in that culture.
Oh hey, so it is. Well observed. (This is not sarcasm; I actually hadn’t noticed.)
ought … error … worse
The meaningfulness of these words relies on sharing the relevant parts of a value system, and we haven’t come anywhere near establishing that that’s the case. If you mean that it’s definitely more useful for people to behave in the way you prefer, you have not yet convinced me of that.
There doesn’t seem to be any point being purely descriptive about anything.
That depends on the goal, doesn’t it? If you’re a mapmaker, being purely descriptive rather than prescriptive is the whole point. When I’m setting about to choose my own behavior, I would like to have as good a descriptive map as possible of the way the world is now; if I find a part I dislike, I might then choose a behavior with which I intend to change it, but even while doing that I’m best served by having an accurate description in place.
False dilemma. I can agitate for change in that culture.
Fair point, but as above, it’s useful to have a very good understanding of what you’re trying to go about changing; and even then, simply contradicting it or behaving as if social norms aren’t what they are may not be sufficient to convince anyone of the rightness of your position.
If you’re a mapmaker, being purely descriptive rather than prescriptive is the whole point.
Indeed, excellent counter-example. I was wrong to say there is no point in being descriptive.
If you mean that it’s definitely more useful for people to behave in the way you prefer, you have not yet convinced me of that.
I am not sure that it is more useful. There appears, to me, to be some correlation between intelligence and blunt communication (nerds speak bluntly, mundanes politely) but that could be intelligence and contrarianism, or any other of many potential factors. I am not giving it any weight. However, I do think it’s the case that it is useful for “people who behave in this way” to congregate and continue to behave in this way with each other.
That is, when their value systems are sufficiently similar in relevant areas, I can say that being more polite is an error for them. And LessWrong is one place where the value systems sufficiently coincide.
There appears, to me, to be some correlation between intelligence and blunt communication (nerds speak bluntly, mundanes politely) but that could be intelligence and contrarianism, or any other of many potential factors
My pet theory about this is that intelligence correlates with not fitting in socially, which then correlates with deliberately doing things differently to prove that not-fitting-in is a choice. If you hang out in any subcultures (goth, roleplaying, etc), you’ll tend to see a lot of that kind of countersignalling. I guess that’s a point in favour of your contrarianism argument.
Another alternative is that intelligence correlates with realising that communication styles are just styles and not the natural order, which then frees them up to switch between styles at will.
There appears, to me, to be some correlation between intelligence and blunt communication
I’d really, really, really want to see any sort of numbers before presuming to make any such statement. You are talking about the nerd subculture, not about the world. I could just as well compare academics to stevedores and get the opposite plausible statement.
And LessWrong is one place where the value systems sufficiently coincide.
This comes across as wishful thinking on your part.
Since this dispute began, I have been trying to be more analytical in my reactions to comments—trying to determine what it is about them, in style or content, that I like.
I liked this comment, and upvoted it, partly because of its well-chosen counter-illustration, but also for reasons of style. It is relatively blunt, but the padding that it carries has a nice “rationalist” flavor. “I’d … want to see … numbers … before presuming …”. “This comes across as …” rather than simply “This is …”.
But in the course of making this analysis, it occurred to me that I am conducting the analysis as a bystander, rather than as the direct recipient of this feedback. I’m living in a forum where everything I write is perused by one recipient and ten bystanders. I know that the reaction of the recipient (and my reaction when I am in the recipient’s role) will be witnessed by these ubiquitous bystanders. The bystanders will judge—vote responses up or down. One reason we communicate differently here is that we are playing to an audience—not just conducting one-to-one communication.
You (David_Gerard) keep pointing out that we LessWrong denizens can cheerfully “dish out” bluntness, but we are not so happy about receiving it. True enough, but also a rather shallow observation. Surely, the ability to receive criticism without taking offense is a life skill every bit as important as the ability to dispense criticism without giving offense. One virtue of the culture of observed blunt communication that we cultivate here is that we get plenty of practice at receiving criticism, plus plenty of negative feedback if we respond by taking offense.
This may sound like more rationalization, but it is not. This environment has helped me to improve my own ability to “take coaching”, though I know I have a long way to go. Unfortunately, and this is the point you and Lionhearted have been consistently making, operating in this culture does not provide useful practice and feedback on the other important life-skill—offering criticism or correction without giving offense.
I liked this comment, and upvoted it, partly because of its well-chosen counter-illustration, but also for reasons of style. It is relatively blunt, but the padding that it carries has a nice “rationalist” flavor. “I’d … want to see … numbers … before presuming …”. “This comes across as …” rather than simply “This is …”.
You have correctly reverse-engineered how I wrote it ;-)
One virtue of the culture of observed blunt communication that we cultivate here is that we get plenty of practice at receiving criticism, plus plenty of negative feedback if we respond by taking offense.
I really don’t see it as a very blunt culture. (I suppose I should stress this more.) A frequently difficult one, but not blunt. Most comments are thoughtful and the commenters take due care. Some are indeed blunt to the point of rudeness, and you’ll see their good but blunt comments get lots of upvotes for content and downvotes for tone.
I’d really, really, really want to see any sort of numbers before presuming to make any such statement.
Hence why I said
I am not giving it any weight.
Now.
This comes across as wishful thinking on your part.
Really? Can you give me some examples of groups that do share the same value systems? I feel like LessWrong is at the extreme end of ‘well established value system’, as it regards bluntness/politeness.
There are many places on the Internet that are less polite than here. For example, YouTube comments.
Quick summary of some politeness protocols that most LW users employ:
Avoidance of ad hominem attacks and empty statements of emotion/opinion (“this is a bad idea” without any supporting evidence)
Being charitable—frequently people will write something along the lines of “It looks like you’re arguing X. X is bad because...” This is a very important aspect of politeness—it means taking the potential status hit to yourself of having misinterpreted the other person and provides them a line of retreat in case they really did mean X and now want to distance themselves from it.
In the same vein, clarifying what it is that you’re disagreeing about and why before descending with the walls of text.
Acknowledging when someone has made a particularly good post/argument even if you disagree with many of their points.
Apart from 2, these seem more like being rational than being polite. Possibly there is some overlap between politeness protocols in normal experience and rationality protocols on LessWrong. Possibly there is also some overlap between rudeness indicators in normal experience and rationality protocols on LessWrong.
frequently people will write something along the lines of “It looks like you’re arguing X. X is bad because...” This is a very important aspect of politeness—it means taking the potential status hit to yourself of having misinterpreted the other person and provides them a line of retreat in case they really did mean X and now want to distance themselves from it.
I’d say that that part (the bolded section) is bad if true, whether or not it is a “polite” thing to do. People should get used to being able to say “I was wrong” when they find out they were wrong. If someone’s post is genuinely ambiguous, then it’s fine to say “That sounds like you’re saying X, if so then I think that’s wrong, here’s why”, but if I say something that’s actually wrong and not particularly open to misinterpretation, and someone corrects me, then I wouldn’t consider them to be doing me a favour by giving me an out to allow me to change my mind while claiming that I didn’t mean it that way in the first place.
I wouldn’t consider them to be doing me a favour by giving me an out to allow me to change my mind while claiming that I didn’t mean it that way in the first place.
Why is it my responsibility to force you to admit your mistakes? Whether you take that line of retreat is more a reflection of your character than mine.
But one of the funny things about being polite is that by leaving them a graceful way out it’s actually easier for them to admit that they were wrong. Attack their status by making it clear that they were wrong and all you do is encourage status-saving behaviour. Now maybe you might say that this is a good thing because people need to learn how to admit to their mistakes even when they feel under attack, but most people are very very bad at that kind of graciousness. It’s much easier for someone to admit that they’re wrong if they don’t feel like it would lead to further attacks.
But one of the funny things about being polite is that by leaving them a graceful way out it’s actually easier for them to admit that they were wrong. Attack their status by making it clear that they were wrong and all you do is encourage status-saving behaviour. Now maybe you might say that this is a good thing because people need to learn how to admit to their mistakes even when they feel under attack, but most people are very very bad at that kind of graciousness. It’s much easier for someone to admit that they’re wrong if they don’t feel like it would lead to further attacks.
Good points; what you say (“by leaving them a graceful way out it’s actually easier for them to admit that they were wrong”) sounds quite plausible. (And I will admit that when I wrote the “I wouldn’t consider them to be doing me a favour...” bit, I was thinking ”...and neither should anyone else”, which neglects the fact that getting to that point can be a difficult process and that saying that everyone should do it isn’t helpful.) Though I would still say that I’d support a norm of encouraging newer users to get used to acknowledging mistakes, not taking disagreements/counterarguments/corrections as personal attacks, not taking unembellished corrections as meanness, etc.
Though I would still say that I’d support a norm of encouraging newer users to get used to acknowledging mistakes, not taking disagreements/counterarguments/corrections as personal attacks, not taking unembellished corrections as meanness, etc.
Upvoted for complete agreement—although this community is already far better at it than anywhere else I’ve ever been.
You seem to be asserting that if I give you an out you will take it and change your mind without admitting you were wrong, but if I don’t give you an out you will change your mind and admit you were wrong.
Which, OK, good for you. Of course, even better would be to not take the out if offered, and admit you were wrong even when you aren’t forced to… but still, the willingness to admit error when you don’t have a line of retreat is admirable.
The problem arises if we’re dealing with people who lack that willingness… who, given the choice between changing their minds and admitting they were wrong on the one hand, and not changing their minds on the other, will choose not to change their minds.
Are you suggesting that such people don’t exist? That trying to change their minds isn’t worthwhile? Something else?
You seem to be asserting that if I give you an out you will take it and change your mind without admitting you were wrong, but if I don’t give you an out you will change your mind and admit you were wrong.
I do not assert that, and I’m not just saying that because you proved me wrong but gave me an out by saying “You seem to be asserting”. :)
If I, personally, have been convinced that I was wrong about something, then I’ll say so, whether or not I have the option of pretending I actually meant something else. And that’s certainly encouraged by LW’s atmosphere (and it’s been explicitly discussed and advocated here at times). What I was disagreeing with was erratio’s implication that giving people that option is a polite and desirable thing to do.
You say that there are people “who, given the choice between changing their minds and admitting they were wrong on the one hand, and not changing their minds on the other, will choose not to change their minds,” and you are correct, and I also don’t claim that trying to change their minds isn’t worthwhile. But on Less Wrong, if a commenter would prefer to (and would be able to) actually hold onto a mistaken belief rather than acknowledge having been mistaken, then they have bigger rationality problems than just being wrong about some particular question; helping them solve that (which requires allowing ourselves to notice it) seems more important. I can’t say I’ve actually seen much of this here, but if we observed that some user frequently abandoned debates that they seemed to be losing, and later expressed the same opinions without acknowledging the strong counterarguments that they had previously ignored… then I’d just say that Less Wrong may not be a good fit for them (or that they need to lurk more and/or read more of things like “How To Actually Change Your Mind”, etc.). I would not say that we should have been more accommodating to their aversion to admitting error.
(Also, we should stop using the phrase “line of retreat” as we’re using it here, because it will make people think of the post “Leave a Line of Retreat” even though we’re talking about something pretty different.)
Agreed that a commenter who chooses to hold onto a mistaken belief rather than admit error is being imperfectly rational, and agreed that we are under no obligation to be “accommodating to their aversion.”
I’m more confused about the rest of this. Perhaps a concrete example will clarify my confusion.
Suppose Sam says something that’s clearly wrong, and suppose I have a choice between two ways of framing the counterarguments. One frame (F1) gives Sam a way of changing their mind without having to admit they’re wrong. The other (F2) does not.
Suppose further that Sam is the sort of person who, given a choice between changing their mind and admitting they were wrong on the one hand, and not changing their mind on the other, will choose not to change their mind.
You seem to agree that Sam is possible, and that changing Sam’s mind is worthwhile. And it seems clear that F1 has a better chance of changing Sam’s mind than F2 does. (Confirm?) So I think we would agree that in general, using F1 rather than F2 is worthwhile.
But, you say, on Less Wrong things are different. Here, using F2 rather than F1 is more likely to help Sam solve their “bigger rationality problems,” and therefore the preferred choice. (Confirm?)
So… OK. If I’ve understood correctly thus far, then my question is why is F2 more likely to solve their rationality problems here? (And, relatedly, why isn’t it also more likely to do so elsewhere?)
You mention that to help Sam solve those problems, we have to “allow ourselves to notice” those problems. You also suggest that Sam just isn’t a good fit for the site at all, or that they need to lurk more, or that they need to read the appropriate posts. I can sort of see how some of those things might be part of an answer to my question, but it’s not really clear to me what that answer is.
Can you clarify that?
(Incidently, it seems to me that that post is all about the fact that people are more likely to change their minds when it’s emotionally acceptable to do so, which is precisely what I’m talking about. But sure, I’m happy to stop using the phrase if you think it’s misleading.)
So… OK. If I’ve understood correctly thus far, then my question is why is F2 more likely to solve their rationality problems here? (And, relatedly, why isn’t it also more likely to do so elsewhere?)
F2 forces Sam to admit they were wrong. Not being able to admit you are wrong is a rationality problem, because not all truths are presented as F1 counterarguments—some, including experimental results, are F2 counterarguments to your state of mind. So F1 doesn’t attempt to solve the aversion to being wrong; F2 does.
The question of whether F2’s attempt succeeds often enough to be worth it is another question, one I don’t have any numbers or impressions on.
Suppose further that Sam is the sort of person who, given a choice between changing their mind and admitting they were wrong on the one hand, and not changing their mind on the other, will choose not to change their mind.
You said:
F2 forces Sam to admit they were wrong.
If the first statement is true, F2 doesn’t force Sam to admit they were wrong. What it does is force Sam not to change their mind.
If you’re rejecting my supposition… that is, if you’re asserting that Sam as described just doesn’t exist, or isn’t worth discussing… then I agree with you. But I explicitly asked you if that was what you meant, and you said it wasn’t.
If you’re accepting my supposition… well, that suggests that even though Sam won’t change his mind under F2, F2 is good because it makes Sam change his mind. That’s just nonsense.
If there’s a third possibility, I don’t see what it is.
Yeah, no, my idea was that F2 forces Sam to admit they were wrong, given that they change their mind. When considering the case of ‘on LessWrong’, I skipped the part where Sam does not change their mind. Oops. Yeah, I don’t think there are many Sams on LessWrong.
Moving on… let me requote the line that started this thread:
frequently people will write something along the lines of “It looks like you’re arguing X. X is bad because...” This is a very important aspect of politeness—it means taking the potential status hit to yourself of having misinterpreted the other person and provides them a line of retreat in case they really did mean X and now want to distance themselves from it.
That seems to me to be a pretty good summary of the strategy I used here… I summarized the position I saw you as arguing, then went on to explain what was wrong with that position.
Looking at the conversation, that strategy at least seems to have worked well… at least, it got us to a place where we could resolve the disagreement in a couple of short moves.
But you seem to be saying that, when dealing with people as rational as the typical LWer, it’s not a good strategy.
So, OK: what ought I have said instead, and how would saying that have worked better?
If the first statement is true, F2 doesn’t force Sam to admit they were wrong. What it does is force Sam not to change their mind.
This by itself would have worked, and to the extent it could be described as working better, it would have punished me for not properly constructing your model in my head, something which I consider required for a response.
My theory is that some folks here really value the perceived freedom to not spare people’s feelings with their posts/comments. I assume these folks experience the implied obligation in other social contexts to spare people’s feelings as onerous, though of course I don’t know.
Of course, that doesn’t mean they actually go around hurting people’s feelings all the time, no matter how much they may value the fact that they are free to do so.
Meanwhile, other folks carry on being kind/attentive/polite. I assume they don’t consider this an onerous obligation and behave here more or less as they do elsewhere along this axis.
And the sorts of folks who in most Internet channels create most of the emotional disturbance don’t seem to post much at all… either because karma works, or because they haven’t found the place, or because the admins are really good at filtering them out, or because the conversations here bore them, or some combination of those and other reasons.
The end result seems to be a “nice” level noticeably higher than most of the Internet, coupled with strong emotional support for not being “nice.” I found the dichotomy a little bewildering at first, but I’m kind of used to it now.
And the sorts of folks who in most Internet channels create most of the emotional disturbance don’t seem to post much at all… either because karma works, or because they haven’t found the place, or because the admins are really good at filtering them out, or because the conversations here bore them, or some combination of those and other reasons.
The end result seems to be a “nice” level noticeably higher than most of the Internet, coupled with strong emotional support for not being “nice.” I found the dichotomy a little bewildering at first, but I’m kind of used to it now.
If you browse the −1 comments, you’ll see people being voted down for behaving dickishly. Some of these then get upvoted.
Yes, I just meant going through recent comments, and threads with collapsed posts. I don’t know of a way to browse “worst comments” … it’s not clear it’d even be a good idea to have one.
On reflection, I don’t think the blunt-vs-nice scale is serving us very well at all. You see me endorsing some elements of “blunt” styles that are definitely negative, and I see you endorsing some elements of “nice” styles that are definitely negative. Neither of us is actually endorsing the negative elements; we’re just accidentally including them because our language is too imprecise. I think.
LessWrong has adopted some elements of bluntness because those elements serve the community well. Some of the elements would not serve us well in other social settings. When someone points out in a top-level post that this is the case, those who already know this instead see them suggesting that LessWrong should abandon these elements of bluntness.
On an arbitrary scale of 1 to 10 where 1 is Crocker’s Rules for everyone and 10 is horrifying, mincing politeness… 3. LessWrong on average is 3, but the good bits are 2.
Hmm. Getting an answer forced me to figure out exactly why I was asking. ;) I guess the followup question is, where on that scale would you put the threshold for everyday, out-in-public polite conversation between neurotypical adults? That is, the expected level, below which someone would come across as rude.
Between strangers, 7. Between acquaintances or friends, variation but it would congeal into two large groups hovering around 6 and 4.
If you want to see 9s and 10s you have to look for certain types of unstable power dynamics.
Basically, I like LessWrong’s approach because it feels more like ‘friendship group where politeness of 4-3 is okay’ and less like ‘strangers you should be polite to’.
I guess the followup question is, where on that scale would you put the threshold for everyday, out-in-public polite conversation between neurotypical adults?
Not enough information. Are the adults male, female or mixed? How much status do they have? What national background? Polite means a very different thing here (Australia) than it does in the US for example.
Yeah, but the scale we’re using isn’t very precise. The variables you mention will move the threshold around, certainly, but not so much that shokwave can’t at least give me a smallish range. We can limit it to modern, Western, and no significant status differences from each other.
Polite means a very different thing here (Australia) than it does in the US for example.
This kind of statement is one of the reasons I consider ‘politeness’ to be an almost irrelevant metric to consider when evaluating people’s statements. The relationship between politeness and social ‘defection’ is utterly negligible.
On the subject of Wikileaks, I strongly recommend this blog post and the 2006 paper it analyses. Assange sets out in detail precisely what he’s trying to achieve and how he plans to do it. It’s the roadmap for Wikileaks. Casual commentators on the subject, particularly in the media, seem almost completely unaware of it.
On a personal note, I was somewhat perturbed to discover that Wikileaks is slightly my fault. Um, whoops. [/brag]
Another alternative is that intelligence correlates with realising that communication styles are just styles and not the natural order, which then frees them up to switch between styles at will.
I’d really, really, really want to see any sort of numbers before presuming to make any such statement. You are talking about the nerd subculture, not about the world. I could just as well compare academics to stevedores and get the opposite plausible statement.
This comes across as wishful thinking on your part.
Since this dispute began, I have been trying to be more analytical in my reactions to comments—trying to determine what it is about them, in style or content, that I like.
I liked this comment, and upvoted it, partly because of its well-chosen counter-illustration, but also for reasons of style. It is relatively blunt, but the padding that it carries has a nice “rationalist” flavor. “I’d … want to see … numbers … before presuming …”. “This comes across as …” rather than simply “This is …”.
But in the course of making this analysis, it occurred to me that I am conducting the analysis as a bystander, rather than as the direct recipient of this feedback. I’m living in a forum where everything I write is perused by one recipient and ten bystanders. I know that the reaction of the recipient (and my reaction when I am in the recipient’s role) will be witnessed by these ubiquitous bystanders. The bystanders will judge—vote responses up or down. One reason we communicate differently here is that we are playing to an audience—not just conducting one-to-one communication.
You (David_Gerard) keep pointing out that we LessWrong denizens can cheerfully “dish out” bluntness, but we are not so happy about receiving it. True enough, but also a rather shallow observation. Surely, the ability to receive criticism without taking offense is a life skill every bit as important as the ability to dispense criticism without giving offense. One virtue of the culture of observed blunt communication that we cultivate here is that we get plenty of practice at receiving criticism, plus plenty of negative feedback if we respond by taking offense.
This may sound like more rationalization, but it is not. This environment has helped me to improve my own ability to “take coaching”, though I know I have a long way to go. Unfortunately, and this is the point you and Lionhearted have been consistently making, operating in this culture does not provide useful practice and feedback on the other important life-skill—offering criticism or correction without giving offense.
You have correctly reverse-engineered how I wrote it ;-)
I really don’t see it as a very blunt culture. (I suppose I should stress this more.) A frequently difficult one, but not blunt. Most comments are thoughtful and the commenters take due care. Some are indeed blunt to the point of rudeness, and you’ll see their good but blunt comments get lots of upvotes for content and downvotes for tone.
Hence why I said
Now.
Really? Can you give me some examples of groups that do share the same value systems? I feel like LessWrong is at the extreme end of ‘well established value system’, as it regards bluntness/politeness.
There are many places on the Internet that are less polite than here. For example, YouTube comments.
Quick summary of some politeness protocols that most LW users employ:
1. Avoidance of ad hominem attacks and empty statements of emotion/opinion (“this is a bad idea” without any supporting evidence).
2. Being charitable—frequently people will write something along the lines of “It looks like you’re arguing X. X is bad because...” This is a very important aspect of politeness—it means taking the potential status hit to yourself of having misinterpreted the other person and provides them a line of retreat in case they really did mean X and now want to distance themselves from it.
3. In the same vein, clarifying what it is that you’re disagreeing about and why before descending with the walls of text.
4. Acknowledging when someone has made a particularly good post/argument even if you disagree with many of their points.
Apart from 2, they seem more like being rational than being polite. Possibly there is some overlap between politeness protocols in normal experience and rationality protocols on LessWrong. Possibly there is also some overlap between rudeness indicators in normal experience and rationality protocols on LessWrong.
I think the name of the overlap in rationality and politeness is called “not responding emotionally when someone has a different opinion” ;)
I’d say that that part (the bolded section) is bad if true, whether or not it is a “polite” thing to do. People should get used to being able to say “I was wrong” when they find out they were wrong. If someone’s post is genuinely ambiguous, then it’s fine to say “That sounds like you’re saying X, if so then I think that’s wrong, here’s why”, but if I say something that’s actually wrong and not particularly open to misinterpretation, and someone corrects me, then I wouldn’t consider them to be doing me a favour by giving me an out to allow me to change my mind while claiming that I didn’t mean it that way in the first place.
Why is it my responsibility to force you to admit your mistakes? Whether you take that line of retreat is more a reflection of your character than mine.
But one of the funny things about being polite is that by leaving them a graceful way out it’s actually easier for them to admit that they were wrong. Attack their status by making it clear that they were wrong and all you do is encourage status-saving behaviour. Now maybe you might say that this is a good thing because people need to learn how to admit to their mistakes even when they feel under attack, but most people are very very bad at that kind of graciousness. It’s much easier for someone to admit that they’re wrong if they don’t feel like it would lead to further attacks.
Good points; what you say (“by leaving them a graceful way out it’s actually easier for them to admit that they were wrong”) sounds quite plausible. (And I will admit that when I wrote the “I wouldn’t consider them to be doing me a favour...” bit, I was thinking ”...and neither should anyone else”, which neglects the fact that getting to that point can be a difficult process and that saying that everyone should do it isn’t helpful.) Though I would still say that I’d support a norm of encouraging newer users to get used to acknowledging mistakes, not taking disagreements/counterarguments/corrections as personal attacks, not taking unembellished corrections as meanness, etc.
Upvoted for complete agreement—although this community is already far better at it than anywhere else I’ve ever been.
You seem to be asserting that if I give you an out you will take it and change your mind without admitting you were wrong, but if I don’t give you an out you will change your mind and admit you were wrong.
Which, OK, good for you. Of course, even better would be to not take the out if offered, and admit you were wrong even when you aren’t forced to… but still, the willingness to admit error when you don’t have a line of retreat is admirable.
The problem arises if we’re dealing with people who lack that willingness… who, given the choice between changing their minds and admitting they were wrong on the one hand, and not changing their minds on the other, will choose not to change their minds.
Are you suggesting that such people don’t exist? That trying to change their minds isn’t worthwhile? Something else?
I do not assert that, and I’m not just saying that because you proved me wrong but gave me an out by saying “You seem to be asserting”. :)
If I, personally, have been convinced that I was wrong about something, then I’ll say so, whether or not I have the option of pretending I actually meant something else. And that’s certainly encouraged by LW’s atmosphere (and it’s been explicitly discussed and advocated here at times). What I was disagreeing with was erratio’s implication that giving people that option is a polite and desirable thing to do.
You say that there are people “who, given the choice between changing their minds and admitting they were wrong on the one hand, and not changing their minds on the other, will choose not to change their minds,” and you are correct, and I also don’t claim that trying to change their minds isn’t worthwhile. But on Less Wrong, if a commenter would prefer to (and would be able to) actually hold onto a mistaken belief rather than acknowledge having been mistaken, then they have bigger rationality problems than just being wrong about some particular question; helping them solve that (which requires allowing ourselves to notice it) seems more important. I can’t say I’ve actually seen much of this here, but if we observed that some user frequently abandoned debates that they seemed to be losing, and later expressed the same opinions without acknowledging the strong counterarguments that they had previously ignored… then I’d just say that Less Wrong may not be a good fit for them (or that they need to lurk more and/or read more of things like “How To Actually Change Your Mind”, etc.). I would not say that we should have been more accommodating to their aversion to admitting error.
(Also, we should stop using the phrase “line of retreat” as we’re using it here, because it will make people think of the post “Leave a Line of Retreat” even though we’re talking about something pretty different.)
Agreed that a commenter who chooses to hold onto a mistaken belief rather than admit error is being imperfectly rational, and agreed that we are under no obligation to be “accommodating to their aversion.”
I’m more confused about the rest of this. Perhaps a concrete example will clarify my confusion.
Suppose Sam says something that’s clearly wrong, and suppose I have a choice between two ways of framing the counterarguments. One frame (F1) gives Sam a way of changing their mind without having to admit they’re wrong. The other (F2) does not.
Suppose further that Sam is the sort of person who, given a choice between changing their mind and admitting they were wrong on the one hand, and not changing their mind on the other, will choose not to change their mind.
You seem to agree that Sam is possible, and that changing Sam’s mind is worthwhile. And it seems clear that F1 has a better chance of changing Sam’s mind than F2 does. (Confirm?) So I think we would agree that in general, using F1 rather than F2 is worthwhile.
But, you say, on Less Wrong things are different. Here, using F2 rather than F1 is more likely to help Sam solve their “bigger rationality problems,” and therefore the preferred choice. (Confirm?)
So… OK. If I’ve understood correctly thus far, then my question is why is F2 more likely to solve their rationality problems here? (And, relatedly, why isn’t it also more likely to do so elsewhere?)
You mention that to help Sam solve those problems, we have to “allow ourselves to notice” those problems. You also suggest that Sam just isn’t a good fit for the site at all, or that they need to lurk more, or that they need to read the appropriate posts. I can sort of see how some of those things might be part of an answer to my question, but it’s not really clear to me what that answer is.
Can you clarify that?
(Incidently, it seems to me that that post is all about the fact that people are more likely to change their minds when it’s emotionally acceptable to do so, which is precisely what I’m talking about. But sure, I’m happy to stop using the phrase if you think it’s misleading.)
F2 forces Sam to admit they were wrong. Not being able to admit you are wrong is a rationality problem, because not all truths are presented as F1 counterarguments—some, including experimental results, are F2 counterarguments to your state of mind. So F1 doesn’t attempt to solve the aversion to being wrong; F2 does.
The question of whether F2′s attempt succeeds often enough to be worth it is another question, one I don’t have any numbers or impressions on.
I said:
You said:
If the first statement is true, F2 doesn’t force Sam to admit they were wrong. What it does is force Sam not to change their mind.
If you’re rejecting my supposition… that is, if you’re asserting that Sam as described just doesn’t exist, or isn’t worth discussing… then I agree with you. But I explicitly asked you if that was what you meant, and you said it wasn’t.
If you’re accepting my supposition… well, that suggests that even though Sam won’t change his mind under F2, F2 is good because it makes Sam change his mind. That’s just nonsense.
If there’s a third possibility, I don’t see what it is.
Yeah, no, my idea was that F2 forces Sam to admit they were wrong, given that they change their mind. When considering the case of ‘on LessWrong’, I skipped the bit where Sam does not change their mind. Oops. Yeah, I don’t think there are many Sams on LessWrong.
OK, glad we cleared that up.
Moving on… let me requote the line that started this thread:
That seems to me to be a pretty good summary of the strategy I used here… I summarized the position I saw you as arguing, then went on to explain what was wrong with that position.
Looking at the conversation, that strategy at least seems to have worked well… at least, it got us to a place where we could resolve the disagreement in a couple of short moves.
But you seem to be saying that, when dealing with people as rational as the typical LWer, it’s not a good strategy.
So, OK: what ought I have said instead, and how would saying that have worked better?
This by itself would have worked, and to the extent it could be described as working better, it would have punished me for not properly constructing your model in my head, something which I consider required for a response.
edit: the differences are very slight.
I’m just realising that our scales aren’t calibrated very similarly, and that you seem to think LessWrong is more “blunt”/less “nice” than I do.
My theory is that some folks here really value the perceived freedom to not spare people’s feelings with their posts/comments. I assume these folks experience the implied obligation in other social contexts to spare people’s feelings as onerous, though of course I don’t know.
Of course, that doesn’t mean they actually go around hurting people’s feelings all the time, no matter how much they may value the fact that they are free to do so.
Meanwhile, other folks carry on being kind/attentive/polite. I assume they don’t consider this an onerous obligation and behave here more or less as they do elsewhere along this axis.
And the sorts of folks who in most Internet channels create most of the emotional disturbance don’t seem to post much at all… either because karma works, or because they haven’t found the place, or because the admins are really good at filtering them out, or because the conversations here bore them, or some combination of those and other reasons.
The end result seems to be a “nice” level noticeably higher than most of the Internet, coupled with strong emotional support for not being “nice.” I found the dichotomy a little bewildering at first, but I’m kind of used to it now.
Yeah, why haven’t we attracted more trolls?
If you browse the −1 comments, you’ll see people being voted down for behaving dickishly. Some of these then get upvoted.
Is there a way to browse within a given karma tier? Or do you just mean browsing through the recent comments tier?
Yes, I just meant going through recent comments, and threads with collapsed posts. I don’t know of a way to browse “worst comments” … it’s not clear it’d even be a good idea to have one.
On reflection, I don’t think the blunt—nice scale is serving us very well at all. You see me endorsing some elements of “blunt” styles that are definitely negative, and I see you endorsing some elements of “nice” styles that are definitely negative. Neither of us is actually endorsing the negative elements; we’re just accidentally including them because our language is too imprecise. I think.
LessWrong has adopted some elements of bluntness because those elements serve the community well. Some of the elements would not serve us well in other social settings. When someone points out in a top-level post that this is the case, those who already know this instead see them suggesting that LessWrong should abandon these elements of bluntness.
For calibration purposes, where on that spectrum would you place the conversation we’re having right now? :)
On an arbitrary scale of 1 to 10 where 1 is Crocker’s Rules for everyone and 10 is horrifying, mincing politeness… 3. LessWrong on average is 3, but the good bits are 2.
Hmm. Getting an answer forced me to figure out exactly why I was asking. ;) I guess the followup question is, where on that scale would you put the threshold for everyday, out-in-public polite conversation between neurotypical adults? That is, the expected level, below which someone would come across as rude.
Between strangers, 7. Between acquaintances or friends, variation but it would congeal into two large groups hovering around 6 and 4.
If you want to see 9s and 10s you have to look for certain types of unstable power dynamics.
Basically, I like LessWrong’s approach because it feels more like ‘friendship group where politeness of 4-3 is okay’ and less like ‘strangers you should be polite to’.
Not enough information. Are the adults male, female or mixed? How much status do they have? What national background? Polite means a very different thing here (Australia) than it does in the US for example.
Yeah, but the scale we’re using isn’t very precise. The variables you mention will move the threshold around, certainly, but not so much that shokwave can’t at least give me a smallish range. We can limit it to modern, Western, and no significant status differences from each other.
Yeah, I can tell. ;)
This kind of statement is one of the reasons I consider ‘politeness’ to be an almost irrelevant metric to consider when evaluating people’s statements. The relationship between politeness and social ‘defection’ is utterly negligible.
On the subject of Wikileaks, I strongly recommend this blog post and the 2006 paper it analyses. Assange sets out in detail precisely what he’s trying to achieve and how he plans to do it. It’s the roadmap for Wikileaks. Casual commentators on the subject, particularly in the media, seem almost completely unaware of it.
On a personal note, I was somewhat perturbed to discover that Wikileaks is slightly my fault. Um, whoops.[/brag]