As of writing (November 9, 2021), this comment has 6 Karma across 11 votes. As a newbie to LessWrong with only a general understanding of LessWrong norms, I find it surprising that the comment is positive. I was wondering if those who voted on this comment (or who have an opinion on it) would be interested in explaining what Karma score this comment should have and why.
My view, based on my own models of good discussion norms, is that the comment is mildly toxic and should be hovering around zero karma or in slightly negative territory, for the following reasons:
I would describe the tone as “sarcastic” in a way that makes it hard for me to distinguish between what the OP actually thinks and what they are saying or implying for effect.
The post doesn’t seem to engage with Geoff’s perspective in any serious way. Instead, I would describe it as casting aspersions on a straw model of Geoff.
The post seems more focused on generating applause lights via condemnation of Geoff than on trying to explain why Geoff is part of the Rationality community despite his protestations to the contrary. (I could imagine a comment which tries to weigh the evidence about whether Geoff ought to be considered part of the Rationality community even today, but this comment isn’t it.)
The comment repeatedly implies that Leverage was devoted to activities like “fighting evil spirits,” “using touch healing,” “exorcising demons,” etc., even though (1) the post where that is described (Zoe’s) only covers 2017–2019; (2) the post doesn’t specify that this kind of activity was common or typical even of her sub-group or of her overall experience; and (3) it specifically notes that most people at Leverage didn’t have this experience.
I don’t think the comment is more than mildly toxic because it does raise the valid consideration that Geoff does appear to have positioned himself as at least Rationalist-adjacent early on and because none of the offenses listed above are particularly heinous. I’m sure others disagree with my assessment and I’d be interested in understanding why.
[Context: I work at Leverage now, but didn’t during Leverage 1.0 although I knew many of the people involved. I haven’t been engaging with LessWrong recently because the discussion has seemed quite toxic to me, but Speaking of Stag Hunts and in particular this comment made me a little bit more optimistic so I thought I’d try to get a clearer picture of LessWrong’s norms.]
“6 Karma across 11 votes” is, like, not good. It’s about what I’d expect from a comment that is “mildly toxic [but] does raise [a] valid consideration” and “none of the offenses … are particularly heinous”, as you put it. (For better or worse, comments here generally don’t get downvoted into the negative unless they’re pretty heinous; as I write this only one comment on this post has been voted to zero, and that comment’s only response describes it as “borderline-unintelligible”.) It sounds like you’re interpreting the score as something like qualified approval because it’s above zero, but taking into account the overall voting pattern I interpret the score more like “most people generally dislike the comment and want to push it to the back of the line, even if they don’t want to actively silence the voice”. This would explain Rob calibrating the strength of his downvote over time.
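This is really helpful. Thanks!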
I can’t speak to either real-Viliam or the people upvoting or downvoting the comment, but here’s my best attempt to rewrite the comment in accordance with Duncan-norms (which overlap with but are not the same as LessWrong norms). Note that this is based off my best-guess interpretation of what real-Viliam was going for, which may not be what real-Viliam wanted or intended. Also please note that my attempt to “improve” Viliam’s comment should not be taken as a statement about whether or not it met some particular standard (even things that are already good enough can usually be made better). I’m not exactly answering Kerry’s question, just trying to be more clear about what I think good discourse norms are.
To address Geoff’s question about how to get out of the rationalist community...
I got the impression from Geoff’s commentary (particularly [quote] and [quote]) that he felt people were mistaken to typify him and Leverage as being substantially connected to, or straightforwardly a part of, the rationalist community.
This doesn’t make much sense to me, given my current level of exposure to all this. My understanding is that Geoff:
a) hangs out with lots of rationalist cultural leaders
b) regularly communicates using lots of words, concepts, and norms common to the rationalist community
c) actively recruits among rationalists, and
d) runs Leverage, which seems very much typical of [the type of project a rationalist might launch].
People are free to set me straight on a, b, c, and d if they think any of them are wrong, but note that any alternate explanation would need to account for the impression I formed from just kind of looking around; it won’t be enough to just declare that I was wrong.
Given all that (and given that a lot of details known to Geoff and Leveragers will be opaque to the rest of us, thanks to the relatively closed nature of the project), I’m not sure how I or the median LWer was “supposed to know” that they weren’t closely related to the rationalist community.
But anyway. Setting that aside, and looking forward: if I were to offer advice, the advice would be to straightforwardly reduce Geoff’s/Leverage’s involvement with rationalists (move away from the Bay, change hiring practices) and/or to put some effort into injecting the epistemic and cultural differences into common knowledge. A little ironic to e.g. write a big post about this and put it on LessWrong (like a FB post about how you’re leaving FB), but that does seem like a start.
(This is not me being super charitable, but: it seems to me that the whole demons-and-crystals thing, which so far has not been refuted, to my knowledge, is also a start. /snark)
I don’t know how to soften the following, but in the spirit of disclosure:
It’s my primary hypothesis that the confusion was not accidental. In a set of 100 people making the claims Geoff is making, I think a substantial fraction of them are being at least somewhat untruthful, and in a set of 100 people who had intentionally … parasitized? … the rationalist community, I think more than half of them would say the sorts of things Geoff is saying now.
I recognize this hypothesis is rude, but its rudeness doesn’t make it false. I’m trying to be clear about the fact that I know it could be wrong, that things aren’t always what they seem, etc.
But given what I know, there seem to be clear incentives to remaining close to the rationalist community in ways that match my impression of Geoff/Leverage’s actual closeness. e.g. it makes recruitment among rationalists much easier, makes it easier to find donors already willing to give to weird longtermist projects, etc. And if the cultural divide were really sharp, the way (it seems to me that) Geoff is saying, and the inferential gaps genuinely wide, then I’m not sure how Leverage would have been successful at attracting the interest of e.g. multiple junior CFAR staff. I’m reaching for a metaphor, here; what I’ve got is “I don’t think people in seminary school often become rabbis or imams.”
To be clear, I’m not saying that there isn’t a big gap. If I understand correctly, habryka was “shocked” to discover how far from central rationalist epistemics Leverage was, after already working there for a time [link]. I’m more saying “for there to be such a big gap and for it to have been so hard to spot at a casual glance is more likely to be explained by intent than by accident.”
Or so it seems to me, at least. Open to alternate explanations. Just skeptical on priors.
(And given e.g. habryka’s confusion, even with all of his local insider knowledge, it seems unreasonable to expect the median LWer or rationalist to have been less confused.)
In any event, the situation has changed. I’m actually in support of Geoff’s desire to part ways; personally I’d rather not spend much more time thinking about Leverage ever again. But I think it requires some steps that my admittedly-sketchy model of Geoff is loath to take. I think that “we get to divorce from the rationalists without leaving the Bay and changing the name of the org and changing our recruitment and donor pools and so on and so forth” might be a fabricated option.
Separately, but still pretty relevantly: this conversation didn’t touch much on what seems to me to be the actual core issue, which is the experience of Zoe and others. Understanding what happened, making sure it doesn’t happen again, trying to achieve justice (or at least closure), etc. I am curious, given that the conversation is largely here on LW, now, when LW can expect updates on all that.
Disclaimer: just as authors are not their characters, so too is “Duncan trying to show how X would be expressed under a particular set of norms” not the same as “Duncan asserting X.” I have not, in the above, represented my stance on all of this, just tried to meet Kerry’s curiosity/hope about norms of discourse.
My apologies to Viliam for the presumption, especially if I somehow strawmanned or misrepresented Viliam’s points. Viliam is not (exactly) to blame for my own interpretations and projections based on reading the above comment.
For the record, real-Viliam confirms that this version mostly (see below) captures the spirit of the original comment correctly, with a mixed opinion (slightly more positive than negative) on the style.
Nitpicking:
A little ironic to e.g. write a big post about this and put it on LessWrong (like a FB post about how you’re leaving FB), but that does seem like a start.
This thought never crossed my mind. If LW comments on Leverage, it makes perfect sense for Leverage to post a response on LW.
I think that “we get to divorce from the rationalists without leaving the Bay and changing the name of the org and changing our recruitment and donor pools and so on and so forth” might be a fabricated option.
This might be true per se, but is not what I tried to say. By “also, burn down the old website, and rename the organization” I tried (and apparently failed) to say that in my opinion, actions of Geoff/Leverage make more sense when interpreted as “hide the evidence of past behavior” rather than “make it obvious that we are not rationalists”.
In my opinion (sorry if this is too blunt), Geoff may be the kind of actor who creates good impressions in short term and bad impressions in long term, and some of his actions make sense as an attempt to disconnect his reputation from his past actions. (This could start another long debate. In general, I support the “right to be forgotten” when it refers to distant past, or there is good evidence that the person has changed substantially; but it can also be used as a too convenient get-out-of-jail-free card. Humans gossip for a reason. Past behavior is the best predictor of future behavior.)
Thanks a lot for taking the time to write this. The revised version makes it clearer to me what I disagree with and how I might go about responding.
An area of overlap that I notice between Duncan-norms and LW norms is sentences like this:
(This is not me being super charitable, but: it seems to me that the whole demons-and-crystals thing, which so far has not been refuted, to my knowledge, is also a start. /snark)
Where the pattern is something like: “I know this is uncharitable/rude, but [uncharitable/rude thing].” Where I come from, the caveat isn’t understood to do any work. If I say “I know this is rude, but [rude thing]” I expect the recipient to take offense to roughly the same degree as if there were no caveat at all, and I expect the rudeness to derail the recipient’s ability to think about the topic to roughly the same degree.
If you’re interested, I’d appreciate the brief argument for thinking that it’s better to have norms that allow for saying the rude/uncharitable thing with a caveat instead of having norms that encourage making a similar point with non-rude/charitable comments instead.
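Happy to try.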
There are sort of two parts to this, but they overlap and I haven’t really teased them apart, so sorry if this is a bit muddled.
I think there’s a tension between information and adherence-to-norms.
Sometimes we have a rude thought. Like, it’s not just that its easiest expression is rude, it’s that the thought itself is fundamentally rude. The most central example imo is when you genuinely think that somebody is wrong about themselves/their own thought processes/engaging in self-deception/in the grips of a blind spot. When your best hypothesis is that you actually understand them better than they understand themselves.
It’s not really possible to say that in a way that doesn’t contain the core sentiment “I think I know better than you,” here. You can do a lot of softening the blow, you can do a lot of hedging, but in the end, you’re either going to share your rude information, or you are going to hide your rude information.
Both LW culture and Duncan culture have a strong, endorsed bias toward making as much information shareable as possible.
Duncan culture, at least (no longer speaking for LW) also has a strong bias toward doing things which preserve and strengthen the social fabric.
(Now we’re into part two.)
If I express a fundamentally rude thought, but I do so in a super careful hedged and cautious way with all the right phrases and apologies, then what often happens is that the other person feels like they cannot be angry.
They’ve still been struck, but they were struck in a way that causes everyone else to think the striking was measured and reasonable, and so if they respond with hurt and defensiveness, they’ll be the one to lose points.
Even though they were the one who was “attacked,” so to speak.
A relevant snippet from another recent comment of mine:
Look, there’s this thing where sometimes people try to tell each other that something is okay. Like, “it’s okay if you get mad at me.”
Which is really weird, if you interpret it as them trying to give the other person permission to be mad.
But I think that’s usually not quite what’s happening? Instead, I think the speaker is usually thinking something along the lines of:
Gosh, in this situation, anger feels pretty valid, but there’s not universal agreement on that point—many people would think that anger is not valid, or would try to penalize or shut down someone who got mad here, or point at their anger in a delegitimizing sort of way. I don’t want to do that, and I don’t want them to be holding back, out of a fear that I will do that. So I’m going to signal in advance something like, “I will not resist or punish your anger.” Their anger was going to be valid whether I recognized its validity or not, but I can reduce the pressure on them by removing the threat of retaliation if they choose to let their emotions fly.
Similarly, yes, it was obvious that the comment was subjective experience. But there’s nevertheless something valuable that happens when someone explicitly acknowledges that what they are about to say is subjective experience. It pre-validates someone else who wants to carefully distinguish between subjectivity and objectivity. It signals to them that you won’t take that as an attack, or an attempt to delegitimize your contribution. It makes it easier to see and think clearly, and it gives the other person some handles to grab onto. “I’m not one of those people who’s going to confuse their own subjective experience for objective fact, and you can tell because I took a second to speak the shibboleth.”
So, as I see it, the value in “I admit this is bad but I’m going to do the bad thing” is sort of twofold.
One, it allows people to share information that they would otherwise be prevented from sharing, including “prevented by not having the available time and energy to do all of the careful softening and hedging.” Not everyone has the skill of modeling the audience and speaking diplomatically, and there’s value in giving those people a path to saying their piece, but we don’t want to abandon norms of politeness and so an accepting-of-the-costs and a taking-of-lumps is one way to allow that data in.
And two, it removes barriers in the way of appropriate pushback. By acknowledging the rudeness up front, you embolden the people who were offended to be offended in a way that will tend to delegitimize them less. You’re sort of disentangling your action from the norms. If you just say a rude thing and defend it because “whatev, it’s true and justified,” then you’re also incrementally weakening a bunch of structures that are in place to protect people, and protect cooperation. But if you say something like “I am going to say a thing that deserves punishment because it’s important to say, but then also I will accept the punishment,” you can do less damage to the idea that it’s important to be polite and charitable in the first place.
tension between information and adherence-to-norms
This mostly holds for information pertaining to norms. Math doesn’t need controversial norms; there is no tension there. Beliefs/claims that influence the transmission of norms are themselves targeted by norms, to ensure systematic transmission. This is what anti-epistemology is: it’s doing valuable work in instilling norms, including norms for perpetuating anti-epistemology.
So the soft taboo on politics is about not getting into a subject matter that norms care about. And the same holds for interpersonal stuff.
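OK, excellent, this is also quite helpful.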
For both my own thought and in high-trust conversations I have a norm that’s something like “idea generation before content filter” which is designed to allow one to think uncomfortable thoughts (and sometimes say them) before filtering things out. I don’t have this norm for “things I say on the public internet” (or any equivalent norm). I’ll have to think a bit about what norms actually seem good to me here.
I think I can be on board with a norm where one is willing to say rude or uncomfortable things provided they’re (1) valuable to communicate and (2) one makes reasonable efforts to nevertheless protect the social fabric and render the statement receivable to the person to whom it is directed. My vague sense of comments with the “I know this is uncharitable/rude, but [uncharitable/rude thing]” is that more than half of the time I think the caveat insulates the poster from criticism and does not meaningfully protect the social fabric or help the person to whom the comments are directed, but I haven’t read such comments carefully.
In any case, I now think there is at least a good and valid version of this norm that should be distinguished from abuses of the norm.
If I tried to make it explicit, I guess the rudeness disclaimer means that the speaker believed there was a politeness-clarity tradeoff, and decided to sacrifice politeness in order to maximize clarity.
If the observer appreciates the extra clarity, and thinks the sacrifice was worth it, the rudeness disclaimer serves as a reminder that they might want to correspondingly reduce the penalty they typically assign for rudeness.
Depending on context, the actual observer may be the addressee and/or a third party. So, if the disclaimer has no effect on you, maybe you were not its intended audience. For example, people typically don’t feel grateful for being attacked more clearly.
.
That said, my speech norms are not Duncan’s speech norms. From my perspective, if the tone of the message is incongruent with its meaning, it feels like a form of lying. Strong emotions correspond to strong words; writing like a lawyer/diplomat is the equivalent of talking like a robot. (And I don’t believe that talking like robots is the proper way for rationalists to communicate.) Gestures and tone of voice are also, in theory, not necessary to deliver the message.
From my perspective, Duncan-speech is more difficult to read; it feels like if I don’t pay sufficient attention to some words between the numerous disclaimers, I may miss the entire point. It’s like the text is “no no no no (yes), no no no no (yes), no no no no (yes)”, and if you pay enough attention, you may decipher that the intended meaning is “(yes, yes, yes)”, but if the repeated disclaimers make you doze off, you might skip the important parts and conclude that he was just saying “no no no no”. But, dunno, perhaps if you practice this often, the encoding and decoding happens automatically. I mean, this is not just about Duncan, I also know other people who talk like this, and they seem to understand each other with no problem, it’s just me who sometimes needs a translator.
I am trying to be more polite than my natural style, but it costs me some mental energy, and sometimes I am just like fuck this. I prefer to imagine that I am making a politeness-clarity tradeoff, but maybe I’m just rationalizing, and using a convenient excuse to indulge in my baser instincts. Upvote or downvote at your own discretion. I am not even arguing in favor of my style; perhaps I am wrong and shouldn’t be doing this; I am probably defecting in some kind of Prisoner’s Dilemma. I am just making it clear that not only do I not follow Duncan’s speech norms, but I also disagree with them. (That is, I disagree with the idea that I should follow them. I am okay with Duncan following his own norms.)
.
EDIT: I am extremely impressed by Duncan’s comment, which I didn’t read before writing this. On reflection, this feels weird, because it makes me feel that I should take Duncan’s arguments more seriously… potentially including his speech norms… oh my god… I probably need to sleep on this.
This comment is excellent. I really appreciate it.
I probably share some of your views on the “no no no no (yes), no no no no (yes), no no no no (yes)” thing, and we don’t want to go too far with it, but I’ve come to like it more over time.
(Semi-relatedly: I think I rejected the sequences unfairly when I first encountered them early on for something like this kind of stylistic objection. Coming from a philosophical background I was like “Where are the premises? What is the argument? Why isn’t this stated more precisely?” Over time I’ve come to appreciate the psychological effect of these kinds of writing styles and value that more than raw precision.)
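FWIW I downvoted Viliam’s comment soon after he posted it, and have strong-downvoted it now that it has more karma.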
I, on the other hand, strong-upvoted it (and while I didn’t downvote Kerry’s reply, I must say that I find such “why aren’t you downvoting this comment, guys? doesn’t it break the rules??” comments to be obnoxious in general).
I find this kind of question really valuable. The karma system has massive benefits, but it can also be emotionally tough, especially for people with status-regulating emotions. In my experience, discussing reasons for voting explicitly usually makes me feel better about it, even though I don’t have a gears model of why that is; I’m just reporting on observed data points. Maybe because it provides affirmation that we’re basically all trying to do the right thing rather than fight some kind of zero-sum game.
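That seems basically fair.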
An unendorsed part of my intention is to complain about the comment since I found it annoying. Depending on how loudly that reads as being my goal, my comment might deserve to be downvoted to discourage focusing the conversation on complaints of this type.
The endorsed part of my intention is that the LW conversations about Leverage 1.0 would likely benefit from commentary by people who know what actually went on in Leverage 1.0. Unfortunately, the set of “people who have knowledge of Leverage 1.0 and are also comfortable on LW” is really small. I’m trying to see if I am in this set by trying to understand LW norms more explicitly. This is admittedly a rather personal goal, and perhaps it ought to be discouraged for that reason, but I think indulging me a little bit is consonant with the goals of the community as I understand them.
Also, to render an implicit thing I’m doing explicit, I think I keep identifying myself as an outsider to LW as a request for something like hospitality. It occurs to me that this might not be a social form that LW endorses! If so, then my comment probably deserves to be downvoted from the LW perspective.
I hope you will feel comfortable here. I think you are following the LW norms quite okay. You seem to take the karma too seriously, but that’s what new users are sometimes prone to do; karma is an important signal, but it also inevitably contains noise; in the long term it usually seems to work okay. If that means something to you, your comments are upvoted a lot.
I apologize for the annoying style of my comment. I will try to avoid it in the future, though I cannot in good faith make a promise to do so; sorry about that.
I sincerely believe that Geoff is a dangerous person, and I view his actions with great suspicion. This is not meant as an attack on you. Feel free to correct me whenever I am factually wrong; I prefer being corrected to staying mistaken. (Also, thanks to both Rob and Said for doing what they believed was the right thing.)
Unfortunately, the set of “people who have knowledge of Leverage 1.0 and are also comfortable on LW” is really small.
[Biting my tongue hard to avoid a sarcastic response. Trying to channel my inner Duncan. Realizing that I am actually trying to write a sarcastic response using mock-Duncan’s voice. Sheesh, this stuff is difficult… Am I being meta-sarcastic now? By the way, Wikipedia says that sarcasm is illegal in North Korea; I am not making this up...]
I am under the impression that (some) Leverage members signed non-disclosure agreements. Therefore, when I observe the lack of Leverage supporters on LW, there are at least two competing explanations matching the known data, and I am not sure how to decide which one is closer to reality:
the rationalist community and LW express a negative attitude towards people supporting Leverage, so those people avoid the environment they perceive as unfriendly;
people involved with Leverage cannot speak openly about Leverage… maybe only about some aspects of it, but not discussing Leverage at all helps them stay on the safe side;
and perhaps also some kind of “null hypothesis” is worth considering, such as:
LW only attracts a small fraction of the population; only a few people have insider knowledge of Leverage; it is not unlikely that the intersection of these two sets just happens to be empty.
Do I understand you correctly as suggesting that the negative attitude of LW towards Leverage is the actual reason why we do not have more conversations about Leverage here? I am aware of some criticism of Connection Theory on LW; is this what you have in mind, or something else? (Well, obviously Zoe’s article, but that only happened recently, so it can’t explain the absence of Leverage supporters before that.)
To me it seems that the combination of “Geoff prefers some level of secrecy about Leverage activities” + “Connection Theory was not well received on LW” + “there are only a few people in Leverage anyway” is a sufficient explanation of why the Leverage voices have been missing on LW. Do you have some evidence that contradicts this?