Executive summary: I have no idea what you’re talking about.
Standards are not really popular. Most people don’t like them. Half the people here, I think, don’t even see the problem that I’m trying to point at. Or they see it, but they don’t see it as a problem.
I gather that you’re upset about how the Leverage conversation went, and also Cancel Culture, so I assume your chief proposition is that LessWrong is canceling Geoff Anders; but you haven’t actually made that case, just vaguely gestured with words.
I think that a certain kind of person is becoming less prevalent on LessWrong, and a certain other kind of person is becoming more prevalent, and while I have nothing against the other kind, I really thought LessWrong was for the first group.
What are the two kinds of persons? Really, I honestly do not know what you are claiming here. Repeat: I don’t have even a foggy guess as to what your two kinds of person are. Am I “a certain other kind of person”? How can I know?
Distinguish feeling from fact.
This post has virtually no facts. It has short, punchy sentences with italics for emphasis. It is written like a hortatory sermon. Its primary tool is rhetoric. The first quarter of it is essentially restating parts of the sequences. Then you point to some comments and are upset that they got upvoted. Others, you’re upset they haven’t been upvoted. I have no idea whatsoever why you feel these things, and you don’t elaborate. I am apparently one of the people who “don’t even see the problem that [you’re] trying to point at.”
This comment was much longer in draft, but I’ve deleted the remainder because I don’t want to seem “impatient” or “sneering”. I’m just confused: You wrote all these words intending to convince people of something, but you don’t specify what it is, and you don’t use the tools we typically use to convince (facts, reliable sources, syllogistic reasoning, math, game theory...). Am I just not part of the intended audience? If so, who are they?
Yikes, despite Duncan’s best attempts at disclaimers and clarity and ruling out what he doesn’t mean, he apparently still didn’t manage to communicate the thing he was gesturing at. That’s unfortunate. (And it also makes me worry whether I have understood him correctly.)
I will try to explain some of how I understand Duncan.
I have not read the first Leverage post and so cannot comment on those examples, but I have read jessicata’s MIRI post.
and this still not having incorporated the extremely relevant context provided in this, and therefore still being misleading to anyone who doesn’t get around to the comments, and the lack of concrete substantiation of the most radioactive parts of this, and so on and so forth.
As I understand it: This post criticized MIRI and CFAR by drawing parallels to Zoe Curzi’s experience of Leverage. Having read the former but not the latter, I found the former… not very substantive? Making vague parallels rather than object-level arguments? Merely mirroring the structure of the other post? In any case, there’s a reason why the post sits at 61 karma with 171 votes and 925 comments, and that’s not because it was considered uncontroversially true. Similarly, there’s a reason why Scott Alexander’s comment in response has 362 karma (6x that of the original post; I don’t recall ever seeing anything remotely like that on the site): the information in the original post is incomplete or misleading without this clarification.
The problem at this point is that this ultra-controversial post on LW does not have something like a disclaimer at the top, nor would a casual reader notice that it has lots of downvotes. All the nuance is in the impenetrable comments. So anyone who just reads that post without wading into the comments will get misinformed.
As for the third link in Duncan’s quote, it points to an anonymous comment, supposedly by a former CFAR employee, that painted a strongly negative picture of CFAR. But multiple CFAR employees replied and did not share those impressions of their employer. Which would have been a chance for dialogue and truthseeking, except… the anonymous commenter never followed up, so we ended up with a 41-comment thread that started with those anonymous and unsubstantiated claims and never reached a proper resolution (and yet the original comment is strongly upvoted).
Does that make things a bit clearer? In all those cases Duncan (as I understand him) is pointing at things where the LW culture fell far short of optimal; he expects us to do better. (EDIT: Specifically, and to circle back on the Leverage stuff: He expects us to be truthseeking period, to have the same standards of rigor both for critics and defenders, etc. I think he worries that the culture here is currently too happy to upvote anything that’s critical (e.g. to encourage the brave act of speaking out), without extending the same courtesy to those who would speak out in defense of the thing being criticized. Solve for the equilibrium, and the consequences are not good.)
Personally I’m not so sure to what extent “better culture” is the solution (as I am skeptical of the feasibility of anything which requires time and energy and willpower), but I have posted several suggestions for how “better software” could help in specific situations (e.g. mods being able to put a separate disclaimer above sufficiently controversial / disputed posts).
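To make that concrete, here is a rough sketch of the kind of heuristic such a disclaimer could hang off of. This is purely my own illustration: the function names and threshold numbers are invented, and nothing like this currently exists on LW as far as I know.

```typescript
// Rough sketch: treat a post as "disputed" when its net karma is low
// relative to the number of votes it received. Thresholds are invented.
interface PostStats {
  karma: number;     // net karma (weighted upvotes minus downvotes)
  voteCount: number; // total number of votes cast
}

function isDisputed(post: PostStats, minVotes = 50, maxKarmaPerVote = 0.5): boolean {
  if (post.voteCount < minVotes) return false; // too few votes to call anything disputed
  return post.karma / post.voteCount < maxKarmaPerVote;
}

// The jessicata post discussed above (61 karma on 171 votes) would get flagged:
console.log(isDisputed({ karma: 61, voteCount: 171 })); // true
```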
This comment was much longer in draft, but I’ve deleted the remainder because I don’t want to seem “impatient” or “sneering”. I’m just confused: You wrote all these words intending to convince people of something, but you don’t specify what it is, and you don’t use the tools we typically use to convince (facts, reliable sources, syllogistic reasoning, math, game theory...). Am I just not part of the intended audience? If so, who are they?
Thanks very much for taking the time to include this paragraph; it’s doing precisely the good thing. It helps my brain not e.g. slide into a useless and unnecessary defensiveness or round you off to something you’re not trying to convey.
I gather that you’re upset about how the Leverage conversation went, and also Cancel Culture, so I assume your chief proposition is that LessWrong is canceling Geoff Anders; but you haven’t actually made that case, just vaguely gestured with words.
That’s not, in fact, my chief proposition. I do claim that something-like-the-mass-of-users is doing something-resembling-canceling-Leverage (such that e.g. if I were to propose porting over some specific piece of Leverage tech to LW or an EA org’s internal culture, people would panic in roughly the same way people panic about the concept of eugenics).
But that’s an instance of what I was hoping to talk about, not the main point, which is why I decided not to spend a ton of time digging into all of the specific examples.
What are the two kinds of persons?
In short: people who think that it’s important to stick to the rationality 101 basics even when it’s inconvenient, versus those willing to abandon them (and upvote others abandoning them).
This post has virtually no facts. It has short, punchy sentences with italics for emphasis. It is written like a hortatory sermon. Its primary tool is rhetoric.
Yes. I’m trying to remind people why they should care. Note, though, that in combination with Concentration of Force, it’s saying a much more tightly defined and specific thing—”here’s a concept, and I’d like to apply that concept to this social domain.”
EDIT: in the discussion below, some people have seemed to take this as an admission of sorts, as opposed to a “sure, close enough.” The words “exhortatory” and “rhetoric” are labels, each of which can cover a wide range of space; something can be a valid match for one of those labels yet not at all central.
I was acknowledging “sure, there’s some degree to which this post could be fairly described as exhortatory or rhetoric.” I was not agreeing with ”...and therefore any and all complaints one has about ‘exhortation’ or ‘rhetoric’ are fair to apply here.” I don’t think supposedlyfun was trying to pull a motte-and-bailey or a fallacy-of-the-grey; that’s why I replied cooperatively. Others, though, do seem to me like they are trying to, and I am not a fan.
I have no idea whatsoever why you feel these things, and you don’t elaborate.
I did elaborate on one. Would you be willing to choose another from the linked examples? The one that’s the most confusing or least apparently objectionable? I don’t want to take hours and hours, but I’m certainly willing to go deep on at least a couple.
I spent 15 minutes re-reading the thread underneath orthonormal’s comment to try to put myself in your head. I think maybe I succeeded, so here goes. But, speaking as a person whose job involves persuading people, it’s Not Optimal For Your Argument that I had to do this to engage with your model here, and it’s potentially wasteful if I’ve failed at modeling you.
I read both of the comments discussed below, at the time I was following the original post and comments, but did not vote on either.
***
orthonormal P1: Anders seemed like a cult leader/wannabe based on my first impressions, and I willingly incurred social cost to communicate this to others
orthonormal P2 [which I inferred using the Principle of Charity]: Most of the time, people who immediately come across as cult leaders are trying to start a cult
Duncan P1: It’s bad when LW upvotes comments with very thin epistemic rigor
Duncan P2: This comment has very thin epistemic rigor because it’s based on a few brief conversations
Gloss: I don’t necessarily agree with your P2. It’s not robust, but nor is it thin; if true, it’s one person’s statement that, based on admittedly limited evidence, they had a high degree of confidence that Anders wanted to be a cult leader. I can review orthonormal’s post history to conclude that ze is a generally sensible person who writes as though ze buys into LW epistemics, and is also probably known by name to various people on the site, meaning if Anders wanted to sue zir for defamation, Anders could (another social and financial cost that orthonormal is incurring). Conditional on Anders not being a cult leader, I would be mildly surprised if orthonormal thought Anders was a cult leader/wannabe.
Also, this comment—which meets your epistemic standards, right? If so, did it cause you to update on the “Leverage is being canceled unfairly” idea?
***
Matt P1: I spent hundreds of hours talking to Anders
Matt P2: If he were a cult leader/wannabe, I would have noticed
Duncan P1: It’s bad when LW doesn’t upvote comments with good epistemic rigor
Duncan P2: This comment has good epistemic rigor because Matt has way more evidence than orthonormal
Gloss: [Edit: Upon reflection, I have deleted this paragraph. My commentary is not germane to the issue that Duncan and I are debating.]
***
The karma score disparity is currently 48 on 39 votes, to 5 on 26 votes.
Given my thought process above, which of the comments should I have strongly upvoted, weakly upvoted, done nothing to, weakly downvoted, or strongly downvoted, on your vision of LW?
Or: which parts of my thought process are inimical to your vision of LW?
***
If it helps you calibrate your response, if any, I spent about 45 minutes researching, conceptualizing, drafting, and editing this comment.
Quick point to get out of the way: re: the comment that you thought would likely meet my standards, yes, it does; when I hovered over it I saw that I had already (weak) upvoted it.
Here’s my attempt to rewrite orthonormal’s first comment; what I would have said in orthonormal’s shoes, if I were trying to say what I think orthonormal is trying to say.
All right, here comes some subjective experience. I’m offering this up because it seems relevant, and it seems like we should be in wide-net data gathering mode.
I met Geoff Anders at our 2012 CFAR workshop, and my overwhelming impression was “this person wants to be a cult leader.” This was based on [specific number of minutes] of conversation.
The impression stuck with me strongly enough that I felt like mentioning it maybe as many as [specific number] of times over the years since, in various conversations. I was motivated enough on this point that it actually somewhat drove a wedge between me and two increasingly-Leverage-enmeshed friends, in the mid-2010s.
I feel like this is important and relevant because it seems like yet again we’re in a situation where a bunch of people are going “gosh, such shock, how could we have known?” The delta between my wannabe-cult-leader-detectors and everyone else’s is large, and I don’t know its source, but the same thing happened with [don’t name him, don’t summon him], who was booted from the Berkeley community for good reason.
I don’t think opaque intuition should be blindly followed, but as everyone is reeling from Zoe’s account and trying to figure out how to respond, one possibility I want to promote to attention is hey, maybe take a minute to listen to people like me?
Not as anything definitive, but if I do an honest scan over the past decade, I feel like I’m batting … 3⁄5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1⁄5, and that means there’s probably something to be learned from me and people like me.
If you’re actually looking for ways to make this better in the future, anyway.
orthonormal P1: Anders seemed like a cult leader/wannabe based on my first impressions, and I willingly incurred social cost to communicate this to others (i.e. this wasn’t just idle hostility)
orthonormal P2 [which I inferred using the Principle of Charity]: This is relevant because, separate from the question of whether my detectors are accurate in an absolute sense, they’re more accurate than whatever it is all of you are doing
Duncan P1: It’s bad when LW upvotes comments that aren’t transparent about what they’re trying to accomplish and via what channels they’re trying to accomplish it
Duncan P2: orthonormal’s original comment is somewhat bad in this way; it’s owning its content on the surface but the implicature is where most of the power lies; the comment does not on its face say why it exists or what it’s trying to do in a way that an autistic ten-year-old could parse (evidence: I felt myself becoming sort of fuzzy/foggy and confused, reading it). As written, I think its main goal is to say “I told you so and also I’m a better judge of things than all of you”? But it doesn’t just come right out and say that and then pay for it, the way that I say in the OP above that I’m often smarter than other people in the room (along with an acknowledgement that there’s a cost for saying that sort of thing).
I do think that the original version obfuscated some important stuff (e.g. there’s a kind of motte-and-bailey at the heart of “we met at our CFAR workshop”; that could easily imply “we spent fifteen intensely intimate hours in one another’s company over four days” or “we spoke for five minutes and then were in the same room for a couple of classes”). That’s part of it.
But my concern is more about the delta between the comments’ reception. I honestly don’t know how to cause individuals voting in a mass to get comments in the right relative positions, but I think orthonormal’s being at 48 while Matt’s is at 5 is a sign of something wrong.
I think orthonormal’s belongs at something like 20, and Matt’s belongs at something like 40. I voted according to a policy that attempts to cause that outcome, rather than weak upvoting orthonormal’s, as I otherwise would have (its strengths outweigh its flaws and I do think it was a positive contribution).
In a world where lots of LessWrongers are tracking the fuzziness and obfuscation thing, orthonormal’s comment gets mostly a bunch of small upvotes, and Matt’s gets mostly a bunch of strong upvotes, and they both end up in positive territory but with a clear (ugh) “status differential” that signals what types of contributions we want to more strongly reward.
As for Matt’s comment:
Matt’s comment in part deserves the strong upvote because it’s a high-effort, lengthy comment that tries pretty hard to go slowly and tease apart subtle distinctions and own where it’s making guesses and so forth; agnostic of its content my prior a third of the way through was “this will ultimately deserve strong approval.”
I don’t think most of Matt’s comment was on the object level, i.e. comments about Anders and his likelihood of being a cult leader, wannabe or otherwise.
I think that it was misconstrued as just trying to say “pshhh, no!” which is why it hovers so close to zero.
My read of Matt’s comment:
Matt P1: It’s hard to tell what Ryan and orthonormal are doing
Matt P2: There’s a difference between how I infer LWers are reading these comments based on the votes, and how I think LWers ought to interpret them
Matt P3: Here’s how I interpret them
Matt P4: Here’s a bunch of pre-validation of reasons why I might be wrong about orthonormal, both because I actually might and because I’m worried about being misinterpreted and want to signal some uncertainty/humility here.
Matt P5: Ryan’s anecdote seems consistent, to me, with a joke of a form that Geoff Anders makes frequently.
Matt P6: My own personal take is that Geoff is not a cult leader and that the evidence provided by orthonormal and Ryan should be considered lesser than mine (and here’s why)
Matt P7-9: [various disclaimers and hedges]
Duncan P1: This comment is good because of the information it presents
Duncan P2: This comment is good because of the way it presents that information, and the way it attempts to make space for and treat well the previous comments in the chain
Duncan P3: This comment is good because it was constructed with substantial effort
Duncan P4: It’s bad that comments which are good along three different axes, and bad along none as far as I can see, are ranked way below comments that are much worse along those three axes and also have other flaws (the unclear motive thing).
I don’t disagree with either of your glosses, but notably they miss the above axes. Like, taking your good-faith best guess as to what I was thinking, I agree with your disagreements with that version of me; your pushback against a hypothesized-Duncan who’s dinging orthonormal for epistemic thinness is good pushback.
But I think my version of orthonormal’s comment is stronger. I don’t think their original comment was not-worth-writing, and I wouldn’t say “don’t contribute if you’re not going to put forth as much effort as I did in my rewrite,” but I do think it was less worth writing than the rewrite. I think the rewrite gives a lot more, and … hypnotizes? … a lot less.
As for your gloss on Matt’s comment specifically, I just straightforwardly like it; if it were its own reply and I saw it when revisiting the thread I would weak or strong upvote it. I think it does exactly the sane-itizing light-shining that I’m pulling for, which feels to me like it was only sporadically (and not reliably) present throughout the discussions.
I took however many minutes it’s been since you posted your reply to write this. 30-60?
Thanks, supposedlyfun, for pointing me to this thread.
I think it’s important to distinguish my behavior in writing the comment (which was emotive rather than optimized—it would even have been in my own case’s favor to point out that the 2012 workshop was a weeklong experiment with lots of unstructured time, rather than the weekend that CFAR later settled on, or to explain that his CoZE idea was to recruit teens to meddle with the other participants’ CoZE) from the behavior of people upvoting the comment.
I expect that many of the upvotes were not of the form “this is a good comment on the meta level” so much as “SOMEBODY ELSE SAW THE THING ALL ALONG, I WORRIED IT WAS JUST ME”.
This seems true to me. I’m also feeling a little bit insecure or something and wanting to reiterate that I think that particular comment was a net-positive addition and in my vision of LessWrong would have been positively upvoted.
Just as it’s important to separate the author of a comment from the votes that comment gets (which they have no control over), I want to separate a claim like “this being in positive territory is bad” (which I do not believe) from “the contrast between the total popularity of this and that is bad.”
I’m curious whether I actually passed your ITT with the rewrite attempt.
I think that if I put a more measured version of myself back into that comment, it has one key difference from your version.
“Pay attention to me and people like me” is a status claim rather than a useful model.
I’d have said “pay attention to a person who incurred social costs by loudly predicting one later-confirmed bad actor, when they incur social costs by loudly predicting another”.
(My denouncing of Geoff drove a wedge between me and several friends, including my then-best friend; my denouncing of the other one drove a wedge between me and my then-wife. Obviously those rifts had much to do with how I handled those relationships, but clearly it wasn’t idle talk from me.)
Otherwise, I think the content of your ITT is about right.
(The emotional tone is off, even after translating from Duncan-speak to me-speak, but that may not be worth going into.)
For the record, I personally count myself 2 for 2.5 on precision. (I got a bad vibe from a third person, but didn’t go around loudly making it known; and they’ve proven to be not a trustworthy person but not nearly as dangerous as I view the other two. I’ll accordingly not name them.)
I think our crux is: whether it’s better for the LW community when comments explicitly state a reasonable amount of the epistemic hedging that they’re doing.
Out of all the things you would have added to orthonormal’s comment, the only one that I didn’t read at the time as explicit or implicit in zir comment was, “Not as anything definitive, but if I do an honest scan over the past decade, I feel like I’m batting … 3⁄5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1/5”. I agree it would be nice if people gave more information about their own calibration where available. I don’t know whether it was available to orthonormal.
As for the rest, I’m sticking that at the end of this comment as a sort of appendix.
If I’m right about the crux, that is totally not in the set of Things That I Thought You Might Have Been Saying after reading the original post. Re-reading the original post now, I don’t see how I could have figured out that this is what our actual disagreement was.
I notice that I am surprised that {the norm of how explicit a comment needs to be regarding its own epistemic standard} prompted you to write the original post. Honestly, the intensity of the post seems disproportionate to the size of the disagreement, and also to the likelihood that people are going to disagree with you to the point that they want to not be in a community with you anymore. I don’t feel like we need to fork anything based on the distance between our positions.
Why do you think the intensity scalars are so different between us?
***
All right, here comes some subjective experience. I’m offering this up because it seems relevant, and it seems like we should be in wide-net data gathering mode.
The comment makes it clear that it is subjective experience. I wouldn’t expect ortho to add it if ze didn’t think it was relevant. People sharing their impressions of a situation to get at the truth, which seemed to be the point of the post and comments, just is wide-net data gathering mode.
I met Geoff Anders at our 2012 CFAR workshop, and my overwhelming impression was “this person wants to be a cult leader.” This was based on [specific number of minutes] of conversation.
I don’t expect ortho to remember the number of minutes from nine years ago.
The impression stuck with me strongly enough that I felt like mentioning it maybe as many as [specific number] of times over the years since, in various conversations.
I don’t expect ortho to remember the number of conversations since 2012, and if ze had inserted a specific number, I wouldn’t have attached much weight to it for that reason.
I was motivated enough on this point that it actually somewhat drove a wedge between me and two increasingly-Leverage-enmeshed friends, in the mid-2010s.
This is in there well enough that I don’t see any value in saying it with more words. Crux?
I feel like this is important and relevant because it seems like yet again we’re in a situation where a bunch of people are going “gosh, such shock, how could we have known?”
This is plausibly why ortho felt like adding zir experience, but there are other reasons ze might have had, and zir reason doesn’t really matter; to me, zir shared experience was just additional data.
The delta between my wannabe-cult-leader-detectors and everyone else’s is large, and I don’t know its source, but the same thing happened with [don’t name him, don’t summon him], who was booted from the Berkeley community for good reason.
This is in there well enough that I don’t see any value in saying it with more words. Crux?
I don’t think opaque intuition should be blindly followed, but as everyone is reeling from Zoe’s account and trying to figure out how to respond, one possibility I want to promote to attention is hey, maybe take a minute to listen to people like me?
“Hey maybe take a minute to listen to people like me” is implicit in the decision to share one’s experience. Crux?
Not as anything definitive, but if I do an honest scan over the past decade, I feel like I’m batting … 3⁄5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1⁄5, and that means there’s probably something to be learned from me and people like me.
See above.
If you’re actually looking for ways to make this better in the future, anyway.
I don’t think ortho would have shared zir experience if ze didn’t think zir interlocutors wanted to do better in the future, so I read this as implicit, and I think I would in any LW conversation. In fact, this sentence would have come across as bizarrely combative to me. Crux?
I notice that I am surprised that {the norm of how explicit a comment needs to be regarding its own epistemic standard} prompted you to write the original post.
Hmmm, something has gone wrong. This is not the case, and I’m not sure what caused you to think it was the case.
“How explicit comments need to be regarding their own epistemic status” is a single star in the constellation of considerations that caused me to write the post. It’s one of the many ways in which I see people doing things that slightly decrease our collective ability to see what’s true, in a way that compounds negatively, where people might instead do things that slightly increase our collective ability, in a way that compounds positively.
But it’s in no way the central casus belli of the OP. The constellation is. So my answer to “Why do you think the intensity scalars are so different between us?” is “maybe they aren’t? I didn’t mean the thing you were surprised by.”
I don’t expect ortho to remember the number of minutes from nine years ago...I don’t expect ortho to remember the number of conversations since 2012, and if ze had inserted a specific number, I wouldn’t have attached much weight to it for that reason.
Here, I was pulling for the virtue of numeric specificity, which I think is generally understood on LW. I’m reminded of the time that some researchers investigated what various people meant by the phrase “a very real chance,” and found that at least one of them meant 20% and at least one of them meant 80% (which are opposites).
It’s true that numbers aren’t super reliable, but even estimated/ballpark numbers (you’ll note I wrote the phrase “as many as” and imagined ortho stating a ceiling) are much better for collective truth-tracking than wide-open vague phrases that allow people with very different interpretations to be equally confident in those interpretations. The goal, after all, at least in my view, is to help us narrow down the set of possible worlds consistent with observation. To provide data that distinguishes between possibilities.
The comment makes it clear that it is subjective experience.
True. (I reiterate, feeling a smidge defensive, that I’ve said more than once that the comment was net-positive as written, and so don’t wish to have to defend a claim like “it absolutely should have been different in this way!” That’s not a claim I’m making. I’m making the much weaker claim that my rewrite was better. Not that the original was insufficient.)
The thing that I’m pulling for, with the greater explicitness about its subjectivity …
Look, there’s this thing where sometimes people try to tell each other that something is okay. Like, “it’s okay if you get mad at me.”
Which is really weird, if you interpret it as them trying to give the other person permission to be mad.
But I think that’s usually not quite what’s happening? Instead, I think the speaker is usually thinking something along the lines of:
Gosh, in this situation, anger feels pretty valid, but there’s not universal agreement on that point—many people would think that anger is not valid, or would try to penalize or shut down someone who got mad here, or point at their anger in a delegitimizing sort of way. I don’t want to do that, and I don’t want them to be holding back, out of a fear that I will do that. So I’m going to signal in advance something like, “I will not resist or punish your anger.” Their anger was going to be valid whether I recognized its validity or not, but I can reduce the pressure on them by removing the threat of retaliation if they choose to let their emotions fly.
Similarly, yes, it was obvious that the comment was subjective experience. But there’s nevertheless something valuable that happens when someone explicitly acknowledges that what they are about to say is subjective experience. It pre-validates someone else who wants to carefully distinguish between subjectivity and objectivity. It signals to them that you won’t take that as an attack, or an attempt to delegitimize your contribution. It makes it easier to see and think clearly, and it gives the other person some handles to grab onto. “I’m not one of those people who’s going to confuse their own subjective experience for objective fact, and you can tell because I took a second to speak the shibboleth.”
Again, I am not claiming, and have not at any point claimed, that ortho’s comment needed to do this. But I think it’s clearly stronger if it does.
This is plausibly why ortho felt like adding zir experience, but there are other reasons ze might have had, and zir reason doesn’t really matter; to me, zir shared experience was just additional data.
I validate that. But I suspect you would not claim that their reason doesn’t matter at all, to anyone. And I suspect you would not claim that a substantial chunk of LWers aren’t guessing or intuiting or modeling or projecting reasons, and then responding based on the cardboard cutouts in their minds. The rewrite included more attempts to rule out everything else than the original comment did, because I think ruling out everything else is virtuous, and one of those moves that helps us track what’s going on, and reduces the fog and confusion and rate of misunderstandings.
“Hey maybe take a minute to listen to people like me” is implicit in the decision to share one’s experience.
I don’t think that’s true at all. I think that there are several different implications compatible with the act of posting ortho’s comment, and that “I’m suggesting that you weight my opinion more heavily based on me being right in this case” is only one such implication, and that it’s valuable to be specific about what you’re doing and why because other people don’t actually just “get” it. The illusion of transparency is a hell of a drug, and so is the typical mind fallacy. Both when you’re writing, and assume that people will just magically know what you’re trying to accomplish, and when you’re reading, and assume that everyone else’s interpretation will be pretty close to your own.
Again, I am not claiming, and have not at any point claimed, that ortho’s comment needed to head off that sort of misunderstanding at the pass. But I think it’s clearly better if it does so.
I don’t think ortho would have shared zir experience if ze didn’t think zir interlocutors wanted to do better in the future, so I read this as implicit, and I think I would in any LW conversation. In fact, this sentence would have come across as bizarrely combative to me.
I actually included that sentence because I felt like ortho’s original comment was intentionally combative (and a little bizarrely so), and that my rewrite had removed too much of its intentional heat to be a sufficiently accurate restatement. So I think we’re not in disagreement on that.
Understood: the comment-karma-disparity issue is, for you, a glaring example of a larger constellation.
Also understood: you and I have different preferences for explicitly stating underlying claims. I don’t think your position is unreasonable, just that it will lead to much longer comments, possibly at the cost of clarity and engagement. Striking that balance is Hard.
I think we’ve drilled as far down as is productive on my concerns with the text of your post. I would like to see your follow-up post on the entire constellation, with the rigor customary here. You could definitely persuade me. I maybe was just not part of the target audience for your post.
(Something genuinely amusing, given the context, about the above being at 3 points out of 2 votes after four hours, compared to its parent being at 30 points out of 7 votes after five.)
It’s bad that comments which are good along three different axes, and bad along none as far as I can see, are ranked way below comments that are much worse along those three axes and also have other flaws
I have an alternative and almost orthogonal interpretation for why the karma scores are the way they are.
Both in your orthonormal-Matt example, and now in this meta-example, the shorter original comments required less context to understand and got more upvotes, while the long, meandering, detail-oriented, high-context responses were hardly even read by anyone.
This makes perfect sense to me—there’s a maximum comment length after which I get a strong urge to just ignore / skim a comment (which I initially did with your response here; and I never took the time to read Matt’s comments, though I also didn’t vote on orthonormal’s comment one way or another, nor vote in the jessicata post much at all), and I would be astonished if that only happened to me.
Also think about how people see these comments in the first place. Probably a significant chunk comes from people browsing the comment feed on the LW front page, and it makes perfect sense to scroll past a long sub-sub-sub-comment that might not even be relevant, and that you can’t understand without context, anyway.
So from my perspective, high-effort, high-context, lengthy sub-comments intrinsically incur a large attention / visibility (and therefore karma) penalty. Things like conciseness are also virtues, and if you don’t consider them in your model of “good along three different axes, and bad along none as far as I can see”, then that model is incomplete.
(Also consider things like: How much time do you think the average reader spends on LW; what would be a good amount of time, relative to their other options; would you prefer a culture where hundreds of people take the opportunity cost to read sub-sub-sub-comments over one where they don’t; also people vary enormously in their reading speed; etc.)
Somewhat related: my post in this thread on some of the effects of the existing LW karma system. If we grant the above, one remaining problem is that the original orthonormal comment was highly upvoted but looked worse over time:
What if a comment looks correct and receives lots of upvotes, but over time new info indicates that it’s substantially incorrect? Past readers might no longer endorse their upvote, but you can’t exactly ask them to rescind their upvotes, when they might have long since moved on from the discussion.
First, some off-the-cuff impressions of matt’s post (in the interest of data gathering):
In the initial thread I believe that I read the first paragraph of matt’s comment, decided I would not get much out of it, and stopped reading without voting.
Upon revisiting the thread and reading matt’s comment in full, I find it difficult to understand and do not believe I would be able to summarize or remember its main points now, about 15 minutes after the fact.
This seems somewhat interesting to test, so here is my summary from memory. After this I’ll reread matt’s post and compare what I thought it said upon first reading with what I think it says upon a second closer reading:
[person who met geoff] is making anecdotal claims about geoff’s cult-leader-ish nature based on little data. People who have much more data are making contrary claims, so it is surprising that [person]’s post has so many upvotes. [commenter to person] is using deadpan in a particular way, which could mean multiple things depending on context but I lack that context. I believe that they are using it to communicate that geoff said so in a non-joking manner, but that is also hearsay.
Commentary before re-reading: I expect that I missed a lot, since it was a long post and it did not stick in my mind particularly well. I also remember a lot of hedging that confused me, and points that went into parentheticals within parentheticals. These parentheticals were long enough that I remember losing track of what point was being made. I may also have confabulated, mixing arguments made elsewhere in this thread about upvotes with some from matt’s post.
I wanted to keep the summary “pure” in the sense that it is a genuine recollection without re-reading, but for clarity [person] is orthonormal and [commenter to person] is RyanCarey.
Second attempt at summarizing while flipping back and forth between editor and matt’s comment:
RyanCarey is either mocking orthonormal or providing further weak evidence, but I don’t know which.
One reading of orthonormal’s comment is that he had a strong first impression, has been engaging in hostile gossip about Geoff, and has failed to update since in the presence of further evidence. Some people might have different readings. Orthonormal’s post has lots of karma, they have 15k+ karma in general, and their post is of poor quality, therefore the karma system may be broken.
RyanCarey used deadpan in an unclear way, I believe the best reading of their comment is that Geoff made a joke about being a cult leader. Several other commenters and I, all of whom have much more contact with Geoff than orthonormal, do not think he is or wants to be a cult leader. It is out of character for Geoff to make a deadpan joke about wanting to be a cult leader and RyanCarey didn’t give confidence in their recollection of their memory, therefore people should be unimpressed with the anecdote.
I am explicitly calling out orthonormal’s comment as hostile gossip, which I will not back up here but will back up in a later post. You are welcome to downvote me because of this, but if you do it means that the discussion norms of LessWrong have corroded. Other reasons for downvotes might be appropriate, such as the length.
How about we ask Geoff? I hereby ask Geoff if he’s a cult leader, or if he has any other comment.
I talked with Geoff recently, which some might see as evidence of a conspiracy.
Editing that summary to be much more concise:
Orthonormal has had little contact with Geoff, but has engaged, and continues to engage, in hostile gossip. I and others with more substantive contact do not believe he is a cult leader. The people orthonormal has talked with (alluded to by the conversations that have cost orthonormal reputationally) have had much more contact with Geoff. Despite all of this, orthonormal refuses to believe that Geoff is not a cult leader. I believe we should base the likelihood of Geoff being a cult leader on the accounts of those who have had more contact with him, or even on Geoff’s own words.
I notice that as I re-read matt’s post, I expect that the potential reading of orthonormal’s comment that he presents at the beginning (a reading that I find uncharitable) is in fact matt’s own reading. But he doesn’t actually say this outright. Instead he says “An available interpretation of orthonormal’s comment is...”. Indeed, I initially had an author’s note in the summary reflecting that I was unsure whether “an available interpretation” was matt’s interpretation. It is only much later (inside a parenthetical) that he says “I want to note that while readers may react negatively to me characterising orthonormal’s behaviour as “hostile gossip”...” to indicate that the uncharitable reading is in fact matt’s reading.
Matt’s comment also included some remarks that I read as sneering:
I wonder whether orthonormal has other evidence, or whether orthonormal will take this opportunity to reduce their confidence in their first impression… or whether orthonormal will continue to be spectacularly confident that they’ve been right all along.
I would have preferred his comment to start small with some questions about orthonormal’s experience rather than immediately accuse them of hostile gossip. For instance, matt might have asked about the extent of orthonormal’s contact with Geoff, how confident orthonormal is that Geoff is a cult leader, and whether orthonormal updated against Geoff being a cult leader in light of their friends believing Geoff wasn’t a cult leader, etc. Instead, those questions are assumed to have answers that are unsupportive of orthonormal’s original point (the answers assumed in matt’s comment in order: very little contact, extremely confident, anti-updates in the direction of higher confidence). This seems like a central example of an uncharitable comment.
Overall I find matt’s comment difficult to understand after multiple readings, and uncharitable toward those he is conversing with, although I do value the data it adds to the conversation. I believe this lack of charity is part of why matt’s comment has not done well in terms of karma. I still have not voted on matt’s comment and do not believe I will. There are parts of it that are valuable, but it is uncharitable, and charity is a value I hold above most others. In cases like these, where parts of a comment are valuable and other parts are the sort of thing I would rather see pruned from the gardens I spend my time in, I tend to withhold judgment.
How do my two summaries compare? I’m surprised by how close the first summary I gave was to the “much more concise” summary I gave later. I expected to have missed more, largely due to matt’s comment’s length. I also remember finding it distasteful, which I omitted from my summaries but likely stemmed from the lack of charity extended to orthonormal.
Do other readers find my summary, particularly my more concise summary, an accurate portrayal of matt’s comment? How would they react to that much more concise comment, as compared to matt’s comment?
Strong upvote for doing this process/experiment; this is outstanding and I separately appreciate the effort required.
Do other readers find my summary, particularly my more concise summary, an accurate portrayal of matt’s comment? How would they react to that much more concise comment, as compared to matt’s comment?
I find your summary at least within-bounds, i.e. not fully ruled out by the words on the page. I obviously had a different impression, but I don’t think that it’s invalid to hold the interpretations and hypotheses that you do.
I particularly like and want to upvote the fact that you’re being clear and explicit about them being your interpretations and hypotheses; this is another LW-ish norm that is half-reliable and I would like to see fully reliable. Thanks for doing it.
When it comes to assessing whether a long comment or post is hard to read, quality and style of writing matter, too. SSC’s Nonfiction Writing Advice endlessly hammers home the point of dividing text into ever-smaller chunks, and e.g. here’s one very long post by Eliezer that elicited multiple comments of the form “this was too boring to finish” (e.g. this one), some of which were alleviated merely by adding chapter breaks.
And since LW makes it trivial to add headings even to comments (e.g. I used headings here), I guess that’s one more criterion for me to judge long comments by.
(One could even imagine the LW site nudging long comments towards including stuff like headings. One could imagine a good version of a prompt like this: “This comment / post is >3k chars but consists of only 3 paragraphs and uses no headings. Consider adding some level of hierarchy, e.g. via headings.”)
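To make the shape of that nudge concrete, here is a minimal sketch of the check it might run. This is purely illustrative on my part; the function name and thresholds are invented, not an existing LW feature.

```typescript
// Rough sketch of the nudge described above: suggest adding structure when a
// long draft has few paragraphs and no headings. Thresholds are arbitrary.
function structureNudge(draft: string): string | null {
  const chars = draft.length;
  const paragraphs = draft.split(/\n\s*\n/).filter(p => p.trim().length > 0).length;
  const hasHeadings = /^#{1,6}\s/m.test(draft); // markdown-style headings
  if (chars > 3000 && paragraphs <= 3 && !hasHeadings) {
    return `This comment / post is ${chars} chars but consists of only ${paragraphs} ` +
      `paragraphs and uses no headings. Consider adding some level of hierarchy, e.g. via headings.`;
  }
  return null; // enough structure already, or short enough not to matter
}
```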
Yes. I’m trying to remind people why they should care.
You’re fighting fire with fire. It’s hard for me to imagine a single standard that would permit this post as acceptably LessWrongian and also deem the posts you linked to as unacceptable.
Here’s an outline of the tactic that I see as common to both.
1. You have a goal X.
2. To achieve X, you need to coordinate people to do Y.
3. The easiest way to coordinate people to do Y is to use exhortatory rhetoric and pull social strings, while complaining when your opponent does the same thing.
4. You can justify (3) by appealing to a combination of the importance of X and of your lack of energy or desire not to be perfectionistic, while insisting that your opponents rise to a higher standard, and denying that you’re doing any of this—or introspecting for a while and then shrugging and doing it anyway.
5. If you can convince others to agree with you on the overriding importance of X (using rhetoric and social strings), then suddenly the possibly offensive moral odor associated with the tactic disappears. After all, everybody (who counts) agrees with you, and it’s not manipulative to just say what everybody (who counts) was thinking anyway, right?
“Trying to remind people why they should care” is an example of step (3).
This isn’t straightforwardly wrong. It’s just a way to coordinate people, one with certain advantages and disadvantages relative to other coordination mechanisms, and one that is especially tractable for certain goals in certain contexts.
In this case, it seems like one of your goals is to effect a site culture in which this tactic self-destructs. The site’s culture is just so stinkin’ rational that step (3) gets nipped in the bud, every time.
This is the tension I feel in reading your post. On the one hand, I recognize that it’s allowing itself an exception to the ban it advocates on this 5-step tactic in the service of expunging the 5-step tactic from LessWrong. On the other hand, it’s not clear to me whether, if I agreed with you, I would criticize this post, or join forces with it.
A successful characterization of a problem generally suggests a solution. My confusion about the correct response to your characterization therefore leads me to fear your characterization is incorrect. Let me offer an alternative characterization.
Perhaps we are dealing with a problem of market size.
In a very small market, there is little ability to specialize. Poverty is therefore rampant. Everybody has to focus on providing themselves with the basics, and has to do most things themselves. Trade is also rare because the economy lacks the infrastructure to facilitate trades. So nobody has much of anything, and it’s very hard to invest.
What if we think about a movement and online community like this as a market? In a nice big rationality market, we’d have plenty of attention to allocate to all the many things that need doing. We’d have proofreaders galore, and lots of post writers. There’d be lots of money sloshing around for bounties on posts, and plenty of people thinking about how to get this just right. There’d be plenty of comments, critical, supportive, creative, and extensive. Comments would be such an important feature of the discourse surrounding a post that there’d be heavy demand for improved commenting infrastructure, for moderation and value-extraction from the comments. There’d be all kinds of curation going on, and ways to allocate rewards and support the development of writers on the website.
In theory, all we’d need to generate a thriving rationality market like this is plenty of time, and a genuine (though not necessarily exclusive) demand for rationality. It would self-organize pretty naturally through some combination of barter, social exchange, and literal cash payments for various research, writing, editing, teaching, and moderation services.
The problem is the slow pace at which this is emerging on its own, and the threat of starvation in the meantime. Let’s even get a little bit ecological. A small LW will go through random fluctuations in activity and participation. If it gets too small, it could easily dip into an irrecoverable lack of participation. And the smaller the site is, the harder it will be to attain the market size necessary to permit specialization, since any participant will have to do most everything for themselves.
Under this frame, then, your post is advocating for some things that seem useful and some that seem harmful. You give lots of ideas for jobs that seem helpful (in some form) in a LW economy big enough to support such specialized labor.
On the other hand, you advocate an increase in regulation, which will come with an inevitable shrinking of the population. I fear that this will have the opposite of the effect you intend. Rather than making the site hospitable for a resurgence of “true rationalists,” you will create the conditions for starvation by reducing our already-small market still further. Even the truest of rationalists will have a hard time taking care of their rationality requirements when the population of the website has shrunk to that extent.
Posts just won’t get written. Comments won’t be posted. People won’t take risks. People won’t improve. They’ll find themselves frustrated by nitpicks, and stop participating. A handful of people will remain for a while, glorying in the victory of their purge, and then they’ll quit too after a few months or a few years once that gets boring.
I advocate instead that you trust that everybody on this website is an imperfect rationalist with a genuine preference for this elusive thing called “rationality.” Allow a thousand seeds to be planted. Some will bloom. Gradually, the rationalist economy will grow, and you’ll see the results you desire without needing much in the way of governance or intervention. And when we have need of governance, we’ll be able to support it better.
It’s always hard, I think, for activists to accept that the people and goals they care about can and will largely take care of themselves without the activist’s help.
I am not fighting fire with fire. I request that you explicitly retract the assertion, given that it is both a) objectively false, and b) part of a class of utterances that are in general false far more often than they are true, and which tend to make it harder to think and see clearly in exactly the way I’m gesturing at with the OP.
Some statements that would not have been false:
“This seems to me like it’s basically fighting fire with fire.”
“I believe that, in practice, this ends up being fighting fire with fire.”
“I’m having a hard time summing this up as anything other than ‘fighting fire with fire.’”
...and I reiterate that those subtle differences make a substantial difference in people’s general ability to do the collaborative truth-seeking thing, and are in many ways precisely what I’m arguing for above.
I clearly outline what I am identifying as “fire” in the above post. I have one list which is things brains do wrong, and another list which lays out some “don’ts” that roughly correspond to those problems.
I am violating none of those don’ts, and, in my post, exhibiting none of those wrongbrains. I in fact worked quite hard to make sure that the wrongbrains did not creep in, and abandoned a draft that was three-quarters complete because it was based on one.
In many ways, the above essay is an explicit appeal that people not fight fire with fire. It identifies places where people abandon their principles in pursuit of some goal or other, and says “please don’t, even if this leads to local victory.”
You’re fighting fire with fire. It’s hard for me to imagine a single standard that would permit this post as acceptably LessWrongian and also deem the posts you linked to as unacceptable.
It’s the one that I laid out in my post. If you find it confusing, you can ask a clarifying question. If one of the examples seems wrong or backwards, you can challenge it. I appreciate the fact that you hedged your statement by saying that you have a hard time imagining, which is better than in the previous sentence, where you simply declared that I was doing a thing (which I wasn’t), rather than saying that it seemed to you like X or felt like X or you thought it was X for Y and Z reasons.
The standard is: don’t violate the straightforward list of rationality 101 principles and practices that we have a giant canon of knowledge and agreement upon. There’s a separate substandard that goes something like “don’t use dark-artsy persuasion; don’t yank people around by their emotions in ways they can’t see and interact with; don’t deceive them by saying technically true things which you know will result in a false interpretation, etc.”
I’m adhering to that standard, above.
There’s fallacy-of-the-grey in your rounding off of “here’s a post where the author acknowledged in their end notes that they weren’t quite up to the standard they are advocating” to “you’re fighting fire with fire.” There’s also fallacy-of-the-grey in pretending that there’s only one kind of “fire.”
I strongly claim that I am, in general, not frequently in violation of any of the principles that I have explicitly endorsed, and that if it seems I’m holding others to a higher standard than I’m holding myself, it’s likely that the standard I’m holding has been misunderstood. I also believe that people who are trying to catch me when I’m actually failing to live up are on my side and doing me a favor, and though I’m not perfect and sometimes it takes me a second to get past the flinch and access the gratitude, I think I’m credible about acting in accordance with that overall.
1. You have a goal X.
2. To achieve X, you need to coordinate people to do Y.
3. The easiest way to coordinate people to do Y is to use exhortatory rhetoric and pull social strings, while complaining when your opponent does the same thing.
4. You can justify (3) by appealing to a combination of the importance of X and of your lack of energy or desire not to be perfectionistic, while insisting that your opponents rise to a higher standard, and denying that you’re doing any of this—or introspecting for a while and then shrugging and doing it anyway.
5. If you can convince others to agree with you on the overriding importance of X (using rhetoric and social strings), then suddenly the possibly offensive moral odor associated with the tactic disappears. After all, everybody (who counts) agrees with you, and it’s not manipulative to just say what everybody (who counts) was thinking anyway, right?
I did not “use exhortatory rhetoric and pull social strings.” I should walk back my mild “yeah fair” in response to the earlier comment, since you’re taking it and adversarially running with it.
If you read the OP and do not choose to let your brain project all over it, what you see is, straightforwardly, a mass of claims about how I feel, how I think, what I believe, and what I think should be the case.
I explicitly underscore that I think little details matter, and second-to-second stuff counts, so if you’re going to dismiss all of the “I” statements as being mere window dressing or something (I’m not sure that’s what you’re doing, but it seems like something like that is necessary, to pretend that they weren’t omnipresent in what I wrote), you need to do so explicitly. You need to argue for them not-mattering; you can’t just jump straight to ignoring them, and pretending that I was propagandizing.
I also did not complain about other people using exhortatory rhetoric and pulling social strings. That’s a strawman of my point. I complained about people a) letting their standards on what’s sufficiently justified to say slip, when it was convenient, and b) en-masse upvoting and otherwise tolerating other people doing so.
I gave specifics; I gave a model. Where that model wasn’t clear, I offered to go in-depth on more examples (an offer that I haven’t yet seen anyone take me up on, though I’m postponing looking at some other comments while I reply to this one).
I thoroughly and categorically reject (3) as being anywhere near a summary of what I’m doing above, and (4) is … well, I would say “you’re being an uncharitable asshole, here,” except that what’s actually true and defensible and prosocial is to note that I am having a strongly negative emotional reaction to it, and to separately note that you’re not passing my ITT and you’re impugning my motives and in general you’re hand-waving away the part where you have actual reasons for the attempt to delegitimize and undermine both me and my points.
In this case, it seems like one of your goals is to effect a site culture in which this tactic self-destructs. The site’s culture is just so stinkin’ rational that step (3) gets nipped in the bud, every time.
I recognize that it’s allowing itself an exception to the ban it advocates on this 5-step tactic in the service of expunging the 5-step tactic from LessWrong.
No. You’ve failed to pass my ITT, you’ve failed to understand my point, and as you drift further and further from what I was actually trying to say, it gets harder and harder to address it line-by-line because I keep being unable to bring things back around.
I’m not trying to cause appeals-to-emotion to disappear. I’m not trying to cause strong feelings oriented on one’s values to be outlawed. I’m trying to cause people to run checks, and to not sacrifice their long-term goals for the sake of short-term point-scoring.
I definitely do not believe that this post, as written, would not survive or belong on the better version of LessWrong I’m envisioning (setting aside the fact that it wouldn’t be necessary there). I’m not trying to effect a site culture where the tactic of the OP self-destructs, and I’m not sure where that belief came from. I just believe that, in the steel LW, this post would qualify as mediocre, instead of decent.
The place where I’m most able to engage with you is:
On the other hand, you advocate an increase in regulation, which will come with an inevitable shrinking of the population. I fear that this will have the opposite of the effect you intend. Rather than making the site hospitable for a resurgence of “true rationalists,” you will create the conditions for starvation by reducing our already-small market still further. Even the truest of rationalists will have a hard time taking care of their rationality requirements when the population of the website has shrunk to that extent.
Posts just won’t get written. Comments won’t be posted. People won’t take risks. People won’t improve. They’ll find themselves frustrated by nitpicks, and stop participating. A handful of people will remain for a while, glorying in the victory of their purge, and then they’ll quit too after a few months or a few years once that gets boring.
Here, you assert some things that are, in fact, only hypotheses. They’re certainly valid hypotheses, to be clear. But it seems to me that you’re trying to shift the conversation onto the level of competing stories, as if what’s true is either “Duncan’s optimistic frame, in which the bad people leave and the good people stay” or “the pessimistic frame, in which the optimistic frame is naive and the site just dies.”
This is an antisocial move, on my post where I’m specifically trying to get people to stop pulling this kind of crap.
Raise your hypothesis. Argue that it’s another possible outcome. Propose tests or lines of reasoning that help us to start figuring out which model is a better match for the territory, and what each is made of, and how we might synthesize them.
I wrote several hundred words on a model of evaporative cooling, and how it drives social change. Your response boils down to “no u.” It’s full of bald assertions. It’s lacking in epistemic humility. It’s exhausting in all the ways that you seem to be referring to when you point at “frustrated by nitpicks, and stop participating.” The only reason I engaged with it to this degree is that it’s an excellent example of the problem.
I would like to register that I think this is an excellent comment, and in fact caused me to downvote the grandparent where I would otherwise have neutral or upvoted. (This is not the sort of observation I would ordinarily feel the need to point out, but in this case it seemed rather appropriate to do so, given the context.)
I had literally the exact same experience before I read your comment dxu.
I imagine it’s likely that Duncan could sort of burn out on being able to do this [1] since it’s pretty thankless, difficult cognitive work. [2]
But it’s really insightful to watch. I do think he could potentially tune up [3] the diplomatic savvy a bit [4], since, while his arguments are quite sound [5], I think he probably is sometimes making people feel a little bit stupid via his tone. [6]
Nevertheless, it’s really fascinating to read and observe. I feel vaguely like I’m getting smarter.
###
Rigor for the hell of it [7]:
[1] Hedged hypothesis.
[2] Two-premise assertion with a slightly subjective basis, but I think a true one.
[3] Elaborated on a slightly different but related point further in my comment below to him with an example.
[4] Vague but I think acceptably so. To elaborate, I mean making one’s ideas palatable to the person one is disagreeing with, even when in disagreement. Note: I’m aware it doesn’t acknowledge the cost of doing so and running that filter. Note also: I think, with skill and practice, this can be done without sacrificing the content of the message. It is almost always more time-consuming though, in my experience.
[5] There’s some subjective judgments and utility function stuff going on, which is subjective naturally, but his core factual arguments, premises, and analyses basically all look correct to me.
[6] Hedged hypothesis. Note: doesn’t make a judgment either way as to whether it’s worth it or not.
[7] Added after writing to double-check I’m playing by the rules and clear up ambiguity. “For the hell of it” is just random stylishness and can be safely mentally deleted.
(Or perhaps, if I introspect closely, a way to not be committed to this level of rigor all the time. As stated below though, minor stylistic details aside, I’m always grateful whenever a member of a community attempts to encourage raising and preserving high standards.)
Nope. False, and furthermore Kafkaesque; there is no defensible reading of either the post or my subsequent commentary that justifies this line, and that alone being up-front and framing the rest of what you have to say is extremely bad, and a straightforward example of the problem.
It is a nuance-destroying move, a rounding-off move, a making-it-harder-for-people-to-see-and-think-clearly move, an implanting-falsehoods move. Strong downvote as I compose a response to the rest.
Given that there is lots of “let’s comment on what things about a comment are good and which things are bad” going on in this thread, I will make more explicit a thing that I would have usually left implicit:
My current sense is that this comment maybe was better to write than no comment, given the dynamics of the situation, but I think the outcome would have been better if you had waited to write your long comment. This comment felt like it kicked up the heat a bunch, and while I think that was better than just leaving things unresponded, my sense is the discussion overall would have gone better if you had just written your longer comment.
In response to this, I’ll bow out (from this subthread) for a minimum period of 3 days. (This is in accordance with a generally wise policy I’m trying to adopt.)
EDIT: I thought Oli was responding to a different thing (I replied to this from the sidebar). I was already planning not to add anything substantive here for a few days. I do note, though, that even if two people both unproductively turn up the heat, one after the other, in my culture it still makes a difference which one broke peace first.
The first 20 pages or so are almost a must-read in my opinion.
Highly recommended, for you in particular.
A Google search with filetype:pdf will find you a copy. You can skim it fast (no need to close-read it) and you’ll get the gems.
Edit for exhortation: I think you’ll get a whole lot out of it, such that I’d stake some “Sebastian has good judgment” points on it, which you can subtract from my good-judgment rep if I’m wrong. Seriously, please check it out. It’s fast and worth it.
This response I would characterize as steps (3) and (4) of the 5-step tactic I described. You are using more fiery rhetoric (“Kafkaesque,” “extremely bad,” “implanting falsehoods”), while denying that this is what you are doing.
I am not going to up-vote or down-vote you. I will read and consider your next response here, but only that response, and only once. I will read no other comments on this post, and will not re-read the post itself unless it becomes necessary.
I infer from your response that from your perspective, my comment here, and me by extension, are in the bin of content and participants you’d like to see less or none of on this website. I want to assure you that your response here in no way will affect my participation on the rest of this website.
Your strategy of concentration of force only works if other people are impacted by that force. As far as your critical comment here, as the Black Knight said, I’ve known worse.
If you should continue this project and attack me outside of this post, I am precommitting now to simply ignoring you, while also not engaging in any sort of comment or attack on your character to others. I will evaluate your non-activist posts the same way I evaluate anything else on this website. So just be aware that from now on, any comment of yours that strikes me as having a tone similar to this one of yours will meet with stony silence from me. I will take steps to mitigate any effect it might have on my participation via its emotional effect. Once I notice that it has a similar rhetorical character, I will stop reading it. I am specifically neutralizing the effect of this particular activist campaign of yours on my thoughts and behavior.
Jumping in here in what I hope is a prosocial way. I assert as a hypothesis that the two of you currently disagree about what level of meta the conversation is/should-be at, and each feels that the other has an obligation to meet them at their level, and this has turned up the heat a lot.
Maybe there is a more oblique angle than this currently heated one?
It’s prosocial. For starters, AllAmericanBreakfast’s “let’s not engage,” though itself stated in a kind of hot way, is good advice for me, too. I’m going to step aside from this thread for at least three days, and if there’s something good to come back to, I will try to do so.
BTW this inspired an edit in the “two kinds of persons” spot specifically, and I think the essay is much stronger for it, and I strongly appreciate you for highlighting your confusion there.
EDIT: and also in the author’s note at the bottom.
As for the third link in Duncan’s quote, it’s pointing at an anonymous comment supposedly by a former CFAR employee, which was strongly critical of CFAR. But multiple CFAR employees replied and did not have the same impressions of their employer. Which would have been a chance for dialogue and truthseeking, except… that anonymous commenter never followed up to reply, so we ended up with a comment thread of 41 comments which started with those anonymous and unsubstantiated claims and never got a proper resolution (and yet that original comment is strongly upvoted).
Does that make things a bit clearer? In all those cases Duncan (as I understand him) is pointing at things where the LW culture fell far short of optimal; he expects us to do better. (EDIT: Specifically, and to circle back on the Leverage stuff: He expects us to be truthseeking period, to have the same standards of rigor both for critics and defenders, etc. I think he worries that the culture here is currently too happy to upvote anything that’s critical (e.g. to encourage the brave act of speaking out), without extending the same courtesy to those who would speak out in defense of the thing being criticized. Solve for the equilibrium, and the consequences are not good.)
Personally I’m not so sure to which extent “better culture” is the solution (as I am skeptical of the feasibility of anything which requires time and energy and willpower), but have posted several suggestions for how “better software” could help in specific situations (e.g. mods being able to put a separate disclaimer above sufficiently controversial / disputed posts).
Thanks very much for taking the time to include this paragraph; it’s doing precisely the good thing. It helps my brain not e.g. slide into a useless and unnecessary defensiveness or round you off to something you’re not trying to convey.
That’s not, in fact, my chief proposition. I do claim that something-like-the-mass-of-users is doing something-resembling-canceling-Leverage (such that e.g. if I were to propose porting over some specific piece of Leverage tech to LW or an EA org’s internal culture, people would panic in roughly the same way people panic about the concept of eugenics).
But that’s an instance of what I was hoping to talk about, not the main point, which is why I decided not to spend a ton of time digging into all of the specific examples.
In short: people who think that it’s important to stick to the rationality 101 basics even when it’s inconvenient, versus those willing to abandon them (and upvote others abandoning them).
Yes. I’m trying to remind people why they should care. Note, though, that in combination with Concentration of Force, it’s saying a much more tightly defined and specific thing—”here’s a concept, and I’d like to apply that concept to this social domain.”
EDIT: in the discussion below, some people have seemed to take this as an admission of sorts, as opposed to a “sure, close enough.” The words “exhortatory” and “rhetoric” are labels, each of which can cover a wide range of space; something can be a valid match for one of those labels yet not at all central.
I was acknowledging “sure, there’s some degree to which this post could be fairly described as exhortatory or rhetoric.” I was not agreeing with ”...and therefore any and all complaints one has about ‘exhortation’ or ‘rhetoric’ are fair to apply here.” I don’t think supposedlyfun was trying to pull a motte-and-bailey or a fallacy-of-the-grey; that’s why I replied cooperatively. Others, though, do seem to me like they are trying to, and I am not a fan.
I did elaborate on one. Would you be willing to choose another from the linked examples? The one that’s the most confusing or least apparently objectionable? I don’t want to take hours and hours, but I’m certainly willing to go deep on at least a couple.
I spent 15 minutes re-reading the thread underneath orthonormal’s comment to try to put myself in your head. I think maybe I succeeded, so here goes, but from a person whose job involves persuading people, it’s Not Optimal For Your Argument that I had to do this to engage with your model here, and it’s potentially wasteful if I’ve failed at modeling you.
I read both of the comments discussed below, at the time I was following the original post and comments, but did not vote on either.
***
orthonormal P1: Anders seemed like a cult leader/wannabe based on my first impressions, and I willingly incurred social cost to communicate this to others
orthonormal P2 [which I inferred using the Principle of Charity]: Most of the time, people who immediately come across as cult leaders are trying to start a cult
Duncan P1: It’s bad when LW upvotes comments with very thin epistemic rigor
Duncan P2: This comment has very thin epistemic rigor because it’s based on a few brief conversations
Gloss: I don’t necessarily agree with your P2. It’s not robust, but nor is it thin; if true, it’s one person’s statement that, based on admittedly limited evidence, they had a high degree of confidence that Anders wanted to be a cult leader. I can review orthonormal’s post history to conclude that ze is a generally sensible person who writes as though ze buys into LW epistemics, and is also probably known by name to various people on the site, meaning if Anders wanted to sue zir for defamation, Anders could (another social and financial cost that orthonormal is incurring). Conditional on Anders not being a cult leader, I would be mildly surprised if orthonormal thought Anders was a cult leader/wannabe.
Also, this comment—which meets your epistemic standards, right? If so, did it cause you to update on the “Leverage is being canceled unfairly” idea?
***
Matt P1: I spent hundreds of hours talking to Anders
Matt P2: If he were a cult leader/wannabe, I would have noticed
Duncan P1: It’s bad when LW doesn’t upvote comments with good epistemic rigor
Duncan P2: This comment has good epistemic rigor because Matt has way more evidence than orthonormal
Gloss: [Edit: Upon reflection, I have deleted this paragraph. My commentary is not germane to the issue that Duncan and I are debating.]
***
The karma score disparity is currently 48 on 39 votes, to 5 on 26 votes.
Given my thought process above, which of the comments should I have strongly upvoted, weakly upvoted, done nothing to, weakly downvoted, or strongly downvoted, on your vision of LW?
Or: which parts of my thought process are inimical to your vision of LW?
***
If it helps you calibrate your response, if any, I spent about 45 minutes researching, conceptualizing, drafting, and editing this comment.
Thank you for the effort! Strong upvoted.
Quick point to get out of the way: re: the comment that you thought would likely meet my standards, yes, it does; when I hovered over it I saw that I had already (weak) upvoted it.
Here’s my attempt to rewrite orthonormal’s first comment; what I would have said in orthonormal’s shoes, if I were trying to say what I think orthonormal is trying to say.
orthonormal P1: Anders seemed like a cult leader/wannabe based on my first impressions, and I willingly incurred social cost to communicate this to others (i.e. this wasn’t just idle hostility)
orthonormal P2 [which I inferred using the Principle of Charity]: This is relevant because, separate from the question of whether my detectors are accurate in an absolute sense, they’re more accurate than whatever it is all of you are doing
Duncan P1: It’s bad when LW upvotes comments that aren’t transparent about what they’re trying to accomplish and via what channels they’re trying to accomplish it
Duncan P2: orthonormal’s original comment is somewhat bad in this way; it’s owning its content on the surface but the implicature is where most of the power lies; the comment does not on its face say why it exists or what it’s trying to do in a way that an autistic ten-year-old could parse (evidence: I felt myself becoming sort of fuzzy/foggy and confused, reading it). As written, I think its main goal is to say “I told you so and also I’m a better judge of things than all of you”? But it doesn’t just come right out and say that and then pay for it, the way that I say in the OP above that I’m often smarter than other people in the room (along with an acknowledgement that there’s a cost for saying that sort of thing).
I do think that the original version obfuscated some important stuff (e.g. there’s a kind of motte-and-bailey at the heart of “we met at our CFAR workshop”; that could easily imply “we spent fifteen intensely intimate hours in one another’s company over four days” or “we spoke for five minutes and then were in the same room for a couple of classes”). That’s part of it.
But my concern is more about the delta between the comments’ reception. I honestly don’t know how to cause individuals voting in a mass to get comments in the right relative positions, but I think orthonormal’s being at 48 while Matt’s is at 5 is a sign of something wrong.
I think orthonormal’s belongs at something like 20, and Matt’s belongs at something like 40. I voted according to a policy that attempts to cause that outcome, rather than weak upvoting orthonormal’s, as I otherwise would have (its strengths outweigh its flaws and I do think it was a positive contribution).
In a world where lots of LessWrongers are tracking the fuzziness and obfuscation thing, orthonormal’s comment gets mostly a bunch of small upvotes, and Matt’s gets mostly a bunch of strong upvotes, and they both end up in positive territory but with a clear (ugh) “status differential” that signals what types of contributions we want to more strongly reward.
As for Matt’s comment:
Matt’s comment in part deserves the strong upvote because it’s a high-effort, lengthy comment that tries pretty hard to go slowly and tease apart subtle distinctions and own where it’s making guesses and so forth; agnostic of its content my prior a third of the way through was “this will ultimately deserve strong approval.”
I don’t think most of Matt’s comment was on the object level, i.e. comments about Anders and his likelihood of being a cult leader, wannabe or otherwise.
I think that it was misconstrued as just trying to say “pshhh, no!” which is why it hovers so close to zero.
My read of Matt’s comment:
Matt P1: It’s hard to tell what Ryan and orthonormal are doing
Matt P2: There’s a difference between how I infer LWers are reading these comments based on the votes, and how I think LWers ought to interpret them
Matt P3: Here’s how I interpret them
Matt P4: Here’s a bunch of pre-validation of reasons why I might be wrong about orthonormal, both because I actually might and because I’m worried about being misinterpreted and want to signal some uncertainty/humility here.
Matt P5: Ryan’s anecdote seems consistent, to me, with a joke of a form that Geoff Anders makes frequently.
Matt P6: My own personal take is that Geoff is not a cult leader and that the evidence provided by orthonormal and Ryan should be considered lesser than mine (and here’s why)
Matt P7-9: [various disclaimers and hedges]
Duncan P1: This comment is good because of the information it presents
Duncan P2: This comment is good because of the way it presents that information, and the way it attempts to make space for and treat well the previous comments in the chain
Duncan P3: This comment is good because it was constructed with substantial effort
Duncan P4: It’s bad that comments which are good along three different axes, and bad along none as far as I can see, are ranked way below comments that are much worse along those three axes and also have other flaws (the unclear motive thing).
I don’t disagree with either of your glosses, but most notably they missed the above axes. Like, based on your good-faith best-guess as to what I was thinking, I agree with your disagreements with that; your pushback against hypothesized-Duncan who’s dinging orthonormal for epistemic thinness is good pushback.
But I think my version of orthonormal’s comment is stronger. I don’t think their original comment was not-worth-writing, such that I’d say “don’t contribute if you’re not going to put forth as much effort as I did in my rewrite”; I just think it was less worth writing than the rewrite. I think the rewrite gives a lot more, and … hypnotizes? … a lot less.
As for your gloss on Matt’s comment specifically, I just straightforwardly like it; if it were its own reply and I saw it when revisiting the thread I would weak or strong upvote it. I think it does exactly the sane-itizing light-shining that I’m pulling for, and that, it feels to me, was only sporadically (and not reliably) present throughout the discussions.
I took however many minutes it’s been since you posted your reply to write this. 30-60?
Thanks, supposedlyfun, for pointing me to this thread.
I think it’s important to distinguish my behavior in writing the comment (which was emotive rather than optimized—it would even have been in my own case’s favor to point out that the 2012 workshop was a weeklong experiment with lots of unstructured time, rather than the weekend that CFAR later settled on, or to explain that his CoZE idea was to recruit teens to meddle with the other participants’ CoZE) from the behavior of people upvoting the comment.
I expect that many of the upvotes were not of the form “this is a good comment on the meta level” so much as “SOMEBODY ELSE SAW THE THING ALL ALONG, I WORRIED IT WAS JUST ME”.
This seems true to me. I’m also feeling a little bit insecure or something and wanting to reiterate that I think that particular comment was a net-positive addition and in my vision of LessWrong would have been positively upvoted.
Just as it’s important to separate the author of a comment from the votes that comment gets (which they have no control over), I want to separate a claim like “this being in positive territory is bad” (which I do not believe) from “the contrast between the total popularity of this and that is bad.”
I’m curious whether I actually passed your ITT with the rewrite attempt.
Thanks for asking about the ITT.
I think that if I put a more measured version of myself back into that comment, it has one key difference from your version.
“Pay attention to me and people like me” is a status claim rather than a useful model.
I’d have said “pay attention to a person who incurred social costs by loudly predicting one later-confirmed bad actor, when they incur social costs by loudly predicting another”.
(My denouncing of Geoff drove a wedge between me and several friends, including my then-best friend; my denouncing of the other one drove a wedge between me and my then-wife. Obviously those rifts had much to do with how I handled those relationships, but clearly it wasn’t idle talk from me.)
Otherwise, I think the content of your ITT is about right.
(The emotional tone is off, even after translating from Duncan-speak to me-speak, but that may not be worth going into.)
For the record, I personally count myself 2 for 2.5 on precision. (I got a bad vibe from a third person, but didn’t go around loudly making it known; and they’ve proven to be not a trustworthy person but not nearly as dangerous as I view the other two. I’ll accordingly not name them.)
I’m going to take a stab at cruxing here.
Whether it’s better for the LW community when comments explicitly state a reasonable amount of the epistemic hedging that they’re doing.
Out of all the things you would have added to orthonormal’s comment, the only one that I didn’t read at the time as explicit or implicit in zir comment was, “Not as anything definitive, but if I do an honest scan over the past decade, I feel like I’m batting … 3/5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1/5”. I agree it would be nice if people gave more information about their own calibration where available. I don’t know whether it was available to orthonormal.
As for the rest, I’m sticking that at the end of this comment as a sort of appendix.
If I’m right about the crux, that is totally not in the set of Things That I Thought You Might Have Been Saying after reading the original post. Re-reading the original post now, I don’t see how I could have figured out that this is what our actual disagreement was.
I notice that I am surprised that {the norm of how explicit a comment needs to be regarding its own epistemic standard} prompted you to write the original post. Honestly, the intensity of the post seems disproportionate to the size of the disagreement, and also to the likelihood that people are going to disagree with you to the point that they want to not be in a community with you anymore. I don’t feel like we need to fork anything based on the distance between our positions.
Why do you think the intensity scalars are so different between us?
***
The comment makes it clear that it is subjective experience. I wouldn’t expect ortho to add it if ze didn’t think it was relevant. People sharing their impressions of a situation to get at the truth, which seemed to be the point of the post and comments, just is wide-net data gathering mode.
I don’t expect ortho to remember the number of minutes from nine years ago.
I don’t expect ortho to remember the number of conversations since 2012, and if ze had inserted a specific number, I wouldn’t have attached much weight to it for that reason.
This is in there well enough that I don’t see any value in saying it with more words. Crux?
This is plausibly why ortho felt like adding zir experience, but there are other reasons ze might have had, and zir reason doesn’t really matter; to me, zir shared experience was just additional data.
This is in there well enough that I don’t see any value in saying it with more words. Crux?
“Hey maybe take a minute to listen to people like me” is implicit in the decision to share one’s experience. Crux?
See above.
I don’t think ortho would have shared zir experience if ze didn’t think zir interlocutors wanted to do better in the future, so I read this as implicit, and I think I would in any LW conversation. In fact, this sentence would have come across as bizarrely combative to me. Crux?
Hmmm, something has gone wrong. This is not the case, and I’m not sure what caused you to think it was the case.
“How explicit comments need to be regarding their own epistemic status” is a single star in the constellation of considerations that caused me to write the post. It’s one of the many ways in which I see people doing things that slightly decrease our collective ability to see what’s true, in a way that compounds negatively, where people might instead do things that slightly increase our collective ability, in a way that compounds positively.
But it’s in no way the central casus belli of the OP. The constellation is. So my answer to “Why do you think the intensity scalars are so different between us?” is “maybe they aren’t? I didn’t mean the thing you were surprised by.”
Here, I was pulling for the virtue of numeric specificity, which I think is generally understood on LW. I’m reminded of the time that some researchers investigated what various people meant by the phrase “a very real chance,” and found that at least one of them meant 20% and at least one of them meant 80% (which are opposites).
It’s true that numbers aren’t super reliable, but even estimated/ballpark numbers (you’ll note I wrote the phrase “as many as” and imagined ortho stating a ceiling) are much better for collective truth-tracking than wide-open vague phrases that allow people with very different interpretations to be equally confident in those interpretations. The goal, after all, at least in my view, is to help us narrow down the set of possible worlds consistent with observation. To provide data that distinguishes between possibilities.
True. (I reiterate, feeling a smidge defensive, that I’ve said more than once that the comment was net-positive as written, and so don’t wish to have to defend a claim like “it absolutely should have been different in this way!” That’s not a claim I’m making. I’m making the much weaker claim that my rewrite was better. Not that the original was insufficient.)
The thing that I’m pulling for, with the greater explicitness about its subjectivity …
Look, there’s this thing where sometimes people try to tell each other that something is okay. Like, “it’s okay if you get mad at me.”
Which is really weird, if you interpret it as them trying to give the other person permission to be mad.
But I think that’s usually not quite what’s happening? Instead, I think the speaker is usually thinking something along the lines of:
Gosh, in this situation, anger feels pretty valid, but there’s not universal agreement on that point—many people would think that anger is not valid, or would try to penalize or shut down someone who got mad here, or point at their anger in a delegitimizing sort of way. I don’t want to do that, and I don’t want them to be holding back, out of a fear that I will do that. So I’m going to signal in advance something like, “I will not resist or punish your anger.” Their anger was going to be valid whether I recognized its validity or not, but I can reduce the pressure on them by removing the threat of retaliation if they choose to let their emotions fly.
Similarly, yes, it was obvious that the comment was subjective experience. But there’s nevertheless something valuable that happens when someone explicitly acknowledges that what they are about to say is subjective experience. It pre-validates someone else who wants to carefully distinguish between subjectivity and objectivity. It signals to them that you won’t take that as an attack, or an attempt to delegitimize your contribution. It makes it easier to see and think clearly, and it gives the other person some handles to grab onto. “I’m not one of those people who’s going to confuse their own subjective experience for objective fact, and you can tell because I took a second to speak the shibboleth.”
Again, I am not claiming, and have not at any point claimed, that ortho’s comment needed to do this. But I think it’s clearly stronger if it does.
I validate that. But I suspect you would not claim that their reason doesn’t matter at all, to anyone. And I suspect you would not claim that a substantial chunk of LWers aren’t guessing or intuiting or modeling or projecting reasons, and then responding based on the cardboard cutouts in their minds. The rewrite included more attempts to rule out everything else than the original comment did, because I think ruling out everything else is virtuous, and one of those moves that helps us track what’s going on, and reduces the fog and confusion and rate of misunderstandings.
I don’t think that’s true at all. I think that there are several different implications compatible with the act of posting ortho’s comment, and that “I’m suggesting that you weight my opinion more heavily based on me being right in this case” is only one such implication, and that it’s valuable to be specific about what you’re doing and why because other people don’t actually just “get” it. The illusion of transparency is a hell of a drug, and so is the typical mind fallacy. Both when you’re writing, and assume that people will just magically know what you’re trying to accomplish, and when you’re reading, and assume that everyone else’s interpretation will be pretty close to your own.
Again, I am not claiming, and have not at any point claimed, that ortho’s comment needed to head off that sort of misunderstanding at the pass. But I think it’s clearly better if it does so.
I actually included that sentence because I felt like ortho’s original comment was intentionally combative (and a little bizarrely so), and that my rewrite had removed too much of its intentional heat to be a sufficiently accurate restatement. So I think we’re not in disagreement on that.
Understood: the comment-karma-disparity issue is, for you, a glaring example of a larger constellation.
Also understood: you and I have different preferences for explicitly stating underlying claims. I don’t think your position is unreasonable, just that it will lead to much longer comments, possibly at the cost of clarity and engagement. Striking that balance is Hard.
I think we’ve drilled as far down as is productive on my concerns with the text of your post. I would like to see your follow-up post on the entire constellation, with the rigor customary here. You could definitely persuade me. I maybe was just not part of the target audience for your post.
(Something genuinely amusing, given the context, about the above being at 3 points out of 2 votes after four hours, compared to its parent being at 30 points out of 7 votes after five.)
I have an alternative and almost orthogonal interpretation for why the karma scores are the way they are.
Both in your orthonormal-Matt example, and now in this meta-example, the shorter original comments required less context to understand and got more upvotes, while the long, meandering, detail-oriented, high-context responses were hardly even read by anyone.
This makes perfect sense to me—there’s a maximum comment length after which I get a strong urge to just ignore / skim a comment (which I initially did with your response here; and I never took the time to read Matt’s comments, though I also didn’t vote on orthonormal’s comment one way or another, nor vote in the jessicata post much at all), and I would be astonished if that only happened to me.
Also think about how people see these comments in the first place. Probably a significant chunk comes from people browsing the comment feed on the LW front page, and it makes perfect sense to scroll past a long sub-sub-sub-comment that might not even be relevant, and that you can’t understand without context, anyway.
So from my perspective, high-effort, high-context, lengthy sub-comments intrinsically incur a large attention / visibility (and therefore karma) penalty. Things like conciseness are also virtues, and if you don’t consider that in your model of “good along three different axes, and bad along none as far as I can see”, then that model is incomplete.
(Also consider things like: How much time do you think the average reader spends on LW; what would be a good amount of time, relative to their other options; would you prefer a culture where hundreds of people take the opportunity cost to read sub-sub-sub-comments over one where they don’t; also people vary enormously in their reading speed; etc.)
Somewhat related: my post in this thread on some of the effects of the existing LW karma system. If we grant the above, one remaining problem is that the original orthonormal comment was highly upvoted but looked worse over time.
First, some off the cuff impressions of matt’s post (in the interest of data gathering):
In the initial thread I believe that I read the first paragraph of matt’s comment, decided I would not get much out of it, and stopped reading without voting.
Upon revisiting the thread and reading matt’s comment in full, I find it difficult to understand and do not believe I would be able to summarize or remember its main points now, about 15 minutes after the fact.
This seems somewhat interesting to test, so here is my summary from memory. After this I’ll reread matt’s post and compare what I thought it said upon first reading with what I think it says upon a second closer reading:
Commentary before re-reading: I expect that I missed a lot, since it was a long post and it did not stick in my mind particularly well. I also remember a lot of hedging that confused me, and points that went into parentheticals within parentheticals. These parentheticals were long enough that I remember losing track of what point was being made. I also may have confabulated arguments in this thread about upvotes and some from matt’s post.
I wanted to keep the summary “pure” in the sense that it is a genuine recollection without re-reading, but for clarity [person] is orthonormal and [commenter to person] is RyanCarey.
Second attempt at summarizing while flipping back and forth between editor and matt’s comment:
Editing that summary to be much more concise:
I notice that as I am re-reading matt’s post, I expect that the potential reading of orthonormal’s comment that he presents at the beginning (a reading that I find uncharitable) is in fact matt’s own reading. But he doesn’t actually say this outright. Instead he says “An available interpretation of orthonormal’s comment is...”. Indeed, I initially had an author’s note in the summary reflecting that I was unsure whether “an available interpretation” was matt’s interpretation. It is only much later (inside a parenthetical) that he says “I want to note that while readers may react negatively to me characterising orthonormal’s behaviour as ‘hostile gossip’...” to indicate that the uncharitable reading is in fact his own.
Matt’s comment also included some passages that I read as sneering.
I would have preferred his comment to start small with some questions about orthonormal’s experience rather than immediately accuse them of hostile gossip. For instance, matt might have asked about the extent of orthonormal’s contact with Geoff, how confident orthonormal is that Geoff is a cult leader, and whether orthonormal updated against Geoff being a cult leader in light of their friends believing Geoff wasn’t a cult leader, etc. Instead, those questions are assumed to have answers that are unsupportive of orthonormal’s original point (the answers assumed in matt’s comment in order: very little contact, extremely confident, anti-updates in the direction of higher confidence). This seems like a central example of an uncharitable comment.
Overall I find matt’s comment difficult to understand after multiple readings, and uncharitable toward those he is conversing with, although I do value the data it adds to the conversation. I believe this lack of charity is part of why matt’s comment has not done well in terms of karma. I still have not voted on matt’s comment and do not believe I will. There are parts of it that are valuable, but it is uncharitable, and charity is a value I hold above most others. In cases like these, where parts of a comment are valuable and other parts are the sort of thing that I would rather see pruned from the gardens I spend my time in, I tend to withhold judgment.
How do my two summaries compare? I’m surprised by how close the first summary I gave was to the “much more concise” summary I gave later. I expected to have missed more, largely due to matt’s comment’s length. I also remember finding it distasteful, which I omitted from my summaries but likely stemmed from the lack of charity extended to orthonormal.
Do other readers find my summary, particularly my more concise summary, an accurate portrayal of matt’s comment? How would they react to that much more concise comment, as compared to matt’s comment?
Strong upvote for doing this process/experiment; this is outstanding and I separately appreciate the effort required.
I find your summary at least within-bounds, i.e. not fully ruled out by the words on the page. I obviously had a different impression, but I don’t think that it’s invalid to hold the interpretations and hypotheses that you do.
I particularly like and want to upvote the fact that you’re being clear and explicit about them being your interpretations and hypotheses; this is another LW-ish norm that is half-reliable and I would like to see fully reliable. Thanks for doing it.
To add one point:
When it comes to assessing whether a long comment or post is hard to read, quality and style of writing matters, too. SSC’s Nonfiction Writing Advice endlessly hammers home the point of dividing text into increasingly smaller chunks, and e.g. here’s one very long post by Eliezer that elicited multiple comments of the form “this was too boring to finish” (e.g. this one), some of which got alleviated merely by adding chapter breaks.
And since LW makes it trivial to add headings even to comments (e.g. I used headings here), I guess that’s one more criterion for me to judge long comments by.
(One could even imagine the LW site nudging long comments towards including stuff like headings. One could imagine a good version of a prompt like this: “This comment / post is >3k chars but consists of only 3 paragraphs and uses no headings. Consider adding some level of hierarchy, e.g. via headings.”)
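To make that concrete, here is a minimal sketch of what such a nudge check might look like, purely as illustration. The function names, thresholds, and the heading heuristic are my own inventions, not anything the LessWrong codebase actually does.

```typescript
// Hypothetical sketch of a "consider adding structure" nudge for long drafts.
// All thresholds and heuristics here are invented for illustration only.

interface DraftStats {
  chars: number;
  paragraphs: number;
  headings: number;
}

function analyzeDraft(markdown: string): DraftStats {
  // Treat blank-line-separated blocks as paragraphs.
  const blocks = markdown
    .split(/\n\s*\n/)
    .map((block) => block.trim())
    .filter((block) => block.length > 0);
  // Count blocks that start with a markdown heading marker.
  const headings = blocks.filter((block) => /^#{1,6}\s/.test(block)).length;
  return { chars: markdown.length, paragraphs: blocks.length, headings };
}

function structureNudge(markdown: string): string | null {
  const { chars, paragraphs, headings } = analyzeDraft(markdown);
  if (chars > 3000 && paragraphs <= 3 && headings === 0) {
    return (
      `This draft is ${chars} characters but has only ${paragraphs} paragraph(s) ` +
      "and no headings. Consider adding some level of hierarchy, e.g. via headings."
    );
  }
  return null; // Short or already-structured drafts get no prompt.
}
```

A long, unbroken wall of text would get the suggestion string back; anything short or already divided up would get null, so the nudge only fires in roughly the case described above.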
You’re fighting fire with fire. It’s hard for me to imagine a single standard that would permit this post as acceptably LessWrongian and also deem the posts you linked to as unacceptable.
Here’s an outline of the tactic that I see as common to both.
You have a goal X.
To achieve X, you need to coordinate people to do Y.
The easiest way to coordinate people to do Y is to use exhortatory rhetoric and pull social strings, while complaining when your opponent does the same thing.
You can justify (3) by appealing to a combination of the importance of X and of your lack of energy or desire not to be perfectionistic, while insisting that your opponents rise to a higher standard, and denying that you’re doing any of this—or introspecting for a while and then shrugging and doing it anyway.
If you can convince others to agree with you on the overriding importance of X (using rhetoric and social strings), then suddenly the possibly offensive moral odor associated with the tactic disappears. After all, everybody (who counts) agrees with you, and it’s not manipulative to just say what everybody (who counts) was thinking anyway, right?
“Trying to remind people why they should care” is an example of step (3).
This isn’t straightforwardly wrong. It’s just a way to coordinate people, one with certain advantages and disadvantages relative to other coordination mechanisms, and one that is especially tractable for certain goals in certain contexts.
In this case, it seems like one of your goals is to effect a site culture in which this tactic self-destructs. The site’s culture is just so stinkin’ rational that step (3) gets nipped in the bud, every time.
This is the tension I feel in reading your post. On the one hand, I recognize that it’s allowing itself an exception to the ban it advocates on this 5-step tactic in the service of expunging the 5-step tactic from LessWrong. On the other hand, it’s not clear to me whether, if I agreed with you, I would criticize this post, or join forces with it.
A successful characterization of a problem generally suggests a solution. My confusion about the correct response to your characterization therefore leads me to fear your characterization is incorrect. Let me offer an alternative characterization.
Perhaps we are dealing with a problem of market size.
In a very small market, there is little ability to specialize. Poverty is therefore rampant. Everybody has to focus on providing themselves with the basics, and has to do most things themselves. Trade is also rare because the economy lacks the infrastructure to facilitate trades. So nobody has much of anything, and it’s very hard to invest.
What if we think about a movement and online community like this as a market? In a nice big rationality market, we’d have plenty of attention to allocate to all the many things that need doing. We’d have proofreaders galore, and lots of post writers. There’d be lots of money sloshing around for bounties on posts, and plenty of people thinking about how to get this just right. There’d be plenty of comments, critical, supportive, creative, and extensive. Comments would be such an important feature of the discourse surrounding a post that there’d be heavy demand for improved commenting infrastructure, for moderation and value-extraction from the comments. There’d be all kinds of curation going on, and ways to allocate rewards and support the development of writers on the website.
In theory, all we’d need to generate a thriving rationality market like this is plenty of time, and a genuine (though not necessarily exclusive) demand for rationality. It would self-organize pretty naturally through some combination of barter, social exchange, and literal cash payments for various research, writing, editing, teaching, and moderation services.
The problem is the slow pace at which this is emerging on its own, and the threat of starvation in the meantime. Let’s even get a little bit ecological. A small LW will go through random fluctuations in activity and participation. If it gets too small, it could easily dip into an irrecoverable lack of participation. And the smaller the site is, the harder it will be to attain the market size necessary to permit specialization, since any participant will have to do most everything for themselves.
Under this frame, then, your post is advocating for some things that seem useful and some that seem harmful. You give lots of ideas for jobs that seem helpful (in some form) in a LW economy big enough to support such specialized labor.
On the other hand, you advocate an increase in regulation, which will come with an inevitable shrinking of the population. I fear that this will have the opposite of the effect you intend. Rather than making the site hospitable for a resurgence of “true rationalists,” you will create the conditions for starvation by reducing our already-small market still further. Even the truest of rationalists will have a hard time taking care of their rationality requirements when the population of the website has shrunk to that extent.
Posts just won’t get written. Comments won’t be posted. People won’t take risks. People won’t improve. They’ll find themselves frustrated by nitpicks, and stop participating. A handful of people will remain for a while, glorying in the victory of their purge, and then they’ll quit too after a few months or a few years once that gets boring.
I advocate instead that you trust that everybody on this website is an imperfect rationalist with a genuine preference for this elusive thing called “rationality.” Allow a thousand seeds to be planted. Some will bloom. Gradually, the rationalist economy will grow, and you’ll see the results you desire without needing much in the way of governance or intervention. And when we have need of governance, we’ll be able to support it better.
It’s always hard, I think, for activists to accept that the people and goals they care about can and will largely take care of themselves without the activist’s help.
All right, a more detailed response.
I am not fighting fire with fire. I request that you explicitly retract the assertion, given that it is both a) objectively false, and b) part of a class of utterances that are in general false far more often than they are true, and which tend to make it harder to think and see clearly in exactly the way I’m gesturing at with the OP.
Some statements that would not have been false:
“This seems to me like it’s basically fighting fire with fire.”
“I believe that, in practice, this ends up being fighting fire with fire.”
“I’m having a hard time summing this up as anything other than ‘fighting fire with fire.’”
...and I reiterate that those subtle differences make a substantial difference in people’s general ability to do the collaborative truth-seeking thing, and are in many ways precisely what I’m arguing for above.
I clearly outline what I am identifying as “fire” in the above post. I have one list which is things brains do wrong, and another list which lays out some “don’ts” that roughly correspond to those problems.
I am violating none of those don’ts, and, in my post, exhibiting none of those wrongbrains. I in fact worked quite hard to make sure that the wrongbrains did not creep in, and abandoned a draft that was three-quarters complete because it was based on one.
In many ways, the above essay is an explicit appeal that people not fight fire with fire. It identifies places where people abandon their principles in pursuit of some goal or other, and says “please don’t, even if this leads to local victory.”
It’s the one that I laid out in my post. If you find it confusing, you can ask a clarifying question. If one of the examples seems wrong or backwards, you can challenge it. I appreciate the fact that you hedged your statement by saying that you have a hard time imagining, which is better than in the previous sentence, where you simply declared that I was doing a thing (which I wasn’t), rather than saying that it seemed to you like X or felt like X or you thought it was X for Y and Z reasons.
The standard is: don’t violate the straightforward list of rationality 101 principles and practices that we have a giant canon of knowledge and agreement upon. There’s a separate substandard that goes something like “don’t use dark-artsy persuasion; don’t yank people around by their emotions in ways they can’t see and interact with; don’t deceive them by saying technically true things which you know will result in a false interpretation, etc.”
I’m adhering to that standard, above.
There’s fallacy-of-the-grey in your rounding-off of “here’s a post where the author acknowledged in their end notes that they weren’t quite up to the standard they are advocating” and “you’re fighting fire with fire.” There’s also fallacy-of-the-grey in pretending that there’s only one kind of “fire.”
I strongly claim that I am, in general, not frequently in violation of any of the principles that I have explicitly endorsed, and that if it seems I’m holding others to a higher standard than I’m holding myself, it’s likely that the standard I’m holding has been misunderstood. I also believe that people who are trying to catch me when I’m actually failing to live up are on my side and doing me a favor, and though I’m not perfect and sometimes it takes me a second to get past the flinch and access the gratitude, I think I’m credible about acting in accordance with that overall.
I did not “use exhortatory rhetoric and pull social strings.” I should walk back my mild “yeah fair” in response to the earlier comment, since you’re taking it and adversarially running with it.
If you read the OP and do not choose to let your brain project all over it, what you see is, straightforwardly, a mass of claims about how I feel, how I think, what I believe, and what I think should be the case.
I explicitly underscore that I think little details matter, and second-to-second stuff counts, so if you’re going to dismiss all of the “I” statements as being mere window dressing or something (I’m not sure that’s what you’re doing, but it seems like something like that is necessary, to pretend that they weren’t omnipresent in what I wrote), you need to do so explicitly. You need to argue for them not-mattering; you can’t just jump straight to ignoring them, and pretending that I was propagandizing.
I also did not complain about other people using exhortatory rhetoric and pulling social strings. That’s a strawman of my point. I complained about people a) letting their standards on what’s sufficiently justified to say slip, when it was convenient, and b) en-masse upvoting and otherwise tolerating other people doing so.
I gave specifics; I gave a model. Where that model wasn’t clear, I offered to go in-depth on more examples (an offer that I haven’t yet seen anyone take me up on, though I’m postponing looking at some other comments while I reply to this one).
I thoroughly and categorically reject (3) as being anywhere near a summary of what I’m doing above, and (4) is … well, I would say “you’re being an uncharitable asshole, here,” except that what’s actually true and defensible and prosocial is to note that I am having a strongly negative emotional reaction to it, and to note separately that you’re not passing my ITT, you’re impugning my motives, and in general you’re hand-waving past the part where you would need actual reasons for the attempt to delegitimize and undermine both me and my points.
No. You’ve failed to pass my ITT, you’ve failed to understand my point, and as you drift further and further from what I was actually trying to say, it gets harder and harder to address it line-by-line because I keep being unable to bring things back around.
I’m not trying to cause appeals-to-emotion to disappear. I’m not trying to cause strong feelings oriented on one’s values to be outlawed. I’m trying to cause people to run checks, and to not sacrifice their long-term goals for the sake of short-term point-scoring.
I definitely do not believe that this post, as written, would fail to survive or belong on the better version of LessWrong I’m envisioning (setting aside the fact that it wouldn’t be necessary there). I’m not trying to effect a site culture where the tactic of the OP self-destructs, and I’m not sure where that belief came from. I just believe that, in the steel LW, this post would qualify as mediocre instead of decent.
The place where I’m most able to engage with you is:
Here, you assert some things that are, in fact, only hypotheses. They’re certainly valid hypotheses, to be clear. But it seems to me that you’re trying to shift the conversation onto the level of competing stories, as if what’s true is either “Duncan’s optimistic frame, in which the bad people leave and the good people stay” or “the pessimistic frame, in which the optimistic frame is naive and the site just dies.”
This is an antisocial move, on my post where I’m specifically trying to get people to stop pulling this kind of crap.
Raise your hypothesis. Argue that it’s another possible outcome. Propose tests or lines of reasoning that help us to start figuring out which model is a better match for the territory, and what each is made of, and how we might synthesize them.
I wrote several hundred words on a model of evaporative cooling, and how it drives social change. Your response boils down to “no u.” It’s full of bald assertions. It’s lacking in epistemic humility. It’s exhausting in all the ways that you seem to be referring to when you point at “frustrated by nitpicks, and stop participating.” The only reason I engaged with it to this degree is that it’s an excellent example of the problem.
I would like to register that I think this is an excellent comment, and in fact caused me to downvote the grandparent where I would otherwise have neutral or upvoted. (This is not the sort of observation I would ordinarily feel the need to point out, but in this case it seemed rather appropriate to do so, given the context.)
Huh. Interesting.
I had literally the exact same experience before I read your comment dxu.
I imagine it’s likely that Duncan could sort of burn out on being able to do this [1], since it’s pretty thankless, difficult cognitive work. [2]
But it’s really insightful to watch. I do think he could potentially tune up [3] the diplomatic savvy a bit [4], since, while his arguments are quite sound [5], I think he probably sometimes makes people feel a little bit stupid via his tone. [6]
Nevertheless, it’s really fascinating to read and observe. I feel vaguely like I’m getting smarter.
###
Rigor for the hell of it [7]:
[1] Hedged hypothesis.
[2] Two-premise assertion with a slightly subjective basis, but I think a true one.
[3] Elaborated on a slightly different but related point further in my comment below to him with an example.
[4] Vague, but I think acceptably so. To elaborate, I mean making one’s ideas palatable to the person one is disagreeing with, even in the midst of disagreement. Note: I’m aware this doesn’t acknowledge the cost of doing so and running that filter. Note also: I think, with skill and practice, this can be done without sacrificing the content of the message. It is almost always more time-consuming, though, in my experience.
[5] There are some subjective judgments and utility-function stuff going on, which is naturally subjective, but his core factual arguments, premises, and analyses basically all look correct to me.
[6] Hedged hypothesis. Note: doesn’t make a judgment either way as to whether it’s worth it or not.
[7] Added after writing to double-check I’m playing by the rules and clear up ambiguity. “For the hell of it” is just random stylishness and can be safely mentally deleted.
(Or perhaps, if I introspect closely, a way to not be committed to this level of rigor all the time. As stated below though, minor stylistic details aside, I’m always grateful whenever a member of a community attempts to encourage raising and preserving high standards.)
Upvoted for the market analogy.
(Thanks for being specific; this is a micro-norm I want to applaud.)
Nope. False, and furthermore Kafkaesque; there is no defensible reading of either the post or my subsequent commentary that justifies this line, and the fact that it sits up front, framing the rest of what you have to say, is extremely bad, and a straightforward example of the problem.
It is a nuance-destroying move, a rounding-off move, a making-it-harder-for-people-to-see-and-think-clearly move, an implanting-falsehoods move. Strong downvote as I compose a response to the rest.
Given that there is a lot of “let’s comment on which things about a comment are good and which are bad” going on in this thread, I will make more explicit a thing that I would have usually left implicit:
My current sense is that writing this comment was maybe better than writing no comment, given the dynamics of the situation, and while that beat leaving things unaddressed, it kicked up the heat a bunch; I think the discussion overall would have gone better if you had waited and just written your longer comment.
In response to this, I’ll bow out (from this subthread) for a minimum period of 3 days. (This is in accordance with a generally wise policy I’m trying to adopt.)
EDIT: I thought Oli was responding to a different thing (I replied to this from the sidebar). I was already planning not to add anything substantive here for a few days. I do note, though, that even if two people both unproductively turn up the heat, one after the other, in my culture it still makes a difference which one broke the peace first.
Have you read “Metaphors We Live By” by Lakoff?
The first 20 pages or so are almost a must-read in my opinion.
Highly recommended, for you in particular.
A Google search with filetype:pdf will find you a copy. You can skim it fast — no need to close-read it — and you’ll still get the gems.
Edit for exhortation: I think you’ll get a whole lot out of it, enough that I’d stake some “Sebastian has good judgment” points on it, which you can subtract from my good-judgment rep if I’m wrong. Seriously, please check it out. It’s fast and worth it.
This response I would characterize as steps (3) and (4) of the 5-step tactic I described. You are using more fiery rhetoric (“Kafkaesque,” “extremely bad,” “implanting falsehoods”), while denying that this is what you are doing.
I am not going to upvote or downvote you. I will read and consider your next response here, but only that response, and only once. I will read no other comments on this post, and will not re-read the post itself unless it becomes necessary.
I infer from your response that, from your perspective, my comment here, and I by extension, fall into the bin of content and participants you’d like to see less of, or none of, on this website. I want to assure you that your response here will in no way affect my participation on the rest of this website.
Your strategy of concentration of force only works if other people are impacted by that force. As for your critical comment here: as the Black Knight said, I’ve known worse.
If you should continue this project and attack me outside of this post, I am precommitting now to simply ignoring you, while also refraining from any sort of comment on, or attack against, your character to others. I will evaluate your non-activist posts the same way I evaluate anything else on this website. So just be aware that, from now on, any comment of yours that strikes me as having a tone similar to this one will meet with stony silence from me. I will take steps to mitigate any effect it might have on my participation via its emotional impact. Once I notice that a comment has a similar rhetorical character, I will stop reading it. I am specifically neutralizing the effect of this particular activist campaign of yours on my thoughts and behavior.
Jumping in here in what I hope is a prosocial way. I assert as a hypothesis that the two of you currently disagree about what level of meta the conversation is (or should be) at, that each feels the other has an obligation to meet them at their level, and that this has turned up the heat a lot.
Maybe there is a more oblique angle than this currently heated one?
It’s prosocial. For starters, AllAmericanBreakfast’s “let’s not engage,” though itself stated in a kind of hot way, is good advice for me, too. I’m going to step aside from this thread for at least three days, and if there’s something good to come back to, I will try to do so.
BTW, this inspired an edit in the “two kinds of persons” spot specifically, and I think the essay is much stronger for it; I strongly appreciate you for highlighting your confusion there.
EDIT: and also in the author’s note at the bottom.