I spent 15 minutes re-reading the thread underneath orthonormal’s comment to try to put myself in your head. I think maybe I succeeded, so here goes. But, speaking as a person whose job involves persuading people: it’s Not Optimal For Your Argument that I had to do this to engage with your model here, and it’s potentially wasteful if I’ve failed at modeling you.
I read both of the comments discussed below at the time (I was following the original post and comments), but did not vote on either.
***
orthonormal P1: Anders seemed like a cult leader/wannabe based on my first impressions, and I willingly incurred social cost to communicate this to others
orthonormal P2 [which I inferred using the Principle of Charity]: Most of the time, people who immediately come across as cult leaders are trying to start a cult
Duncan P1: It’s bad when LW upvotes comments with very thin epistemic rigor
Duncan P2: This comment has very thin epistemic rigor because it’s based on a few brief conversations
Gloss: I don’t necessarily agree with your P2. It’s not robust, but nor is it thin; if true, it’s one person’s statement that, based on admittedly limited evidence, they had a high degree of confidence that Anders wanted to be a cult leader. I can review orthonormal’s post history to conclude that ze is a generally sensible person who writes as though ze buys into LW epistemics, and is also probably known by name to various people on the site, meaning if Anders wanted to sue zir for defamation, Anders could (another social and financial cost that orthonormal is incurring). Conditional on Anders not being a cult leader, I would be mildly surprised if orthonormal thought Anders was a cult leader/wannabe.
Also, this comment—which meets your epistemic standards, right? If so, did it cause you to update on the “Leverage is being canceled unfairly” idea?
***
Matt P1: I spent hundreds of hours talking to Anders
Matt P2: If he were a cult leader/wannabe, I would have noticed
Duncan P1: It’s bad when LW doesn’t upvote comments with good epistemic rigor
Duncan P2: This comment has good epistemic rigor because Matt has way more evidence than orthonormal
Gloss: [Edit: Upon reflection, I have deleted this paragraph. My commentary is not germane to the issue that Duncan and I are debating.]
***
The karma score disparity is currently 48 on 39 votes, to 5 on 26 votes.
Given my thought process above, which of the comments should I have strongly upvoted, weakly upvoted, done nothing to, weakly downvoted, or strongly downvoted, on your vision of LW?
Or: which parts of my thought process are inimical to your vision of LW?
***
If it helps you calibrate your response, if any, I spent about 45 minutes researching, conceptualizing, drafting, and editing this comment.
Thank you for the effort! Strong upvoted.
Quick point to get out of the way: re: the comment that you thought would likely meet my standards, yes, it does; when I hovered over it I saw that I had already (weak) upvoted it.
Here’s my attempt to rewrite orthonormal’s first comment: what I would have said in orthonormal’s shoes, if I were trying to say what I think orthonormal is trying to say.
All right, here comes some subjective experience. I’m offering this up because it seems relevant, and it seems like we should be in wide-net data gathering mode.
I met Geoff Anders at our 2012 CFAR workshop, and my overwhelming impression was “this person wants to be a cult leader.” This was based on [specific number of minutes] of conversation.
The impression stuck with me strongly enough that I felt like mentioning it maybe as many as [specific number] of times over the years since, in various conversations. I was motivated enough on this point that it actually somewhat drove a wedge between me and two increasingly-Leverage-enmeshed friends, in the mid-2010s.
I feel like this is important and relevant because it seems like yet again we’re in a situation where a bunch of people are going “gosh, such shock, how could we have known?” The delta between my wannabe-cult-leader-detectors and everyone else’s is large, and I don’t know its source, but the same thing happened with [don’t name him, don’t summon him], who was booted from the Berkeley community for good reason.
I don’t think opaque intuition should be blindly followed, but as everyone is reeling from Zoe’s account and trying to figure out how to respond, one possibility I want to promote to attention is hey, maybe take a minute to listen to people like me?
Not as anything definitive, but if I do an honest scan over the past decade, I feel like I’m batting … 3⁄5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1⁄5, and that means there’s probably something to be learned from me and people like me.
If you’re actually looking for ways to make this better in the future, anyway.
orthonormal P1: Anders seemed like a cult leader/wannabe based on my first impressions, and I willingly incurred social cost to communicate this to others (i.e. this wasn’t just idle hostility)
orthonormal P2 [which I inferred using the Principle of Charity]: This is relevant because, separate from the question of whether my detectors are accurate in an absolute sense, they’re more accurate than whatever it is all of you are doing
Duncan P1: It’s bad when LW upvotes comments that aren’t transparent about what they’re trying to accomplish and via what channels they’re trying to accomplish it
Duncan P2: orthonormal’s original comment is somewhat bad in this way; it’s owning its content on the surface but the implicature is where most of the power lies; the comment does not on its face say why it exists or what it’s trying to do in a way that an autistic ten-year-old could parse (evidence: I felt myself becoming sort of fuzzy/foggy and confused, reading it). As written, I think its main goal is to say “I told you so and also I’m a better judge of things than all of you”? But it doesn’t just come right out and say that and then pay for it, the way that I say in the OP above that I’m often smarter than other people in the room (along with an acknowledgement that there’s a cost for saying that sort of thing).
I do think that the original version obfuscated some important stuff (e.g. there’s a kind of motte-and-bailey at the heart of “we met at our CFAR workshop”; that could easily imply “we spent fifteen intensely intimate hours in one another’s company over four days” or “we spoke for five minutes and then were in the same room for a couple of classes”). That’s part of it.
But my concern is more about the delta between the comments’ reception. I honestly don’t know how to get a mass of individual voters to put comments in the right relative positions, but I think orthonormal’s being at 48 while Matt’s is at 5 is a sign of something wrong.
I think orthonormal’s belongs at something like 20, and Matt’s belongs at something like 40. I voted according to a policy that attempts to cause that outcome, rather than weak upvoting orthonormal’s, as I otherwise would have (its strengths outweigh its flaws and I do think it was a positive contribution).
In a world where lots of LessWrongers are tracking the fuzziness and obfuscation thing, orthonormal’s comment gets mostly a bunch of small upvotes, and Matt’s gets mostly a bunch of strong upvotes, and they both end up in positive territory but with a clear (ugh) “status differential” that signals what types of contributions we want to more strongly reward.
As for Matt’s comment:
Matt’s comment in part deserves the strong upvote because it’s a high-effort, lengthy comment that tries pretty hard to go slowly and tease apart subtle distinctions and own where it’s making guesses and so forth; agnostic of its content my prior a third of the way through was “this will ultimately deserve strong approval.”
I don’t think most of Matt’s comment was on the object level, i.e. comments about Anders and his likelihood of being a cult leader, wannabe or otherwise.
I think that it was misconstrued as just trying to say “pshhh, no!” which is why it hovers so close to zero.
My read of Matt’s comment:
Matt P1: It’s hard to tell what Ryan and orthonormal are doing
Matt P2: There’s a difference between how I infer LWers are reading these comments based on the votes, and how I think LWers ought to interpret them
Matt P3: Here’s how I interpret them
Matt P4: Here’s a bunch of pre-validation of reasons why I might be wrong about orthonormal, both because I actually might and because I’m worried about being misinterpreted and want to signal some uncertainty/humility here.
Matt P5: Ryan’s anecdote seems consistent, to me, with a joke of a form that Geoff Anders makes frequently.
Matt P6: My own personal take is that Geoff is not a cult leader and that the evidence provided by orthonormal and Ryan should be considered lesser than mine (and here’s why)
Matt P7-9: [various disclaimers and hedges]
Duncan P1: This comment is good because of the information it presents
Duncan P2: This comment is good because of the way it presents that information, and the way it attempts to make space for and treat well the previous comments in the chain
Duncan P3: This comment is good because it was constructed with substantial effort
Duncan P4: It’s bad that comments which are good along three different axes, and bad along none as far as I can see, are ranked way below comments that are much worse along those three axes and also have other flaws (the unclear motive thing).
I don’t disagree with either of your glosses, but most notably they missed the above axes. Like, based on your good-faith best-guess as to what I was thinking, I agree with your disagreements with that; your pushback against hypothesized-Duncan who’s dinging orthonormal for epistemic thinness is good pushback.
But I think my version of orthonormal’s comment is stronger. I don’t think their original comment was not-worth-writing, and I’m not saying “don’t contribute if you’re not going to put forth as much effort as I did in my rewrite,” but I do think it was less worth writing than the rewrite. I think the rewrite gives a lot more, and … hypnotizes? … a lot less.
As for your gloss on Matt’s comment specifically, I just straightforwardly like it; if it were its own reply and I saw it when revisiting the thread I would weak or strong upvote it. I think it does exactly the sane-itizing light-shining that I’m pulling for, which feels to me like it was only sporadically (and not reliably) present throughout the discussions.
I took however many minutes it’s been since you posted your reply to write this. 30-60?
Thanks, supposedlyfun, for pointing me to this thread.
I think it’s important to distinguish my behavior in writing the comment (which was emotive rather than optimized—it would even have been in my own case’s favor to point out that the 2012 workshop was a weeklong experiment with lots of unstructured time, rather than the weekend that CFAR later settled on, or to explain that his CoZE idea was to recruit teens to meddle with the other participants’ CoZE) from the behavior of people upvoting the comment.
I expect that many of the upvotes were not of the form “this is a good comment on the meta level” so much as “SOMEBODY ELSE SAW THE THING ALL ALONG, I WORRIED IT WAS JUST ME”.
This seems true to me. I’m also feeling a little bit insecure or something and wanting to reiterate that I think that particular comment was a net-positive addition and in my vision of LessWrong would have been positively upvoted.
Just as it’s important to separate the author of a comment from the votes that comment gets (which they have no control over), I want to separate a claim like “this being in positive territory is bad” (which I do not believe) from “the contrast between the total popularity of this and that is bad.”
I’m curious whether I actually passed your ITT with the rewrite attempt.
Thanks for asking about the ITT.
I think that if I put a more measured version of myself back into that comment, it has one key difference from your version.
“Pay attention to me and people like me” is a status claim rather than a useful model.
I’d have said “pay attention to a person who incurred social costs by loudly predicting one later-confirmed bad actor, when they incur social costs by loudly predicting another”.
(My denouncing of Geoff drove a wedge between me and several friends, including my then-best friend; my denouncing of the other one drove a wedge between me and my then-wife. Obviously those rifts had much to do with how I handled those relationships, but clearly it wasn’t idle talk from me.)
Otherwise, I think the content of your ITT is about right.
(The emotional tone is off, even after translating from Duncan-speak to me-speak, but that may not be worth going into.)
For the record, I personally count myself 2 for 2.5 on precision. (I got a bad vibe from a third person, but didn’t go around loudly making it known; and they’ve proven to be not a trustworthy person but not nearly as dangerous as I view the other two. I’ll accordingly not name them.)
I’m going to take a stab at cruxing here.
Whether it’s better for the LW community when comments explicitly state a reasonable amount of the epistemic hedging that they’re doing.
Out of all the things you would have added to orthonormal’s comment, the only one that I didn’t read at the time as explicit or implicit in zir comment was, “Not as anything definitive, but if I do an honest scan over the past decade, I feel like I’m batting … 3⁄5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1/5”. I agree it would be nice if people gave more information about their own calibration where available. I don’t know whether it was available to orthonormal.
As for the rest, I’m sticking that at the end of this comment as a sort of appendix.
If I’m right about the crux, that is totally not in the set of Things That I Thought You Might Have Been Saying after reading the original post. Re-reading the original post now, I don’t see how I could have figured out that this is what our actual disagreement was.
I notice that I am surprised that {the norm of how explicit a comment needs to be regarding its own epistemic standard} prompted you to write the original post. Honestly, the intensity of the post seems disproportionate to the size of the disagreement, and also to the likelihood that people will disagree with you to the point that they no longer want to be in a community with you. I don’t feel like we need to fork anything based on the distance between our positions.
Why do you think the intensity scalars are so different between us?
***
All right, here comes some subjective experience. I’m offering this up because it seems relevant, and it seems like we should be in wide-net data gathering mode.
The comment makes it clear that it is subjective experience. I wouldn’t expect ortho to add it if ze didn’t think it was relevant. People sharing their impressions of a situation to get at the truth, which seemed to be the point of the post and comments, just is wide-net data gathering mode.
I met Geoff Anders at our 2012 CFAR workshop, and my overwhelming impression was “this person wants to be a cult leader.” This was based on [specific number of minutes] of conversation.
I don’t expect ortho to remember the number of minutes from nine years ago.
The impression stuck with me strongly enough that I felt like mentioning it maybe as many as [specific number] of times over the years since, in various conversations.
I don’t expect ortho to remember the number of conversations since 2012, and if ze had inserted a specific number, I wouldn’t have attached much weight to it for that reason.
I was motivated enough on this point that it actually somewhat drove a wedge between me and two increasingly-Leverage-enmeshed friends, in the mid-2010s.
This is in there well enough that I don’t see any value in saying it with more words. Crux?
I feel like this is important and relevant because it seems like yet again we’re in a situation where a bunch of people are going “gosh, such shock, how could we have known?”
This is plausibly why ortho felt like adding zir experience, but there are other reasons ze might have had, and zir reason doesn’t really matter; to me, zir shared experience was just additional data.
The delta between my wannabe-cult-leader-detectors and everyone else’s is large, and I don’t know its source, but the same thing happened with [don’t name him, don’t summon him], who was booted from the Berkeley community for good reason.
This is in there well enough that I don’t see any value in saying it with more words. Crux?
I don’t think opaque intuition should be blindly followed, but as everyone is reeling from Zoe’s account and trying to figure out how to respond, one possibility I want to promote to attention is hey, maybe take a minute to listen to people like me?
“Hey maybe take a minute to listen to people like me” is implicit in the decision to share one’s experience. Crux?
Not as anything definitive, but if I do an honest scan over the past decade, I feel like I’m batting … 3⁄5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1⁄5, and that means there’s probably something to be learned from me and people like me.
See above.
If you’re actually looking for ways to make this better in the future, anyway.
I don’t think ortho would have shared zir experience if ze didn’t think zir interlocutors wanted to do better in the future, so I read this as implicit, and I think I would in any LW conversation. In fact, this sentence would have come across as bizarrely combative to me. Crux?
I notice that I am surprised that {the norm of how explicit a comment needs to be regarding its own epistemic standard} prompted you to write the original post.
Hmmm, something has gone wrong. This is not the case, and I’m not sure what caused you to think it was the case.
“How explicit comments need to be regarding their own epistemic status” is a single star in the constellation of considerations that caused me to write the post. It’s one of the many ways in which I see people doing things that slightly decrease our collective ability to see what’s true, in a way that compounds negatively, where people might instead do things that slightly increase our collective ability, in a way that compounds positively.
But it’s in no way the central casus belli of the OP. The constellation is. So my answer to “Why do you think the intensity scalars are so different between us?” is “maybe they aren’t? I didn’t mean the thing you were surprised by.”
I don’t expect ortho to remember the number of minutes from nine years ago...I don’t expect ortho to remember the number of conversations since 2012, and if ze had inserted a specific number, I wouldn’t have attached much weight to it for that reason.
Here, I was pulling for the virtue of numeric specificity, which I think is generally understood on LW. I’m reminded of the time that some researchers investigated what various people meant by the phrase “a very real chance,” and found that at least one of them meant 20% and at least one of them meant 80% (which are opposites).
It’s true that numbers aren’t super reliable, but even estimated/ballpark numbers (you’ll note I wrote the phrase “as many as” and imagined ortho stating a ceiling) are much better for collective truth-tracking than wide-open vague phrases that allow people with very different interpretations to be equally confident in those interpretations. The goal, after all, at least in my view, is to help us narrow down the set of possible worlds consistent with observation. To provide data that distinguishes between possibilities.
The comment makes it clear that it is subjective experience.
True. (I reiterate, feeling a smidge defensive, that I’ve said more than once that the comment was net-positive as written, and so don’t wish to have to defend a claim like “it absolutely should have been different in this way!” That’s not a claim I’m making. I’m making the much weaker claim that my rewrite was better. Not that the original was insufficient.)
The thing that I’m pulling for, with the greater explicitness about its subjectivity …
Look, there’s this thing where sometimes people try to tell each other that something is okay. Like, “it’s okay if you get mad at me.”
Which is really weird, if you interpret it as them trying to give the other person permission to be mad.
But I think that’s usually not quite what’s happening? Instead, I think the speaker is usually thinking something along the lines of:
Gosh, in this situation, anger feels pretty valid, but there’s not universal agreement on that point—many people would think that anger is not valid, or would try to penalize or shut down someone who got mad here, or point at their anger in a delegitimizing sort of way. I don’t want to do that, and I don’t want them to be holding back, out of a fear that I will do that. So I’m going to signal in advance something like, “I will not resist or punish your anger.” Their anger was going to be valid whether I recognized its validity or not, but I can reduce the pressure on them by removing the threat of retaliation if they choose to let their emotions fly.
Similarly, yes, it was obvious that the comment was subjective experience. But there’s nevertheless something valuable that happens when someone explicitly acknowledges that what they are about to say is subjective experience. It pre-validates someone else who wants to carefully distinguish between subjectivity and objectivity. It signals to them that you won’t take that as an attack, or an attempt to delegitimize your contribution. It makes it easier to see and think clearly, and it gives the other person some handles to grab onto. “I’m not one of those people who’s going to confuse their own subjective experience for objective fact, and you can tell because I took a second to speak the shibboleth.”
Again, I am not claiming, and have not at any point claimed, that ortho’s comment needed to do this. But I think it’s clearly stronger if it does.
This is plausibly why ortho felt like adding zir experience, but there are other reasons ze might have had, and zir reason doesn’t really matter; to me, zir shared experience was just additional data.
I validate that. But I suspect you would not claim that their reason doesn’t matter at all, to anyone. And I suspect you would not claim that a substantial chunk of LWers aren’t guessing or intuiting or modeling or projecting reasons, and then responding based on the cardboard cutouts in their minds. The rewrite included more attempts to rule out everything else than the original comment did, because I think ruling out everything else is virtuous, and one of those moves that helps us track what’s going on, and reduces the fog and confusion and rate of misunderstandings.
“Hey maybe take a minute to listen to people like me” is implicit in the decision to share one’s experience.
I don’t think that’s true at all. I think that there are several different implications compatible with the act of posting ortho’s comment, and that “I’m suggesting that you weight my opinion more heavily based on me being right in this case” is only one such implication, and that it’s valuable to be specific about what you’re doing and why because other people don’t actually just “get” it. The illusion of transparency is a hell of a drug, and so is the typical mind fallacy. Both when you’re writing, and assume that people will just magically know what you’re trying to accomplish, and when you’re reading, and assume that everyone else’s interpretation will be pretty close to your own.
Again, I am not claiming, and have not at any point claimed, that ortho’s comment needed to head off that sort of misunderstanding at the pass. But I think it’s clearly better if it does so.
I don’t think ortho would have shared zir experience if ze didn’t think zir interlocutors wanted to do better in the future, so I read this as implicit, and I think I would in any LW conversation. In fact, this sentence would have come across as bizarrely combative to me.
I actually included that sentence because I felt like ortho’s original comment was intentionally combative (and a little bizarrely so), and that my rewrite had removed too much of its intentional heat to be a sufficiently accurate restatement. So I think we’re not in disagreement on that.
Understood: the comment-karma-disparity issue is, for you, a glaring example of a larger constellation.
Also understood: you and I have different preferences for explicitly stating underlying claims. I don’t think your position is unreasonable, just that it will lead to much longer comments, possibly at the cost of clarity and engagement. Striking that balance is Hard.
I think we’ve drilled as far down as is productive on my concerns with the text of your post. I would like to see your follow-up post on the entire constellation, with the rigor customary here. You could definitely persuade me. I maybe was just not part of the target audience for your post.
(Something genuinely amusing, given the context, about the above being at 3 points out of 2 votes after four hours, compared to its parent being at 30 points out of 7 votes after five.)
It’s bad that comments which are good along three different axes, and bad along none as far as I can see, are ranked way below comments that are much worse along those three axes and also have other flaws
I have an alternative and almost orthogonal interpretation for why the karma scores are the way they are.
Both in your orthonormal-Matt example, and now in this meta-example, the shorter original comments require less context to understand and got more upvotes, while the long meandering detail-oriented high-context responses were hardly even read by anyone.
This makes perfect sense to me—there’s a maximum comment length after which I get a strong urge to just ignore / skim a comment (which I initially did with your response here; and I never took the time to read Matt’s comments, though I also didn’t vote on orthonormal’s comment one way or another, nor vote in the jessicata post much at all), and I would be astonished if that only happened to me.
Also think about how people see these comments in the first place. Probably a significant chunk comes from people browsing the comment feed on the LW front page, and it makes perfect sense to scroll past a long sub-sub-sub-comment that might not even be relevant, and that you can’t understand without context, anyway.
So from my perspective, high-effort, high-context, lengthy sub-comments intrinsically incur a large attention / visibility (and therefore karma) penalty. Things like conciseness are also virtues, and if you don’t consider that in your model of “good along three different axes, and bad along none as far as I can see”, then that model is incomplete.
(Also consider things like: How much time do you think the average reader spends on LW; what would be a good amount of time, relative to their other options; would you prefer a culture where hundreds of people take the opportunity cost to read sub-sub-sub-comments over one where they don’t; also people vary enormously in their reading speed; etc.)
Somewhat related: my post in this thread on some of the effects of the existing LW karma system. If we grant the above, one remaining problem is that the original orthonormal comment was highly upvoted but looked worse over time:
What if a comment looks correct and receives lots of upvotes, but over time new info indicates that it’s substantially incorrect? Past readers might no longer endorse their upvote, but you can’t exactly ask them to rescind their upvotes, when they might have long since moved on from the discussion.
First, some off-the-cuff impressions of matt’s post (in the interest of data gathering):
In the initial thread I believe that I read the first paragraph of matt’s comment, decided I would not get much out of it, and stopped reading without voting.
Upon revisiting the thread and reading matt’s comment in full, I find it difficult to understand and do not believe I would be able to summarize or remember its main points now, about 15 minutes after the fact.
This seems somewhat interesting to test, so here is my summary from memory. After this I’ll reread matt’s post and compare what I thought it said upon first reading with what I think it says upon a second closer reading:
[person who met geoff] is making anecdotal claims about geoff’s cult-leader-ish nature based on little data. People who have much more data are making contrary claims, so it is surprising that [person]’s post has so many upvotes. [commenter to person] is using deadpan in a particular way, which could mean multiple things depending on context but I lack that context. I believe that they are using it to communicate that geoff said so in a non-joking manner, but that is also hearsay.
Commentary before re-reading: I expect that I missed a lot, since it was a long post and it did not stick in my mind particularly well. I also remember a lot of hedging that confused me, and points that went into parentheticals within parentheticals. These parentheticals were long enough that I remember losing track of what point was being made. I also may have confabulated arguments in this thread about upvotes and some from matt’s post.
I wanted to keep the summary “pure” in the sense that it is a genuine recollection without re-reading, but for clarity [person] is orthonormal and [commenter to person] is RyanCarey.
Second attempt at summarizing while flipping back and forth between editor and matt’s comment:
RyanCarey is either mocking orthonormal or providing further weak evidence, but I don’t know which.
One reading of orthonormal’s comment is that he had a strong first impression, has been engaging in hostile gossip about Geoff, and has failed to update since in the presence of further evidence. Some people might have different readings. Orthonormal’s post has lots of karma, they have 15k+ karma in general, and their post is of poor quality, therefore the karma system may be broken.
RyanCarey used deadpan in an unclear way, I believe the best reading of their comment is that Geoff made a joke about being a cult leader. Several other commenters and I, all of whom have much more contact with Geoff than orthonormal, do not think he is or wants to be a cult leader. It is out of character for Geoff to make a deadpan joke about wanting to be a cult leader and RyanCarey didn’t give confidence in their recollection of their memory, therefore people should be unimpressed with the anecdote.
I am explicitly calling out orthonormal’s comment as hostile gossip, which I will not back up here but will back up in a later post. You are welcome to downvote me because of this, but if you do it means that the discussion norms of LessWrong have corroded. Other reasons for downvotes might be appropriate, such as the length.
How about we ask Geoff? I hereby ask Geoff if he’s a cult leader, or if he has any other comment.
I talked with Geoff recently, which some might see as evidence of a conspiracy.
Editing that summary to be much more concise:
Orthonormal has had little contact with Geoff, but has engaged, and continues to engage, in hostile gossip. I and others with more substantive contact do not believe he is a cult leader. The people orthonormal has talked with, alluded to by the conversations that incurred reputational costs for orthonormal, have had much more contact with Geoff. Despite all of this, orthonormal refuses to believe that Geoff is not a cult leader. I believe we should base the likelihood of Geoff being a cult leader on the views of those who have had more contact with him, or even on Geoff’s words on their own.
I notice that as I am re-reading matt’s post, I expect that the potential reading of orthonormal’s comment that he presents at the beginning (a reading that I find uncharitable) is in fact matt’s own reading. But he doesn’t actually say this outright. Instead he says “An available interpretation of orthonormal’s comment is...”. Indeed, I initially had an author’s note in the summary reflecting that I was unsure whether “an available interpretation” was matt’s interpretation. It is only much later (inside a parenthetical) that he says “I want to note that while readers may react negatively to me characterising orthonormal’s behaviour as ‘hostile gossip’...”, indicating that the uncharitable reading is in fact matt’s reading.
Matt’s comment also included some comments that I read as sneering:
I wonder whether orthonormal has other evidence, or whether orthonormal will take this opportunity to reduce their confidence in their first impression… or whether orthonormal will continue to be spectacularly confident that they’ve been right all along.
I would have preferred his comment to start small with some questions about orthonormal’s experience rather than immediately accuse them of hostile gossip. For instance, matt might have asked about the extent of orthonormal’s contact with Geoff, how confident orthonormal is that Geoff is a cult leader, and whether orthonormal updated against Geoff being a cult leader in light of their friends believing Geoff wasn’t a cult leader, etc. Instead, those questions are assumed to have answers that are unsupportive of orthonormal’s original point (the answers assumed in matt’s comment in order: very little contact, extremely confident, anti-updates in the direction of higher confidence). This seems like a central example of an uncharitable comment.
Overall I find matt’s comment difficult to understand after multiple readings and uncharitable toward those he is conversing with, although I do value the data it adds to the conversation. I believe this lack of charity is part of why matt’s comment has not done well in terms of karma. I still have not voted on matt’s comment and do not believe I will. There are parts of it that are valuable, but it is uncharitable, and charity is a value I hold above most others. In cases like these, where parts of a comment are valuable and other parts are the sort of thing that I would rather see pruned from the gardens I spend my time in, I tend to withhold judgment.
How do my two summaries compare? I’m surprised by how close the first summary I gave was to the “much more concise” summary I gave later. I expected to have missed more, largely due to matt’s comment’s length. I also remember finding it distasteful, which I omitted from my summaries but likely stemmed from the lack of charity extended to orthonormal.
Do other readers find my summary, particularly my more concise summary, an accurate portrayal of matt’s comment? How would they react to that much more concise comment, as compared to matt’s comment?
Strong upvote for doing this process/experiment; this is outstanding and I separately appreciate the effort required.
Do other readers find my summary, particularly my more concise summary, an accurate portrayal of matt’s comment? How would they react to that much more concise comment, as compared to matt’s comment?
I find your summary at least within-bounds, i.e. not fully ruled out by the words on the page. I obviously had a different impression, but I don’t think that it’s invalid to hold the interpretations and hypotheses that you do.
I particularly like and want to upvote the fact that you’re being clear and explicit about them being your interpretations and hypotheses; this is another LW-ish norm that is half-reliable and I would like to see fully reliable. Thanks for doing it.
When it comes to assessing whether a long comment or post is hard to read, quality and style of writing matter, too. SSC’s Nonfiction Writing Advice endlessly hammers home the point of dividing text into ever-smaller chunks, and e.g. here’s one very long post by Eliezer that elicited multiple comments of the form “this was too boring to finish” (e.g. this one), complaints that were alleviated merely by adding chapter breaks.
And since LW makes it trivial to add headings even to comments (e.g. I used headings here), I guess that’s one more criterion for me to judge long comments by.
(One could even imagine the LW site nudging long comments towards including stuff like headings. One could imagine a good version of a prompt like this: “This comment / post is >3k chars but consists of only 3 paragraphs and uses no headings. Consider adding some level of hierarchy, e.g. via headings.”)
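To make the shape of that nudge concrete, here is a minimal sketch of what such a check might look like. The character and paragraph thresholds are taken from the hypothetical prompt above; the function name, the Markdown-heading check, and the exact message wording are illustrative assumptions on my part, not anything LW actually implements.

```python
import re
from typing import Optional


def structure_nudge(comment: str,
                    char_threshold: int = 3000,
                    paragraph_threshold: int = 3) -> Optional[str]:
    """Return a nudge message if a long comment looks like a wall of text.

    The nudge fires when the comment exceeds `char_threshold` characters,
    contains at most `paragraph_threshold` paragraphs, and uses no headings.
    """
    # Paragraphs are separated by one or more blank lines.
    paragraphs = [p for p in re.split(r"\n\s*\n", comment) if p.strip()]
    # Treat any line starting with '#' as a Markdown heading.
    has_headings = any(line.lstrip().startswith("#")
                       for line in comment.splitlines())

    if (len(comment) > char_threshold
            and len(paragraphs) <= paragraph_threshold
            and not has_headings):
        return (f"This comment / post is >{char_threshold} chars but consists "
                f"of only {len(paragraphs)} paragraphs and uses no headings. "
                "Consider adding some level of hierarchy, e.g. via headings.")
    return None
```

Under these assumptions, a 4,000-character comment written as two huge paragraphs would get the nudge, while a well-sectioned post of the same length would not; on a real site the thresholds would presumably be tuned, and the nudge would only ever be a suggestion, not a requirement.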
I spent 15 minutes re-reading the thread underneath orthonormal’s comment to try to put myself in your head. I think maybe I succeeded, so here goes, but from a person whose job involves persuading people, it’s Not Optimal For Your Argument that I had to do this to engage with your model here, and it’s potentially wasteful if I’ve failed at modeling you.
I read both of the comments discussed below, at the time I was following the original post and comments, but did not vote on either.
***
orthonormal P1: Anders seemed like a cult leader/wannabe based on my first impressions, and I willingly incurred social cost to communicate this to others
orthonormal P2 [which I inferred using the Principle of Charity]: Most of the time, people who immediately come across as cult leaders are trying to start a cult
Duncan P1: It’s bad when LW upvotes comments with very thin epistemic rigor
Duncan P2: This comment has very thin epistemic rigor because it’s based on a few brief conversations
Gloss: I don’t necessarily agree with your P2. It’s not robust, but nor is it thin; if true, it’s one person’s statement that, based on admittedly limited evidence, they had a high degree of confidence that Anders wanted to be a cult leader. I can review orthonormal’s post history to conclude that ze is a generally sensible person who writes as though ze buys into LW epistemics, and is also probably known by name to various people on the site, meaning if Anders wanted to sue zir for defamation, Anders could (another social and financial cost that orthonormal is incurring). Conditional on Anders not being a cult leader, I would be mildly surprised if orthonormal thought Anders was a cult leader/wannabe.
Also, this comment—which meets your epistemic standards, right? If so, did it cause you to update on the “Leverage is being canceled unfairly” idea?
***
Matt P1: I spent hundreds of hours talking to Anders
Matt P2: If he were a cult leader/wannabe, I would have noticed
Duncan P1: It’s bad when LW doesn’t upvote comments with good epistemic rigor
Duncan P2: This comment has good epistemic rigor because Matt has way more evidence than orthonormal
Gloss: [Edit: Upon reflection, I have deleted this paragraph. My commentary is not germane to the issue that Duncan and I are debating.]
***
The karma score disparity is currently 48 on 39 votes, to 5 on 26 votes.
Given my thought process above, which of the comments should I have strongly upvoted, weakly upvoted, done nothing to, weakly downvoted, or strongly downvoted, on your vision of LW?
Or: which parts of my thought process are inimical to your vision of LW?
***
If it helps you calibrate your response, if any, I spent about 45 minutes researching, conceptualizing, drafting, and editing this comment.
Thank you for the effort! Strong upvoted.
Quick point to get out of the way: re: the comment that you thought would likely meet my standards, yes, it does; when I hovered over it I saw that I had already (weak) upvoted it.
Here’s my attempt to rewrite orthonormal’s first comment; what I would have said in orthonormal’s shoes, if I were trying to say what I think orthonormal is trying to say.
orthonormal P1: Anders seemed like a cult leader/wannabe based on my first impressions, and I willingly incurred social cost to communicate this to others (i.e. this wasn’t just idle hostility)
orthonormal P2 [which I inferred using the Principle of Charity]: This is relevant because, separate from the question of whether my detectors are accurate in an absolute sense, they’re more accurate than whatever it is all of you are doing
Duncan P1: It’s bad when LW upvotes comments that aren’t transparent about what they’re trying to accomplish and via what channels they’re trying to accomplish it
Duncan P2: orthonormal’s original comment is somewhat bad in this way; it’s owning its content on the surface but the implicature is where most of the power lies; the comment does not on its face say why it exists or what it’s trying to do in a way that an autistic ten-year-old could parse (evidence: I felt myself becoming sort of fuzzy/foggy and confused, reading it). As written, I think its main goal is to say “I told you so and also I’m a better judge of things than all of you”? But it doesn’t just come right out and say that and then pay for it, the way that I say in the OP above that I’m often smarter than other people in the room (along with an acknowledgement that there’s a cost for saying that sort of thing).
I do think that the original version obfuscated some important stuff (e.g. there’s a kind of motte-bailey at the heart of “we met at our CFAR workshop”; that could easily imply “we spent fifteen intensely intimate hours in one another’s company over four days” or “we spoke for five minutes and then were in the same room for a couple of classes”). That’s part of it.
But my concern is more about the delta between the comments’ reception. I honestly don’t know how to cause individuals voting in a mass to get comments in the right relative positions, but I think orthonormal’s being at 48 while Matt’s is at 5 is a sign of something wrong.
I think orthonormal’s belongs at something like 20, and Matt’s belongs at something like 40. I voted according to a policy that attempts to cause that outcome, rather than weak upvoting orthonormal’s, as I otherwise would have (its strengths outweigh its flaws and I do think it was a positive contribution).
In a world where lots of LessWrongers are tracking the fuzziness and obfuscation thing, orthonormal’s comment gets mostly a bunch of small upvotes, and Matt’s gets mostly a bunch of strong upvotes, and they both end up in positive territory but with a clear (ugh) “status differential” that signals what types of contributions we want to more strongly reward.
As for Matt’s comment:
Matt’s comment in part deserves the strong upvote because it’s a high-effort, lengthy comment that tries pretty hard to go slowly and tease apart subtle distinctions and own where it’s making guesses and so forth; agnostic of its content my prior a third of the way through was “this will ultimately deserve strong approval.”
I don’t think most of Matt’s comment was on the object level, i.e. comments about Anders and his likelihood of being a cult leader, wannabe or otherwise.
I think that it was misconstrued as just trying to say “pshhh, no!” which is why it hovers so close to zero.
My read of Matt’s comment:
Matt P1: It’s hard to tell what Ryan and orthonormal are doing
Matt P2: There’s a difference between how I infer LWers are reading these comments based on the votes, and how I think LWers ought to interpret them
Matt P3: Here’s how I interpret them
Matt P4: Here’s a bunch of pre-validation of reasons why I might be wrong about orthonormal, both because I actually might and because I’m worried about being misinterpreted and want to signal some uncertainty/humility here.
Matt P5: Ryan’s anecdote seems consistent, to me, with a joke of a form that Geoff Anders makes frequently.
Matt P6: My own personal take is that Geoff is not a cult leader and that the evidence provided by orthonormal and Ryan should be considered lesser than mine (and here’s why)
Matt P7-9: [various disclaimers and hedges]
Duncan P1: This comment is good because of the information it presents
Duncan P2: This comment is good because of the way it presents that information, and the way it attempts to make space for and treat well the previous comments in the chain
Duncan P3: This comment is good because it was constructed with substantial effort
Duncan P4: It’s bad that comments which are good along three different axes, and bad along none as far as I can see, are ranked way below comments that are much worse along those three axes and also have other flaws (the unclear motive thing).
I don’t disagree with either of your glosses, but most notably they missed the above axes. Like, based on your good-faith best-guess as to what I was thinking, I agree with your disagreements with that; your pushback against hypothesized-Duncan who’s dinging orthonormal for epistemic thinness is good pushback.
But I think my version of orthonormal’s comment is stronger, and while I don’t think their original comment was not-worth-writing, such that I’d say “don’t contribute if you’re not going to put forth as much effort as I did in my rewrite.” I think it was less worth writing than the rewrite. I think the rewrite gives a lot more, and … hypnotizes? … a lot less.
As for your gloss on Matt’s comment specifically, I just straightforwardly like it; if it were its own reply and I saw it when revisiting the thread I would weak or strong upvote it. I think it does exactly the sane-itizing light-shining that I’m pulling for, and that feels to me was only sporadically (and not reliably) present throughout the discussions.
I took however many minutes it’s been since you posted your reply to write this. 30-60?
Thanks, supposedlyfun, for pointing me to this thread.
I think it’s important to distinguish my behavior in writing the comment (which was emotive rather than optimized—it would even have been in my own case’s favor to point out that the 2012 workshop was a weeklong experiment with lots of unstructured time, rather than the weekend that CFAR later settled on, or to explain that his CoZE idea was to recruit teens to meddle with the other participants’ CoZE) from the behavior of people upvoting the comment.
I expect that many of the upvotes were not of the form “this is a good comment on the meta level” so much as “SOMEBODY ELSE SAW THE THING ALL ALONG, I WORRIED IT WAS JUST ME”.
This seems true to me. I’m also feeling a little bit insecure or something and wanting to reiterate that I think that particular comment was a net-positive addition and in my vision of LessWrong would have been positively upvoted.
Just as it’s important to separate the author of a comment from the votes that comment gets (which they have no control over), I want to separate a claim like “this being in positive territory is bad” (which I do not believe) from “the contrast between the total popularity of this and that is bad.”
I’m curious whether I actually passed your ITT with the rewrite attempt.
Thanks for asking about the ITT.
I think that if I put a more measured version of myself back into that comment, it has one key difference from your version.
“Pay attention to me and people like me” is a status claim rather than a useful model.
I’d have said “pay attention to a person who incurred social costs by loudly predicting one later-confirmed bad actor, when they incur social costs by loudly predicting another”.
(My denouncing of Geoff drove a wedge between me and several friends, including my then-best friend; my denouncing of the other one drove a wedge between me and my then-wife. Obviously those rifts had much to do with how I handled those relationships, but clearly it wasn’t idle talk from me.)
Otherwise, I think the content of your ITT is about right.
(The emotional tone is off, even after translating from Duncan-speak to me-speak, but that may not be worth going into.)
For the record, I personally count myself 2 for 2.5 on precision. (I got a bad vibe from a third person, but didn’t go around loudly making it known; and they’ve proven to be not a trustworthy person but not nearly as dangerous as I view the other two. I’ll accordingly not name them.)
I’m going to take a stab at cruxing here.
Whether it’s better for the LW community when comments explicitly state a reasonable amount of the epistemic hedging that they’re doing.
Out of all the things you would have added to orthonormal’s comment, the only one that I didn’t read at the time as explicit or implicit in zir comment was, “Not as anything definitive, but if I do an honest scan over the past decade, I feel like I’m batting … 3⁄5, maybe, with 2 more that are undecided, and the community consensus is doing more like 1/5”. I agree it would be nice if people gave more information about their own calibration where available. I don’t know whether it was available to orthonormal.
As for the rest, I’m sticking that at the end of this comment as a sort of appendix.
If I’m right about the crux, that is totally not in the set of Things That I Thought You Might Have Been Saying after reading the original post. Re-reading the original post now, I don’t see how I could have figured out that this is what our actual disagreement was.
I notice that I am surprised that {the norm of how explicit a comment needs to be regarding its own epistemic standard} prompted you to write the original post. Honestly, the intensity of the post seems disproportionate to the size of the disagreement, and also the likelihood that people are going to disagree with you to the point that they want to not be in a community with you anymore. I don’t feel like we need to fork anything based on the distance between our positions.
Why do you think the intensity scalars are so different between us?
***
The comment makes it clear that it is subjective experience. I wouldn’t expect ortho to add it if ze didn’t think it was relevant. People sharing their impressions of a situation to get at the truth, which seemed to be the point of the post and comments, just is wide-net data gathering mode.
I don’t expect ortho to remember the number of minutes from nine years ago.
I don’t expect ortho to remember the number of conversations since 2012, and if ze had inserted a specific number, I wouldn’t have attached much weight to it for that reason.
This is in there well enough that I don’t see any value in saying it with more words. Crux?
This is plausibly why ortho felt like adding zir experience, but there are other reasons ze might have had, and zir reason doesn’t really matter; to me, zir shared experience was just additional data.
This is in there well enough that I don’t see any value in saying it with more words. Crux?
“Hey maybe take a minute to listen to people like me” is implicit in the decision to share one’s experience. Crux?
See above.
I don’t think ortho would have shared zir experience if ze didn’t think zir interlocutors wanted to do better in the future, so I read this as implicit, and I think I would in any LW conversation. In fact, this sentence would have come across as bizarrely combative to me. Crux?
Hmmm, something has gone wrong. This is not the case, and I’m not sure what caused you to think it was the case.
“How explicit comments need to be regarding their own epistemic status” is a single star in the constellation of considerations that caused me to write the post. It’s one of the many ways in which I see people doing things that slightly decrease our collective ability to see what’s true, in a way that compounds negatively, where people might instead do things that slightly increase our collective ability, in a way that compounds positively.
But it’s in no way the central casus belli of the OP. The constellation is. So my answer to “Why do you think the intensity scalars are so different between us?” is “maybe they aren’t? I didn’t mean the thing you were surprised by.”
Here, I was pulling for the virtue of numeric specificity, which I think is generally understood on LW. I’m reminded of the time that some researchers investigated what various people meant by the phrase “a very real chance,” and found that at least one of them meant 20% and at least one of them meant 80% (which are opposites).
It’s true that numbers aren’t super reliable, but even estimated/ballpark numbers (you’ll note I wrote the phrase “as many as” and imagined ortho stating a ceiling) are much better for collective truth-tracking than wide-open vague phrases that allow people with very different interpretations to be equally confident in those interpretations. The goal, after all, at least in my view, is to help us narrow down the set of possible worlds consistent with observation. To provide data that distinguishes between possibilities.
True. (I reiterate, feeling a smidge defensive, that I’ve said more than once that the comment was net-positive as written, and so don’t wish to have to defend a claim like “it absolutely should have been different in this way!” That’s not a claim I’m making. I’m making the much weaker claim that my rewrite was better. Not that the original was insufficient.)
The thing that I’m pulling for, with the greater explicitness about its subjectivity …
Look, there’s this thing where sometimes people try to tell each other that something is okay. Like, “it’s okay if you get mad at me.”
Which is really weird, if you interpret it as them trying to give the other person permission to be mad.
But I think that’s usually not quite what’s happening? Instead, I think the speaker is usually thinking something along the lines of:
Gosh, in this situation, anger feels pretty valid, but there’s not universal agreement on that point—many people would think that anger is not valid, or would try to penalize or shut down someone who got mad here, or point at their anger in a delegitimizing sort of way. I don’t want to do that, and I don’t want them to be holding back, out of a fear that I will do that. So I’m going to signal in advance something like, “I will not resist or punish your anger.” Their anger was going to be valid whether I recognized its validity or not, but I can reduce the pressure on them by removing the threat of retaliation if they choose to let their emotions fly.
Similarly, yes, it was obvious that the comment was subjective experience. But there’s nevertheless something valuable that happens when someone explicitly acknowledges that what they are about to say is subjective experience. It pre-validates someone else who wants to carefully distinguish between subjectivity and objectivity. It signals to them that you won’t take that as an attack, or an attempt to delegitimize your contribution. It makes it easier to see and think clearly, and it gives the other person some handles to grab onto. “I’m not one of those people who’s going to confuse their own subjective experience for objective fact, and you can tell because I took a second to speak the shibboleth.”
Again, I am not claiming, and have not at any point claimed, that ortho’s comment needed to do this. But I think it’s clearly stronger if it does.
I validate that. But I suspect you would not claim that their reason doesn’t matter at all, to anyone. And I suspect you would not claim that a substantial chunk of LWers aren’t guessing or intuiting or modeling or projecting reasons, and then responding based on the cardboard cutouts in their minds. The rewrite included more attempts to rule out everything else than the original comment did, because I think ruling out everything else is virtuous, and one of those moves that helps us track what’s going on, and reduces the fog and confusion and rate of misunderstandings.
I don’t think that’s true at all. I think that there are several different implications compatible with the act of posting ortho’s comment, and that “I’m suggesting that you weight my opinion more heavily based on me being right in this case” is only one such implication, and that it’s valuable to be specific about what you’re doing and why because other people don’t actually just “get” it. The illusion of transparency is a hell of a drug, and so is the typical mind fallacy. Both when you’re writing, and assume that people will just magically know what you’re trying to accomplish, and when you’re reading, and assume that everyone else’s interpretation will be pretty close to your own.
Again, I am not claiming, and have not at any point claimed, that ortho’s comment needed to head off that sort of misunderstanding at the pass. But I think it’s clearly better if it does so.
I actually included that sentence because I felt like ortho’s original comment was intentionally combative (and a little bizarrely so), and that my rewrite had removed too much of its intentional heat to be a sufficiently accurate restatement. So I think we’re not in disagreement on that.
Understood: the comment-karma-disparity issue is, for you, a glaring example of a larger constellation.
Also understood: you and I have different preferences for explicitly stating underlying claims. I don’t think your position is unreasonable, just that it will lead to much-longer comments possibly at the cost of clarity and engagement. Striking that balance is Hard.
I think we’ve drilled as far down as is productive on my concerns with the text of your post. I would like to see your follow-up post on the entire constellation, with the rigor customary here. You could definitely persuade me. I maybe was just not part of the target audience for your post.
(Something genuinely amusing, given the context, about the above being at 3 points out of 2 votes after four hours, compared to its parent being at 30 points out of 7 votes after five.)
I have an alternative and almost orthogonal interpretation for why the karma scores are the way they are.
Both in your orthonormal-Matt example, and now in this meta-example, the shorter original comments required less context to understand and got more upvotes, while the long, meandering, detail-oriented, high-context responses were hardly even read by anyone.
This makes perfect sense to me—there’s a maximum comment length after which I get a strong urge to just ignore / skim a comment (which I initially did with your response here; and I never took the time to read Matt’s comments, though I also didn’t vote on orthonormal’s comment one way or another, nor vote in the jessicata post much at all), and I would be astonished if that only happened to me.
Also think about how people see these comments in the first place. Probably a significant chunk comes from people browsing the comment feed on the LW front page, and it makes perfect sense to scroll past a long sub-sub-sub-comment that might not even be relevant, and that you can’t understand without context, anyway.
So from my perspective, high-effort, high-context, lengthy sub-comments intrinsically incur a large attention / visibility (and therefore karma) penalty. Things like conciseness are also virtues, and if you don’t account for that in your model of “good along three different axes, and bad along none as far as I can see”, then that model is incomplete.
(Also consider things like: How much time do you think the average reader spends on LW; what would be a good amount of time, relative to their other options; would you prefer a culture where hundreds of people take the opportunity cost to read sub-sub-sub-comments over one where they don’t; also people vary enormously in their reading speed; etc.)
Somewhat related: my post elsewhere in this thread on some of the effects of the existing LW karma system. If we grant the above, one remaining problem is that the original orthonormal comment was highly upvoted but has looked worse over time.
First, some off-the-cuff impressions of matt’s post (in the interest of data gathering):
In the initial thread I believe that I read the first paragraph of matt’s comment, decided I would not get much out of it, and stopped reading without voting.
Upon revisiting the thread and reading matt’s comment in full, I find it difficult to understand and do not believe I would be able to summarize or remember its main points now, about 15 minutes after the fact.
This seems somewhat interesting to test, so here is my summary from memory. After this I’ll reread matt’s post and compare what I thought it said upon first reading with what I think it says upon a second closer reading:
Commentary before re-reading: I expect that I missed a lot, since it was a long post and it did not stick in my mind particularly well. I also remember a lot of hedging that confused me, and points that went into parentheticals within parentheticals. These parentheticals were long enough that I remember losing track of what point was being made. I also may have mixed together arguments about upvotes from this thread with some from matt’s post.
I wanted to keep the summary “pure” in the sense that it is a genuine recollection without re-reading, but for clarity [person] is orthonormal and [commenter to person] is RyanCarey.
Second attempt at summarizing while flipping back and forth between editor and matt’s comment:
Editing that summary to be much more concise:
I notice that as I re-read matt’s post, I expect that the potential reading of orthonormal’s comment that he presents at the beginning (a reading I find uncharitable) is in fact matt’s own reading. But he doesn’t actually say this outright; instead he says “An available interpretation of orthonormal’s comment is...”. Indeed, I initially had an author’s note in the summary reflecting that I was unsure whether “an available interpretation” was matt’s interpretation. It is only much later (inside a parenthetical) that he says “I want to note that while readers may react negatively to me characterising orthonormal’s behaviour as “hostile gossip”...”, indicating that the uncharitable reading is in fact his own.
Matt’s comment also included some passages that I read as sneering:
I would have preferred his comment to start small with some questions about orthonormal’s experience rather than immediately accuse them of hostile gossip. For instance, matt might have asked about the extent of orthonormal’s contact with Geoff, how confident orthonormal is that Geoff is a cult leader, and whether orthonormal updated against Geoff being a cult leader in light of their friends believing Geoff wasn’t a cult leader, etc. Instead, those questions are assumed to have answers that are unsupportive of orthonormal’s original point (the answers assumed in matt’s comment in order: very little contact, extremely confident, anti-updates in the direction of higher confidence). This seems like a central example of an uncharitable comment.
Overall I find matt’s comment difficult to understand after multiple readings, and uncharitable toward those he is conversing with, although I do value the data it adds to the conversation. I believe this lack of charity is part of why matt’s comment has not done well in terms of karma. I still have not voted on matt’s comment and do not believe I will. There are parts of it that are valuable, but it is uncharitable, and charity is a value I hold above most others. In cases like these, where parts of a comment are valuable and other parts are the sort of thing I would rather see pruned from the gardens I spend my time in, I tend to withhold judgment.
How do my two summaries compare? I’m surprised by how close the first summary I gave was to the “much more concise” summary I gave later. I expected to have missed more, largely due to matt’s comment’s length. I also remember finding it distasteful, which I omitted from my summaries but which likely stemmed from the lack of charity extended to orthonormal.
Do other readers find my summary, particularly my more concise summary, an accurate portrayal of matt’s comment? How would they react to that much more concise comment, as compared to matt’s comment?
Strong upvote for doing this process/experiment; this is outstanding and I separately appreciate the effort required.
I find your summary at least within-bounds, i.e. not fully ruled out by the words on the page. I obviously had a different impression, but I don’t think that it’s invalid to hold the interpretations and hypotheses that you do.
I particularly like and want to upvote the fact that you’re being clear and explicit about them being your interpretations and hypotheses; this is another LW-ish norm that is half-reliable and I would like to see fully reliable. Thanks for doing it.
To add one point:
When it comes to assessing whether a long comment or post is hard to read, quality and style of writing matter, too. SSC’s Nonfiction Writing Advice endlessly hammers home the point of dividing text into ever-smaller chunks, and e.g. here’s one very long post by Eliezer that elicited multiple comments of the form “this was too boring to finish” (e.g. this one), some of which were alleviated merely by adding chapter breaks.
And since LW makes it trivial to add headings even to comments (e.g. I used headings here), I guess that’s one more criterion for me to judge long comments by.
(One could even imagine the LW site nudging authors of long comments towards adding structure like headings, with a prompt along these lines: “This comment / post is >3k chars but consists of only 3 paragraphs and uses no headings. Consider adding some level of hierarchy, e.g. via headings.”)
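For concreteness, here is a minimal sketch of what such a nudge heuristic might look like. Everything in it is an assumption on my part: the thresholds, the paragraph-splitting rule, and the function name `shouldSuggestHeadings` are all invented for illustration, and this is not a description of anything the LW codebase actually does.

```typescript
// Hypothetical heuristic for nudging authors of long, unstructured drafts.
// Thresholds (3,000 characters, 3 paragraphs) are illustrative assumptions.

interface DraftComment {
  body: string; // raw comment text, paragraphs separated by blank lines
}

function shouldSuggestHeadings(draft: DraftComment): boolean {
  const charCount = draft.body.length;

  // Split on blank lines and drop empty chunks to approximate paragraphs.
  const paragraphs = draft.body
    .split(/\n\s*\n/)
    .filter((p) => p.trim().length > 0);

  // Treat Markdown-style lines beginning with '#' as existing headings.
  const hasHeadings = paragraphs.some((p) => /^#{1,6}\s/.test(p.trim()));

  return charCount > 3000 && paragraphs.length <= 3 && !hasHeadings;
}

// Example: a 4,000-character draft in two paragraphs with no headings would
// trigger the nudge; a well-sectioned post of the same length would not.
```

A real version would presumably want to recognize other forms of structure (lists, block quotes, horizontal rules) before nagging, but the basic shape is just a cheap check on the draft at submit time.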