@Said. I’ve been thinking a bit about this comment thread, going back to read some comments of yours about moderation, and trying to pass your general ITT regarding commenting norms. Here’s my current best guess about what seems important to you in this domain:
Our global intellectual community suffers from low standards
Many parts of science are seeing a catastrophic replication crisis—even neuroscience.
Facebook and Twitter are shining examples of what being overwhelmed with low-quality content looks like.
Our specific intellectual community (LessWrong) suffers from low standards
The process that elevates posts and ideas is hardly reassuring. Lots of people upvote a post, then maybe it gets curated, and then that’s it. No formal and rigorous checking or feedback, no outside reviewers, nothing. There are a few comments, but nobody is being explicitly incentivised to find good counter-arguments.
The correct action here is to significantly increase our standards.
This will cause many people to not write most of the content they’re writing. Sure, this might be most of the content, but one man’s modus ponens is another’s modus tollens—the current content is just bad. There is an awful lot out there, and we need to refine it, not add to it.
The situation we are in is not one of slightly raising standards that are generally already pretty good, but running crisis-mitigation / triage on the horrendous state of the current internet and LessWrong. If someone writes a post that is not up to a good standard, this needs to be made apparent to them, for two reasons.
Firstly, because it damages the commons; they’re clogging up our collective intellectual space with wrong (often trivially wrong) points. If this is not made apparent in the comments, then it would be better if the post was not written at all. Immediately commenting to point out mistakes is the correct response; the person needs to learn that this is not to be tolerated. That way leads to madness, or worse, Tumblr.
Sure, they may try to reply to you, to argue their point further, you may even end up understanding them better, but it was still their fault to make the post wrong in the first place, not your fault for misunderstanding their writing or being highly critical of their basic errors.
And secondly, because criticising people’s ideas is the only way for them to improve. LessWrong is a place we actually care about being good, where people can come and practice the art of rationality. Practice means getting feedback, and coddling people with low standards will mean they will not be able to find their actually good ideas. And this is, after all, what’s most important—that we figure out true and important ideas.
---
I take the following quotes of yours as implying this interpretation.
I do not write top-level posts because my standards for ideas that are sufficiently important, valuable, novel, etc., to justify contributing to the flood of words that is the blogosphere, are fairly high. I would be most gratified to see more people follow my example.
It is good if underdeveloped ideas can be raised. It is good if they can be criticized. It is good if that criticism is not punished. It is good if the author of the underdeveloped idea responds either with a spirited defense or with “yeah, you’re right, that was a bad idea for the reasons you say—thanks, this was useful!”. This is what we should incentivize. This is how intellectual progress will take place.
Or, to put it another way: criticism of a bad idea does not constitute punishment for putting that idea forth—unless, of course, being criticized inherently causes one to lose face. But why should that be so? There’s only one real reason why, and it reflects quite poorly on a social environment if that reason obtains… Here, on Less Wrong, being criticized should be ok. Responding to criticism should be ok. Argument should be ok.
Otherwise you will get an echo chamber—and if instead of one echo chamber you have multiple ones, each with their own idiosyncratic echoes… well, I simply don’t see how that’s an improvement. Either way the site will have failed in its goal.
Without a “culture of unfettered criticism”, as you say, these very authors’ writings will go un-criticized, their claims will not be challenged, and the quality of their ideas will decline...
(This is, of course, not to mention the more obvious harms—the spread of bad ideas through our community consensus being only the most obvious of those.)
...in the absence of open and lively criticism, bad ideas proliferate, echo chambers are built, and discussion degenerates into streams of sheer nonsense.
I also think this explains my perception (more on this below) that many of your comments ask the author to put in a lot of effort while you put in very little yourself. Responses like this:
I assumed you meant what you wrote. It does not seem mysterious or confusing, just contradictory. (If you meant something other than what you wrote, well, I guess you’ll want to clarify).
-where it feels (to me) like the burden is entirely on the other person to write well, not on you to expend effort to interpret them. They’re the one damaging the commons and who needs to improve.
---
So, to start with, I agree with my-model-of-you about the Standards Problem. There are incredibly few places in this world I can go where I expect everyone to keep to a high standard of evidence—certainly not any online platform that I could name, nor most scientific journals. In person I have a few friends whom I trust, and sending them Google Docs works well, but it’s clear that we need something that can coordinate intellectual progress in fields with tens and hundreds of people, not just groups of 3 or 4.
And it’s high on my list of priorities to get LessWrong to have a process for actually checking ideas, to which I can contribute a high-effort post (like my own post on common knowledge)—where I can get good feedback that both I and the community trust to actually find the good counter-arguments. This involves both incentivising people to find good counter-arguments, and also incentivising people to write rigorous posts (even if they are not the generators of the ideas). I would love for someone to attempt to submit a technical explanation of the core ideas in Zvi’s Slack and the Sabbath sequence, for example. I think Eliezer managed to do something similar with his post “Moloch’s Toolbox”, adding rigour to Scott Alexander’s initial poetic post, and it’s sad that there’s no trusted process in the world for checking that and making it common knowledge in a larger community like this one.
But we’re not there yet, and (I think) I disagree with you about how to get to there. I think that the correct move at the minute is not for further negative incentive, but for a stronger positive incentive for good writing. I think the dream of “Keeping everything the same but removing all of the bad ideas” is likely a fiction. People need to be able to honestly put forward new and unrigorous ideas without expecting the Spanish Inquisition[1], to be able to find the one or two gems that can be elevated and canonised.
Right now my approach is to encourage people to try, and encourage them more when they get something very right. Respectively, upvotes and curation. In time, we’ll add more steps to the process, and clear places for evaluation and criticism. That’s why we’ve been working on the AI Alignment Forum and EA Forum 2.0 (two other basic platforms to later build upon), as well as thinking a lot about peer review and what additional infrastructure on the site will set up these pipelines for ideas to go through.
Oliver has previously said that the approach you’ve been taking was the approach that led to a number of our top authors feeling unwelcome to post on the old LessWrong:
Eliezer, Scott G., Nate and a lot of the other top writers we’ve talked to (or who commented about the LessWrong culture somewhere publicly) have reported that LessWrong is a place that feels too hostile to post to, because of attitudes like the one you describe in this comment. Almost every major author we’ve interviewed has explicitly asked for some way to create content on LessWrong that is lower stakes and that allows for an explorative discussion instead of everyone just focusing on tearing apart their ideas. There has to be a place and a stage for exposing your idea to intense scrutiny, but we also need a place for explorative discussion and I am not happy about you trying to enforce a frame of intense scrutiny on every single post.
Your commenting style still has many of the properties that it did then. Let me be specific about that pattern that I’m talking about. In this thread with Benquo, this is what it felt like from my perspective:
You: These quotes from your post make no sense when juxtaposed.
Benquo: Can you do a bit of interpretive labour toward me?
You: No. It’s obvious that your quotes make no sense.
Benquo: Let me rephrase what I meant.
You: Still wrong. You’re not getting it? Notice this problem between these two quotes?
Benquo: I’m still not getting your point.
You: <Long and fairly interesting comment of your perspective on both the object level (making bread) and the meta level (what qualifies as a good explanation)>
Benquo: I want to push back on three assumptions you’ve made.
You: Another substantial comment. Also an extended snarky comment.
Benquo: It seems like you’re trying to misunderstand here, and being sarcastic about it, and I’m not going to engage further.
The long, substantive point was quite interesting. But the opening three comments really didn’t help Benquo, they felt to me snarky/unnecessarily aggressive, and it seemed to me you were asking Benquo to do a lot of work that you weren’t willing to do (until after you’d written the three comments implying Benquo was obviously getting something wrong). I believe comments like these make many writers feel like LessWrong is a crueller place—like the LessWrong that they previously fled.
So from here on out, I, along with the rest of the mod team, do plan to treat all comments of yours that put in low interpretive effort—ones that feel like you’re requesting a large amount of effort from someone else whilst signalling no intention to reciprocate—as bad for the health of the culture on LessWrong, and to strong-downvote them accordingly, with no exceptions.
(This is a minority of your comments; I don’t expect this to significantly stem your ability to comment on the site, as the majority of your comments are much more substantive—there’s at least one in this very thread that I strongly-upvoted.)
I do want to be transparent, Said, that if almost anyone else were writing comments that I felt were this damaging to the culture, I would’ve come down hard on them long ago (with suspensions and eventually a ban). I don’t intend to ban you any time soon, because I really value your place in this community—you’re one of the few people to build useful community infrastructure like ReadTheSequences.com and the UI of GreaterWrong.com, and that’s been one of the most salient facts to me throughout all of my thinking on this matter. But after spending a great deal of time and effort worrying about the effects of your comments on the culture, I don’t intend to put in as much time and effort if this comes up again in the future (be it 2 months or 12), and will just use the moderation tools as seems appropriate to me.
---
[1] I just want to flag this point about what good environments for exploring ideas are like, as I think my model of you strongly disagrees with it (and thus with all the points that follow from it). I’d be happy to discuss it further if so—though I do commit to spending no more than 2 hours thinking about responses on this comment thread, including reading time (and I will time myself).
Why not something like:
Everything is posted to people’s personal blogs, never directly to the front page. While something’s on a personal blog, “brainstorming session” rules apply: no criticism (especially no harsh criticism), just riffing / elaboration / maybe some gentle constructive criticism (and that, perhaps, only if asked).
After this, an author can edit their post, or maybe post a new, better version; or maybe they can “workshop” it elsewhere, and then post an already-better version on LW immediately. In any case, a post that has either undergone this “gentle” discussion, or doesn’t need it, may be transferred to the front page. This may happen in one of three ways:
The author requests a frontpage transfer. It must be approved by a mod.
A mod suggests a frontpage transfer. It must be approved by the author.
Another user (perhaps, only those with some minimum karma value) suggests a frontpage transfer. It must be approved by a mod and also by the author.
Once on the frontpage, the post is exposed to the full scrutiny of the LW commentariat. Personal insults, gratuitous rudeness, and the like are still not tolerated, of course; but otherwise, the author’s feelings aren’t spared. People say what they think about the post. Spirited discussion is had. The author may defend the post, or not; in any case, it’s full “Spanish Inquisition” mode.
Repeat steps 2 and 3 until the post is generally agreed to be solid, not nonsensical, worthwhile, etc. (If this never happens, so be it. Some—indeed, many—ideas ought to be firmly, unsentimentally, explicitly, and publicly rejected.)
A post which survives this scrutiny and emerges as a generally-agreed-to-be-excellent gem, may then be nominated for curation, and—if approved for curation—enters into a corpus of such of the community’s output that we may proudly exhibit as genuine intellectual accomplishment, and refer to in years to come; the building blocks of a rock-solid epistemic edifice.
I believe this would satisfy both your desiderata and mine.
This seems like it’s solving the wrong problem. The problem with your comments isn’t that they are too critical or apply too high an epistemic standard; it is that you have been insulting, sarcastic, and unwilling to make clear, specific claims about what the piece was getting wrong, instead doing things like insinuating that I’m not worth listening to because I haven’t proved that I know about soda bread, and exaggerating my claims and then asking me to prove the exaggerated, false version.
(It seems like I’m strongly disagreeing with Ben Pace here, not just you.)
I would have actually been pretty happy to engage with a comment along the lines of “it seems like you’re making claim X, which contradicts claim Y.” That would have made it easy for me to respond along the lines of “Rather than X, I actually meant to make claim X’ which doesn’t contradict Y.” Likewise with respect to the exaggerations—if you’d made your understanding of my claims explicit, then I have some hope of correcting the misunderstanding. But if I have to guess what your interpretation is, I’m signed up for infinite amounts of interpretive labor. In general it seems like a bad policy to force people to guess what your criticism is.
In my model, this is indeed a large part of the problem. I like the idea behind Said’s proposal, and do think that it would reduce some of the incentives towards aggressiveness, but I still think that even under the proposal, the exchange on this post would have not been a good fit for LessWrong. I.e. this section from Ben Pace’s comment above still stands:
The long, substantive point was quite interesting. But the opening three comments really didn’t help Benquo, they felt to me snarky/unnecessarily aggressive, and it seemed to me you were asking Benquo to do a lot of work that you weren’t willing to do (until after you’d written the three comments implying Benquo was obviously getting something wrong). I believe comments like these make many writers feel like LessWrong is a crueller place—like the LessWrong that they previously fled.
First: ideas are a dime a dozen. Coming up with abstract conceptual constructs, “fake frameworks”, clever explanations, clever schemes, clever systems, interesting mappings, cute analogies, etc., etc., is the kind of thing that the kind of person who posts on Less Wrong (and I include myself in this set) does reflexively, while daydreaming in a boring lecture, while taking a shower, while cooking. It is easy.
And if you’re having trouble brainstorming, if no cool new ideas come to you? Browse the web for a while; among the many billions of unique web pages out there, there is no shortage of ideas. There are more ideas than we can consider in a lifetime.
The problem is in finding the good ideas—which means the true and useful ones; developing those ideas; verifying their truth and their usefulness. And that means you have to incentivize scrutiny, you have to incentivize people to notice problems, to notice inconsistencies, to do reversal tests, to consider the relevance of domain knowledge, to step back from the oh-so-clever abstract conceptual construct and apply common sense, and above all to say something instead of just thinking “hmm… ehhh… meh”, mentally shrugging, and closing the browser tab.
So when you say that I was asking Benquo to do a lot of work that I wasn’t willing to do, I am not quite sure how to respond… I mean… yes? Of course I was? It’s precisely the responsibility of the author, of the proposer of an idea, to do that work! And what do you think is easier, for me or for any other commenter? To post a short, “snarky” comment, or to post nothing at all? If the rule you enforce is “every criticism an effortpost”, then what you incentivize is silence.
It is very easy to create an echo chamber, merely by setting a high bar for any criticisms.
Your view seems to be: “The author has done us a service by not only having an idea, which itself is admirable, but by posting that idea here! He has given us this gift, and we must repay him by not criticizing that idea unless we’ve put in at least as much effort into the criticism as the author put into writing the post.”
As I say above, that is not my view.
Second: Ben (Pace) says (and you quote) that “the opening three comments really didn’t help Benquo”. Well, perhaps. I can’t speak to that. But why focus on this? That is, why focus on whether my comments did or did not help Benquo?
If we were having a private, one-on-one conversation, that sort of scolding observation might be apropos. But Less Wrong is a public forum! Ought I concern myself only with whether my comments on a post help the author of the post? But if that was my only concern, I simply wouldn’t’ve posted. With all due respect to Benquo, I don’t know him personally; I have no particular reason to want to help him (nor, of course, have I any reason to harm him; I have, in fact, no particular reason to concern myself with his affairs one way or the other). If my comments were motivated merely by whether they helped the author of the post or comment to which I was directly responding, then the overwhelming majority of what I’ve ever said on Less Wrong would never have been posted.
The question, I think, is whether my comments helped anyone (and, if so, who, and how, and how many). And I can’t speak to that either.[1] But what I can say for sure is that similar comments, made by other people in analogous situations in the past, have helped me, many times; and I have observed that similar comments (mine and others’) have done great good, quite a few times in the past.
How might such “low-effort”[2] comments help? In several ways:
By pointing out something that others had not noticed (or similarly, by implying a perspective on the matter other than that from which people were viewing it before).
Similarly to #1, by reminding others of some relevant concern or concept of which they were aware but had forgotten, or had not thought to consider in this context, etc.
By creating common knowledge of some flaw or concern or similar, which many people were thinking of, but which none of them could be sure that anyone else also thought.
By alluding to some shared or collective knowledge or understanding, thereby making an extended point concisely.
By “breaking the spell” of a perceived tacit agreement not to point out something, not to criticize something, not to bring up a certain topic, etc.
Less Wrong, again, is a public forum. The point is for us to collectively seek truth and build useful things. When I comment, I consider whether my comment helps the collective with those goals. Whether it specifically helps the author of whatever I’m responding to, seems to me to be of secondary importance; and what’s more, taking that goal to instead be my primary goal when commenting, would drastically reduce the general usefulness of my comments (and in practice, of course, it would not even do that, but would instead drastically reduce their frequency).
[1] Well, some people told me that they liked my comments. But maybe they were just saying that out of politeness, or because they wanted to ingratiate themselves with me, or for god knows what other reason(s).
[2] But be careful of dismissing merely concise comments as “low-effort”. Recall the old joke about the repairman who sent a client an itemized bill for hitting an expensive device once with a hammer, and thereby making it work again: “Hitting it: $1. Knowing where to hit it: $10,000.” Similarly, making a one-sentence comment is easy. Making a comment that accomplishes a great deal with one sentence is a lot more valuable.
While ideas must compete for attention, so too must criticisms. I’ve been led to believe that, somewhere in this thread, there is a good criticism of the top-level post. I spent some time looking for it, and what I found was a whole lot of miscommunication, criticism of things that don’t quite match what was written, and general muddle. You aren’t just asking Benquo to do a lot of work to avoid those miscommunications, you’re also asking the people who read your comments to do a lot of work to determine whether your comment is based on a miscommunication or not.
Setting too high a bar for criticism creates an echo chamber; but setting too low a bar does too, by obscuring the real arguments in a place where people can’t find them without a whole lot of time.
I am not aware of any miscommunication that took place in my direction. Certainly, there has been misunderstanding of what I said. There has also been a lot of explaining, in detail and at length, on my part. But not so much vice-versa. Could you point out what idea of the OP you think I have misunderstood, and what attempts were made by Benquo to clarify it?
I have linked this post to a number of people, off Less Wrong. None of them had any trouble locating and understanding my criticisms; and I did repeat them several times, in several ways. To be honest, your comment perplexes me.
As Eliezer is wont to say, things are often bad because the way in which they are bad is a Nash equilibrium. If I attempt to apply it here, it suggests we need both a great generative and a great evaluative process before the standards problem is solved, at the same time as the actually-having-a-community-who-likes-to-contribute-thoughtful-and-effortful-essays-about-important-topics problem is solved, and only having one solved does not solve the problem.
Oli, Ray, and I will build a better evaluative process for this online community, one that incentivises powerful criticism. But right now this site is trying to build a place where we can be generative (and evaluative) together in a way that’s fun and not aggressive. While we have incentives toward better ideas (weighted karma and curation), it is far from a finished system. We have to build the generative part as well as the evaluative part before the whole system works, and until we get there you’re correct to be worried and to want to enforce the standards yourself with low-effort comments (and I don’t mean to imply that those comments don’t often contain very good ideas implicit within them).
But unfortunately, given that your low-effort criticism feels so aggressive (to me, the mods, and most writers I talk to in the rationality community), this is just going to destroy the first stage before we get to the second. If you write further comments in the pattern I have pointed to above, I will not continue to spend hours trying to pass your ITT and responding; I will just give you warnings and suspensions.
I may write another comment in this thread if there is something simple to clarify or something, but otherwise this is my last comment in this thread.
Without commenting on most of the rest of what you’ve said, I do want to note briefly that—
… spend hours trying to pass your ITT …
—if you are referring to this comment of yours, then I daresay the hours spent did not end up being productive (insofar as the stated goal does not seem to have been reached). I appreciate, I suppose, the motivation behind the effort, but am dubious about the value of such things in general (especially extrapolating from this example).
That aside—I wish you luck, as always, with your efforts, and intend to continue doing what I can to help them succeed.
This is the first point at which I, at least, saw any indication that you thought Ben’s attempt to pass your ITT was anything less than completely accurate. If you thought his summary of your position wasn’t accurate, why didn’t you say so earlier? Your response to the comment of his that you linked gave no indication of that, and thus seemed to give the impression that you thought it was an accurate summary (if there are places where you stated that you thought the summary wasn’t accurate and I simply missed it, feel free to point this out). My understanding is that often, when person A writes up a summary of what they believe to be person B’s position, the purpose is to ensure that the two are on the same page (not in the sense of agreeing, but in the sense that A understands what B is claiming). Thus, I think person A often hopes that person B will either confirm that “yes, that’s a pretty accurate summary of my position,” or “well, parts of that are correct, but it differs from my actual position in ways 1, 2, and 3” or “no, you’ve completely misunderstood what I’m trying to say. Actually, I was trying to say [summary of person B’s position].”
To be perfectly clear, an underlying premise of this is that communication is hard, and thus that two people can be talking past each other even if both are putting in what feels like a normal amount of effort to write clearly and to understand what the other is saying. This implies that if a disagreement persists, one of the first things to try is to slow down for a moment and get clear on what each person is actually saying, which requires putting in more than what feels like a normal amount of effort, because what feels like a normal amount of effort is often not enough to actually facilitate understanding. I’m getting a vibe that you disagree with this line of thought. Is that correct? If so, where exactly do you disagree?
Out of politeness, and courtesy to Ben, I had hoped to avoid a head-on discussion of this topic. However, you make good points; and, in any case, given that you’ve called attention to this point, certainly it would be imprudent not to respond. So here goes, and I hope that Ben does not take this personally; the sentiment expressed in the grandparent still stands.
The truth is, Ben’s comment is an excellent example of why I am skeptical of “interpretive labor”, as well as related concepts like “principle of charity” (which was an unimpeachable idea, but was quickly corrupted, in the rationalist memesphere). When I read Ben’s comment, what I see is the following:
Perfectly clear, straightforward language (quoted from my comments) that unambiguously and effectively conveys my points, “paraphrased” in such a way that the paraphrasing is worse in almost every way than the original: more confused, less accurate, less precise, less specific.
My viewpoints (which, as mentioned, had been expressed quite clearly, and needed no rephrasing at all) distorted into caricatures of themselves.
A strange mix of more-or-less passable (if degraded) portrayals of my points, plus some caricatures / strawmen / rounding-to-the-nearest-cliche, plus some irrelevant additions, that manages to turn the entire summary of my views into a mishmash, of highly dubious value.
Ben indicates that he spent hours reading my commentary, trying to understand my views, and writing the comment in question (and I have no reason to doubt this). But if one may spend hours on such a thing, and end up with such disappointing results, what’s the point?
What’s more, I see no indication in Ben’s post that he had the same estimate of the results of his efforts as I did. If the claim is “doing interpretive labor lets you understand your interlocutor, where a straightforward reading may lead you astray”, but the reality is “doing interpretive labor leaves you with the entirely erroneous impression that you’ve understood your interlocutor when in fact you haven’t, thus wasting your time not just for no benefit, but with a negative effect”, then, again—why do it?
I think person A often hopes that person B will either confirm that “yes, that’s a pretty accurate summary of my position,” or “well, parts of that are correct, but it differs from my actual position in ways 1, 2, and 3” or “no, you’ve completely misunderstood what I’m trying to say. Actually, I was trying to say [summary of person B’s position].”
One may hope for something like this, certainly. But in practice, I find that conversations like this can easily result from that sort of attitude:
Alice: It’s raining outside.
Bob, after thinking really hard: Hmm. What I hear you saying is that there’s some sort of precipitation, possibly coming from the sky but you don’t say that specifically.
Alice: … what? No, it’s… it’s just raining. Regular rain. Like, I literally mean exactly what I said. Right now, it is raining outside.
Bob, frowning: Alice, I really wish you’d express yourself more clearly, but if I’m understanding you correctly, you’re implying that the current weather in this location is uncomfortable to walk around in? And—I’m guessing, now, since you’re not clear on this point, but—also that it’s cloudy, and not sunny?
Alice: …
Bob: …
Alice: Dude. Just… it’s raining. This isn’t hard.
Bob, frowning some more and looking thoughtful: Hmm…
And so on.
So, yes, communication is hard. But it’s not clear at all that this sort of solution really solves anything.
And at the same time, sometimes there are just actual disagreements. I think maybe some folks in this conversation forget that, or don’t like to think about it, or… heck, I don’t know. I’m speculating here. But there’s a remarkable lack of acknowledgment, here, of the fact that sometimes someone is just wrong, and people are disagreeing with that person because he’s wrong, and they’re right.
Note: I will not be engaging in much depth here, but wanted to flag one particularly important point:
Perfectly clear, straightforward language (quoted from my comments) that unambiguously and effectively conveys my points, “paraphrased” in such a way that the paraphrasing is worse in almost every way than the original: more confused, less accurate, less precise, less specific.
No. If Ben did not successfully interpret your language, your language wasn’t clear or unambiguous. The point of the ITT is to verify that any successful communication has taken place at all. If it hasn’t, everything that happens after that is just time-wasting.
I’m afraid I can’t agree with this, at all. But to get into the reasons why, I’d have to speak increasingly discourteously; I do not expect this to be a productive endeavor. Feel free to contact me privately if you are interested in my further views on this, but otherwise, I will also disengage.
I see no indication in Ben’s post that he had the same estimate of the results of his efforts as I did.
This is exactly the problem that the ITT is trying to solve. Ben’s interpretation of what you said is Ben’s interpretation of what you said, whether he posts it or merely thinks it. If he merely thinks it, and then responds to you based on it, then he’ll be responding to a misunderstanding of what you actually said and the conversation won’t be productive. You’ll think he understood you, he’ll perhaps think he understood you, but he won’t have understood you, and the conversation will not go well because of it.
But if he writes it out, then you can see that he didn’t understand you, and help him understand what you actually meant before he tries to criticize something you didn’t even actually say. But this kind of thing only works if both people cooperate a little bit. (Okay, that’s a bit strong, I do think that the kind of thing Ben did has some benefit even though you didn’t respond to it. But a lot of the benefit comes from the back and forth.)
if one may spend hours on such a thing, and end up with such disappointing results, what’s the point?
Again, this is merely evidence that communication is harder than it seems. Ben not writing down his interpretation of you doesn’t magically make him understand you better. All it does is hide the fact that he didn’t understand you, and when that fact is hidden it can cause problems that seem to come from nowhere.
If the claim is “doing interpretive labor lets you understand your interlocutor, where a straightforward reading may lead you astray”
That’s not the claim at all. The claim is that the reading that seems straightforward to you may not be the reading that seems straightforward to Ben. So if Ben relies on what seems to him a “straightforward reading,” he may be relying on a wrong reading of what you said, because you wanted to communicate something different.
but the reality is “doing interpretive labor leaves you with the entirely erroneous impression that you’ve understood your interlocutor when in fact you haven’t, thus wasting your time not just for no benefit, but with a negative effect”, then, again—why do it?
I mean, yes, maybe Ben thought that after writing all that he understood what you were saying. But if he misunderstood, you have the power to correct that. And him putting forward the interpretation he thinks is correct gives you a jumping-off point for helping him to understand what you meant. Without that jumping-off point you would be shooting in the dark, throwing out different ways of rephrasing what you said until one stuck, or worse (as I’ve said several times now) you wouldn’t realize he had misunderstood you at all.
sometimes there are just actual disagreements. I think maybe some folks in this conversation forget that, or don’t like to think about it, or… heck, I don’t know. I’m speculating here. But there’s a remarkable lack of acknowledgment, here, of the fact that sometimes someone is just wrong, and people are disagreeing with that person because he’s wrong, and they’re right.
Yes, but you can’t hash out the substantive disagreements until you’ve sorted out any misunderstandings first. That would be like arguing about the population size of Athens when one of you thinks you’re talking about Athens, Greece and the other thinks you’re talking about Athens, Ohio.
I mean, yes, maybe Ben thought that after writing all that he understood what you were saying. But if he misunderstood, you have the power to correct that.
This, I think, is where we differ (well, this, and the relative value of spending time on “interpretive labor” vs. going ahead with the [what seems to you to be the] straightforward interpretation). I think that time spent thus is generally wasted (and sometimes, or often, even counterproductive), and I think that correcting misunderstandings that persist after such “interpretive labor” is not feasible in practice (at least, not by the direct route)—not to mention that attempting to do this anyway, detracts from the usefulness of the discussion.
By the way, I’m curious why you say that the principle of charity “was an unimpeachable idea, but was quickly corrupted, in the rationalist memesphere.” What do you think was the original, good form of the idea, what is the difference between that and the version the rationalist memesphere has adopted, and what is so bad about the rationalist version?
The original, good form of the principle of charity… well, actually, one or another principle under this name is decades old, or perhaps millennia; but in our circles, we can trace it back to Scott’s first post on Slate Star Codex, which I will quote almost in full:
This blog does not have a subject, but it has an ethos. That ethos might be summed up as: charity over absurdity.
Absurdity is the natural human tendency to dismiss anything you disagree with as so stupid it doesn’t even deserve consideration. In fact, you are virtuous for not considering it, maybe even heroic! You’re refusing to dignify the evil peddlers of bunkum by acknowledging them as legitimate debate partners.
Charity is the ability to override that response. To assume that if you don’t understand how someone could possibly believe something as stupid as they do, that this is more likely a failure of understanding on your part than a failure of reason on theirs.
There are many things charity is not. Charity is not a fuzzy-headed caricature-pomo attempt to say no one can ever be sure they’re right or wrong about anything. Once you understand the reasons a belief is attractive to someone, you can go ahead and reject it as soundly as you want. Nor is it an obligation to spend time researching every crazy belief that might come your way. Time is valuable, and the less of it you waste on intellectual wild goose chases, the better.
It’s more like Chesterton’s Fence. G.K. Chesterton gave the example of a fence in the middle of nowhere. A traveller comes across it, thinks “I can’t think of any reason to have a fence out here, it sure was dumb to build one” and so takes it down. She is then gored by an angry bull who was being kept on the other side of the fence.
Chesterton’s point is that “I can’t think of any reason to have a fence out here” is the worst reason to remove a fence. Someone had a reason to put a fence up here, and if you can’t even imagine what it was, it probably means there’s something you’re missing about the situation and that you’re meddling in things you don’t understand. None of this precludes the traveller who knows that this was historically a cattle farming area but is now abandoned – ie the traveller who understands what’s going on – from taking down the fence.
As with fences, so with arguments. If you have no clue how someone could believe something, and so you decide it’s stupid, you are much like Chesterton’s traveler dismissing the fence (and philosophers, like travelers, are at high risk of stumbling across bull.)
(Bolding mine, italics in original.)
A fair and reasonable principle, I think. We might also extend it—as, indeed, it has often been extended—to the injunction that opponents, and their arguments, ought not be dismissed merely because they appear to be evil. (For example, if it seems like I am suggesting that kittens must be tortured at every opportunity—well, who knows, perhaps I am?—but it is uncharitable to assume this, and to dismiss and denounce me for it, unless I’ve said this explicitly, or you’ve made a reasonable attempt to elicit a clarification, and I’ve confirmed that I am saying just that.)
So that is the unimpeachable idea. And what is the corruption? There are several, actually. Here’s one:
Yeah, sorry for being imprecise in my language. Can you just be charitable and see that my statement make sense if you replace “VNM” by “Dutch book” ?
Here, the suggestion is that being “charitable” requires that I mentally replace one technical term with another, totally different, technical term, turning a statement that is perfectly coherent—not absurd, not insane—but wrong, into a different statement that is correct. Evidently I am expected to do this with every one of my interlocutor’s statements. So, then what? Do I just assume that whenever anyone says anything to me that I think is wrong, what they actually mean is something correct? Is it just impossible for people to be wrong? Can I never be surprised by people’s claims? Is “huh, so what you’re saying is X? really?” totally out of the question? (Never mind the question of how I’m supposed to know what to “correct” my interlocutor’s comments to—it isn’t like there’s always, or even often, just one possible “correct” interpretation!)
And then the other corruption is the other side of the same coin. It’s what happens when people do apply this form of the “principle of charity”, and end up having conversations like some I’ve had recently, where I’ve been on the receiving end of this “charity”: I say something fairly straightforward, and my interlocutor, applying the principle of charity, and believing the literal or straightforward interpretation of my words to be evil (or something), mentally transforms my comments into something different (and, presumably, non-evil), and responds to that. Communication has not taken place; my words have not been heard.
There are other corruptions, too, more subtle ones (examples of which I’d have to take some time to hunt for), but these are more than bad enough!
Thanks for this. Sorry it’s taken me so long to reply here, didn’t mean to let this conversation hang for so long. I completely agree with about 99% of what you wrote here. The 1% I’ll hopefully address in the post I’m working on on this topic.
This substantially raised my estimate of how much harm Said’s been causing from “annoying but mostly harmless” to “actively attacking good discourse for being good”. I’ve switched my moderation policy to reign of terror because on future posts I intend to delete comments by Said that were as annoying as the initial exchange here. Not sure if that extends to other commenters, probably it should but I haven’t had other problems this bad.
This was now a week ago. The mod team discussed this a bit more, and I think it’s the correct call to give Said an official warning (link) for causing a significant number of negative experiences for other authors and commenters.
Said, this moderation call is different than most others, because I think there is a place for the kind of communication culture that you’ve advocated for, but LessWrong specifically is not that place, and it’s important to be clear about what kind of culture we are aiming for. I don’t think ill of you or that you are a bad person. Quite the opposite; as I’ve said above, I deeply appreciate a lot of the things you’ve built and advice you’ve given, and this is why I’ve tried to put in a lot of effort and care with my moderation comments and decisions here. I’m afraid I also think LessWrong will overall achieve its aims better if you stop commenting in (some of) the ways you have so far.
Said, if you receive a second official warning, it will come with a 1-month suspension. This will happen if another writer has an extensive interaction with you primarily based around you asking them to do a lot of interpretive labour and not providing the same in return, as I described in my main comment in this thread.
I am not at all sure it’s always true that posting nothing at all is easier than posting a short, snarky comment. The temptation to do the latter can be almost overwhelming.
And just as ideas are a dime a dozen, so are criticisms. Your arguments against disincentivizing criticism seem to me to have parallel arguments against disincentivizing posting; and your arguments for harsh criticism of top-level posts seem to me to have parallel arguments for harsh criticism of critical comments. (Of course the two aren’t exactly equivalent, not least because top-level posts are more visible than critical comments. Still, I think all the arguments cut both ways.)
I am not at all sure it’s always true that posting nothing at all is easier than posting a short, snarky comment. The temptation to do the latter can be almost overwhelming.
True enough! That temptation falls away, however, if one simply stops reading.
As for the rest—in principle, you’re entirely correct. In practice, I do not think what you say is true. For one thing, as I mentioned, even in the extreme case where literally no one posts anything at all, there nonetheless remain plenty of ideas to examine. But even that aside, the problem is this: once you sweep aside those ideas which are just trolling, or explicitly known to be false, or have the Time Cube nature, you’re still left with a massive pile of what might be good but could easily be (and likely is) total nonsense (as well as other possibilities like “good but ultimately not useful”, “subtly wrong”, etc.).
On the other hand, once you sweep aside those criticisms which are nothing but rudeness or abuse, or obvious trolling, etc., what you’re left with is… not much, actually. There really is a shortage of good criticism. How many of the posts on Less Wrong, within—say—the past six months, have received almost no really useful scrutiny? Quite a lot of them!
Finally, as for this—
… your arguments for harsh criticism of top-level posts seem to me to have parallel arguments for harsh criticism of critical comments
As with so many things: one person’s modus tollens is another’s modus ponens.
I think there’s a problem here where “broad attention” and “harsh attention” are different tools that suggest different thresholds. I might think, for example, that a post announcing open registration for EA Global should be shown not just to everyone visiting the EA Forum, but also everyone subscribed to the EA Forum RSS, without thinking that it is a genuine intellectual accomplishment that will be referred to for years to come. I might also post an idea that I think should be held to high standards but don’t expect to be of wide interest, like my thoughts on how map design influences strategy games and what designs are best suited for a particular game.
My position is that subreddit-like things are the correct way to separate out rules (both because it’s a natural unit of moderation, and it implies rulesets are mutually exclusive, and it makes visual presentation easy) and tag-like things are the correct way to separate out topics (because topics aren’t mutually exclusive and don’t obviously imply different rules). A version of lesswrong that has two subreddits, with names like ‘soft’ and ‘sharp’, seems like it would both offer a region for exploratory efforts and a region for solid accumulation, with users by default looking at both grouped together (but colored differently, perhaps).
One of the reasons why that vision seemed low priority (we might be getting to tags in the next few months, for example) was that, to the best of my knowledge, no poster was clamoring for the sharp subreddit. Most of what I would post to main in previous days would go there, and some of the posts I’m working on now are targeted at essentially that, but it’s much easier to post sharp posts in soft than it is to post soft posts in sharp.
Another reason why that vision seemed low priority was the belief that much of progress happens by transmission of ‘half-baked’ ideas, since the various pieces necessary to get the fully baked idea may reside in different people, or because one half-formed idea kicks off a train of thought in someone else that leads somewhere good. The reason to expose lots of people to a Nietzschean maxim is not because you think it is true and that they should just adopt it, but because you expect them to get something useful out of reacting to it. Or, to take Paul Graham’s post on essays, it devalues attempts to raise questions (even if you don’t have an airtight answer yet) compared to arguments for positions.
Under this model, requiring that ideas survive harsh scrutiny before spreading them widely kills the ability to make this sort of collaborative progress (among people who haven’t already sorted into private groups), and perhaps more importantly gives a misleading idea of how progress is generated. If one is trying to learn to do math like a professional mathematician, it is much more helpful to watch their day-to-day activities and chatter with colleagues than it is to read their published papers, because their published papers sweep much of the real work under the rug. Often one generates a hideous proof and then searches more and finds a prettier proof, but without the hideous proof one might have given up. And one doesn’t just absorb until one is fully capable of producing professional math; one interleaves observation with attempts to do the labor oneself, discovering which bits of it are hard and getting feedback on one’s products.
I might think, for example, that a post announcing open registration for EA Global should be shown not just to everyone visiting the EA Forum, but also everyone subscribed to the EA Forum RSS, without thinking that it is a genuine intellectual accomplishment that will be referred to for years to come.
This seems like an excellent argument for dynamic RSS feeds (which I am almost certain is a point I’ve made to Oliver Habryka in a past conversation). Such a feature, plus a robust tagging system, would solve all problems of the sort you describe here.
I might also post an idea that I think should be held to high standards but don’t expect to be of wide interest, like my thoughts on how map design influences strategy games and what designs are best suited for a particular game.
It’s not clear why a post like this should be on Less Wrong at all, but if it must be, then there seems to be nothing stopping you from prefacing it with “please apply frontpage-level scrutiny to this one, but I don’t actually want this promoted to the frontpage”.
… tag-like things …
I think that a good tagging system should, indeed, be a high priority in features to add to Less Wrong.
… no poster was clamoring for the sharp subreddit …
Well, I was not clamoring for it because I was under the impression that the entire front page of Less Wrong was, as you say, the “sharp subreddit”. That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.
Another reason why that vision seemed low priority was the belief that much of progress happens by transmission of ‘half-baked’ ideas, since the various pieces necessary to get the fully baked idea may reside in different people, or because one half-formed idea kicks off a train of thought in someone else that leads somewhere good.
I should like to see this belief defended. I am skeptical. But in any case, that’s what the personal blogs are for, no?
The reason to expose lots of people to a Nietzschean maxim is not because you think it is true and that they should just adopt it, but because you expect them to get something useful out of reacting to it.
Your meaning here is obscure to me, I’m afraid…
Or, to take Paul Graham’s post on essays, it devalues attempts to raise questions (even if you don’t have an airtight answer yet) compared to arguments for positions.
I consider that to be one of Graham’s weakest pieces of writing. At best, it’s useless rambling. At worst, it’s tantamount to “In Defense of Insight Porn”.
… requiring that ideas survive harsh scrutiny before spreading them widely kills the ability to make this sort of collaborative progress …
But this is precisely why I think it’s tremendously valuable that this harsh scrutiny take place in public. A post is promoted to the front page, and there, it’s scrutinized, and its ideas are discussed, etc.
The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training. They’re not just “anyone with an internet connection”. A professional mathematician’s half-baked idea on a mathematical topic is simply not comparable with a random internet person’s (or even a random “rationalist”’s) half-baked idea on an arbitrary topic.
That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.
How do you expect to solve this problem? The primary thing I’ve heard from you is defense of your style of commenting and its role in the epistemic environment, and regardless of whether or not I agree with it, the problem that I’m trying to solve is getting more good content on LW, because that’s how LW seems useful for solving problems related to advancing human rationality and avoiding human extinction. When we ask people who made top tier posts before why they don’t make them now, or why they put them elsewhere, the answer is resoundingly not “we were put off by mediocre content on LW” but “we were put off by commenters who were mean and made writing for LW unpleasant.”
Keep in mind that the problem here is not “how do we make LW a minimally acceptable place to post things?” but “how do we make posting for LW a better strategy than other competitors?”. I could put effort into editing my post on a Bayesian view of critical rationalism that’s been sitting in my Google Docs drafts for months to finally publish it on LW, or I could be satisfied that it was seen by the primary person I wrote it for, and just let it rot. I could spend some more hours reading a textbook to review for LessWrong, or I could host a dinner party in Berkeley and talk to other rationalists in person.
The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training.
I notice some confusion here; Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training. Rationality, of course, is much more in its infancy than mathematics is, and so we should expect professional mathematicians to be better at mathematics than rationalists are at rationality. It’s also the case that people in mathematics grad school often make bad mathematical arguments that their peers and instructors should attempt to correct, but when they do so it’s typically with a level of professional courtesy that, while blunt, is rarely insulting.
So it seems like the position you’re taking here is either something like “no rationalist has enough reputation that they deserve something like professional courtesy”, “some rationalists do, but it’s perhaps a dozen of them instead of hundreds,” or “concise sarcasm is what professional courtesy looks like,” or something harder for me to construct.
It seems to me that LW sometimes has problems with mediocre commenters who are more prolific than they are insightful, who need to somehow be dissuaded from clogging up the site. But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?
In this very interesting discussion I mostly agree with you and Ben, but one thing in the comment above seems to me importantly wrong in a way that’s relevant:
When we ask people who made top tier posts before why they don’t make them now, or they put them elsewhere, the answer is resoundingly not “we were put off by mediocre content on LW” but “we were put off by commenters who were mean and made writing for LW unpleasant.”
I bet that’s true. But you also need to consider people who never posted to LW at all but, if they had, would have made top-tier posts. Mediocre content is (I think) more likely to account for them than for people who were top-tier posters but then went away.
(Please don’t take me to be saying ”… and therefore we should be rude to people whose postings we think are mediocre, so that they go away and stop putting off the really good people”. I am not at all convinced that that is a good idea.)
I mostly agree, but one part seems a bit off and I feel like I should be on the record about it:
Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training.
It’s evidence that I’m a top example of the particular sort of rationality culture that LW is clustered around, and I think that’s enough to make the argument you’re trying to make, but being good at getting upvotes for writing about rationality is different in some important ways from being rational, in ways not captured by the analogy to math grad school.
I agree the analogy is not perfect, but I do think it’s better than you’re suggesting; in particular, it seems to me like going to math grad school as opposed to doing other things that require high mathematical ability (like quantitative finance, or going to physics grad school, or various styles of programming) is related to “writing about rationality rather than doing other things with rationality.” Like, many of the most rational people I know don’t ever post on LW because that doesn’t connect to their goals; similarly, many of the most mathematically talented people I know didn’t go to math grad school, because they ran the numbers on doing it and they didn’t add up.
But to restate the core point, I was trying to get at the question of “who do you think is worthy of not being sarcastic towards?”, because if the answer is something like “yeah, using sarcasm on the core LW userbase seems proper” this seems highly related to the question of “is this person making LW better or worse?”.
But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?
I’d just like to comment that in my opinion, if we only had one post a month on LW, but it was guaranteed to be good and insightful and useful and relevant to the practice of rationality and not wrong in any way, that would be awesome.
The world is full of content. Attention is what is scarce.
That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.
How do you expect to solve this problem?
By attracting better people, and expecting better of those who are here already. Some will not rise to that expectation. That is to be expected. We will not see further posts from them. That is to be welcomed.
… the problem that I’m trying to solve is getting more good content on LW …
But this can only be a misguided goal. What is “good content”? Why do you want it? That is far too generic a desideratum! If you just want “good content”, and you don’t really care what kind of “good content”, you’ll inevitably suffer value / focus drift; and if you always want more “good content” without specific goals concerning how much and what kind and what is it for, then you’ll… well, you’ll have the sort of problem you’re having now, to be honest.
… because that’s how LW seems useful for solving problems related to advancing human rationality and avoiding human extinction.
I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.
The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training.
I notice some confusion here; Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training.
This is a shocking statement. I had to reread this sentence several times before I could believe that I’d read it right.
… just what, exactly, do you mean by “rationality”, that could make this claim true?!
So it seems like the position you’re taking here is either something like “no rationalist has enough reputation that they deserve something like professional courtesy”, “some rationalists do, but it’s perhaps a dozen of them instead of hundreds,” or “concise sarcasm is what professional courtesy looks like,” or something harder for me to construct.
Both the first and the second are plausible (“reputation” is not really the right concept here, but I’ll let it stand for now). The third is also near enough to truth.
Let’s skip all the borderline examples and go straight to the top. Among “rationalists”, who has the highest reputation? Who is Top Rationalist? Obviously, it’s Eliezer. (Well, some people disagree. Fine. I think it’s Eliezer; I think you’re likely to agree; in any case he makes the top five easily, yes?)
I have great respect for Eliezer. I admire his work. I have said many times that the Sequences are tremendously important, well-written, etc. What’s more, though I’ve only met Eliezer a couple of times, it’s always seemed to me that he’s a decent guy, and I have absolutely nothing against him as a person.
But I’ve also read some of the stuff that Eliezer has posted on Facebook, over the course of the last half-decade or more. Some of it has been well-written and insightful. Some of it has been sheer absurdity, and if he had posted it on Less Wrong, you can bet that I would not spare those posts from the same unsentimental and blunt scrutiny. To do any less would be intellectual dishonesty.
Even the cleverest and best of us can produce nonsense. If no one scrutinizes our output, or if we’re surrounded only by “critics” who avoid anything substantive or harsh, the nonsense will soon dominate. This is worse than not having a Less Wrong at all.
It seems to me that LW sometimes has problems with mediocre commenters who are more prolific than they are insightful, who need to somehow be dissuaded from clogging up the site. But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?
But my suggestion answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?
By attracting better people, and expecting better of those who are here already. Some will not rise to that expectation. That is to be expected. We will not see further posts from them. That is to be welcomed.
I claim that we tried this, from about 2014 to 2016, and that the results were underwhelming. How will you attract better people, and from where? [This is a serious question, instead of just exasperation; we do actually have a budget that we could devote to attracting better people if there were promising approaches.]
I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.
As Benquo suggests, there are additional specifics that are necessary, that are tedious to spell out but I assumed easy to infer.
But my suggestion answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?
Your explanation doesn’t suggest why authors would want to do step #2, or where we would get a class of dedicated curators who would rewrite their posts for them when they don’t do it themselves. [Noting also that it would be helpful if those curators were not just better at composition than the original authors, but also better at conceptual understanding, such that they could distill things effectively instead of merely summarizing and arranging the thoughts of others.]
Perhaps another angle on the problem: there is a benefit to having one conversational locus. Putting something on the frontpage of LessWrong makes it more likely that people who check LessWrong have read it, and moves us closer to the ‘having one conversational locus’ world. It seems to me like you’re making a claim of the form “the only things worth having in that primary conversational locus are the sorts of things where the author is fine handling my sarcastic criticisms”, and I disagree with that, because of the aforementioned models of how progress works.
I claim that we tried this, from about 2014 to 2016, and that the results were underwhelming.
Uh, how’s that? Anyway, even if we grant that you tried this, well… no offense meant, but maybe you tried it the wrong way? “We tried doing something like this, once, and it didn’t work out, therefore it’s impossible or at least not worth trying” is hardly what you’d call “solid logic”.
How will you attract better people, and from where? [This is a serious question, instead of just exasperation; we do actually have a budget that we could devote to attracting better people if there were promising approaches.]
This is, indeed, a serious question, and one well worth considering in detail and at length, not just as a tangent to a tangent, deep in one subthread of an unrelated comments section.
But here’s one answer, given with the understanding that this is a brief sketch, and not the whole answer:
Prestige and value attract contributors. Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated, where may be found not a graphomanic torrent of “content” but a scant few gems of true insight and well-tested intellectual innovations, and then “my essay on <topic> was posted on Less Wrong, and even they found no fault with it” becomes a point of pride, and “curated on Less Wrong” becomes a mark of distinction. When you can say to someone, “I think your writing on <topic> is good enough for Less Wrong” and have that be a credible and unusual compliment, you will easily be able to find contributors. When you’ve created a culture where you can post on Less Wrong and there, get the best, most insightful, most no-nonsense, cuts-to-the-heart-of-the-matter criticism, people who are truly interested in perfecting their ideas will want to post here, and to submit to scrutiny.
I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.
As Benquo suggests, there are additional specifics that are necessary, that are tedious to spell out but I assumed easy to infer.
Not so easy, I regret to say…
But my suggestion answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?
Your explanation doesn’t suggest why authors would want to do step #2, or where we would get a class of dedicated curators who would rewrite their posts for them when they don’t do it themselves. [Noting also that it would be helpful if those curators were not just better at composition than the original authors, but also better at conceptual understanding, such that they could distill things effectively instead of merely summarizing and arranging the thoughts of others.]
See above for why authors would want to do this. As for “a class of dedicated curators who would rewrite their posts”, I never suggested anything remotely like this, and would never suggest it.
Perhaps another angle on the problem: there is a benefit to having one conversational locus. Putting something on the frontpage of LessWrong makes it more likely that people who check LessWrong have read it, and moves us closer to the ‘having one conversational locus’ world. It seems to me like you’re making a claim of the form “the only things worth having in that primary conversational locus are the sorts of things where the author is fine handling my sarcastic criticisms”, and I disagree with that, because of the aforementioned models of how progress works.
Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well. This is definitely a “there is a technical solution which cuts right through the Gordian knot of social problems” case.
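To make the proposal concrete, here is a minimal sketch of what such tagging-plus-dynamic-listing infrastructure might look like. This is purely illustrative: all names, URLs, and data structures here are invented for the example and do not reflect LessWrong’s actual codebase or API.

```python
# Illustrative sketch: generate a "dynamic RSS feed" by filtering a
# collection of tagged posts down to those matching a reader-chosen
# tag query, then emitting minimal RSS 2.0 XML for the matches.
from dataclasses import dataclass, field
from xml.sax.saxutils import escape


@dataclass
class Post:
    title: str
    url: str
    tags: set = field(default_factory=set)


def dynamic_feed(posts, required_tags):
    """Return an RSS document containing only posts that carry
    every tag in required_tags."""
    items = []
    for post in posts:
        if required_tags <= post.tags:  # subset test: post has all required tags
            items.append(
                f"<item><title>{escape(post.title)}</title>"
                f"<link>{escape(post.url)}</link></item>"
            )
    return '<rss version="2.0"><channel>' + "".join(items) + "</channel></rss>"


# Hypothetical posts for demonstration.
posts = [
    Post("On Replication", "https://example.com/1", {"science", "meta"}),
    Post("Feed Design", "https://example.com/2", {"meta", "site"}),
]

print(dynamic_feed(posts, {"meta", "site"}))
```

The point of the sketch is that once posts carry tags, any number of per-reader feeds can be computed on demand from one underlying corpus, rather than being fixed editorially in advance.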
Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated, where may be found not a graphomanic torrent of “content” but a scant few gems of true insight and well-tested intellectual innovations, and then “my essay on <topic> was posted on Less Wrong, and even they found no fault with it” becomes a point of pride, and “curated on Less Wrong” becomes a mark of distinction.
Where would you point to as a previous example of success in this regard? I don’t think the golden age of Less Wrong counts, as it seems to me the primary reason LessWrong was ever known as a place with high standards is because Eliezer’s writing and thinking were exceptional enough to draw together a group of people who found it interesting, and that group was a pretty high-caliber group. But it’s not like they came here because of the insightful comments; they came here for the posts, and read the comments because they happened to be insightful (and interested in a particular mode of communication over point-seeking status games). When the same commenters were around, but the good post-writers disappeared or slowed down, the site slowly withered as the good commenters stopped checking because there weren’t any good posts.
There have been a few examples of people coming to LessWrong with an idea to sell, essentially, which I think is the primary group that you would attract by having a reputation as a forum in which only good ideas survive. I don’t recall many of them becoming solid contributors, but note that this is possibly a memory selection effect; when I think of “someone attracted to LW because of the prestige of us agreeing with them” I think of many people whose one-track focuses were not impressive, though perhaps someone I respect originally came to LW for those reasons and then had other interests as well.
With regards to the “solid logic” comment, do give us some credit for having thought through this issue and collected what data we can. From my point of view, having tried to sample the community’s impressions, the only people who have said the equivalent of “ah, criticism will make the site better, even if it’s annoying” are people who are the obvious suspects when post writers say the equivalent of “yeah, I stopped posting on Less Wrong because the comments were annoyingly nitpicky rather than focusing on the core of the point.”
I do want to be clear that ‘high-standards’ and ‘annoying’ are different dimensions, here, and we seem to be in a frustrating equilibrium where you see some features of your comments that make them annoying as actually good and thus perhaps something to optimize for (?!?), as opposed to a regrettable problem that is not worth the cost to fix given budgetary constraints. Perhaps an example of this is your comment in a parallel thread, where you suggest pedantically interpreting the word “impossible” makes conversations more smooth than doing interpretative labor to repair small errors in a transparent way. By the way I use the word “smooth”, things point in the opposite direction. [And this seems connected to a distinction between double crux and Stalnaker-style conversations, which is a post on my todo list that also hasn’t been written yet.]
Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well.
Dynamic RSS feeds are the opposite of a solution to this problem; the mechanism that constructs a single conversational locus is broadcast, where everyone is watching the same 9 o’clock news, as opposed to decentralized communication, where different people are reading different blogs and can’t refer to particular bits of analysis and assume that others have come across it before. Contrast the experience of someone trying to discuss the previous night’s Monday Night Football game with another football fan and two gamers trying to discuss their previous night’s video gaming with each other; even if they happened to play the same game, they almost certainly weren’t in the same match.
The thing that tagging helps you do is say “this post is more interesting to people who care about life extension research than people who don’t”, but that means you don’t show it to people who don’t care about life extension, and so when someone chats with someone else about Sarah Constantin’s analysis of a particular line of research, the other person is more likely to say “huh?” than if they sometimes get writings about a topic that doesn’t natively interest them through a curated feed.
Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well.
Dynamic RSS feeds are the opposite of a solution to this problem; the mechanism that constructs a single conversational locus is broadcast, where everyone is watching the same 9 o’clock news, as opposed to decentralized communication, where different people are reading different blogs and can’t refer to particular bits of analysis and assume that others have come across it before. Contrast the experience of someone trying to discuss the previous night’s Monday Night Football game with another football fan and two gamers trying to discuss their previous night’s video gaming with each other; even if they happened to play the same game, they almost certainly weren’t in the same match.
The thing that tagging helps you do is say “this post is more interesting to people who care about life extension research than people who don’t”, but that means you don’t show it to people who don’t care about life extension, and so when someone chats with someone else about Sarah Constantin’s analysis of a particular line of research, the other person is more likely to say “huh?” than if they sometimes get writings about a topic that doesn’t natively interest them through a curated feed.
We might not be talking about the same thing (in technical/implementation terms), as what you say does not apply to what I had in mind. (It’s awkward to hash this out in via comments like this; I’d be happy to discuss this in detail in a real-time chat medium like IRC.)
… we seem to be in a frustrating equilibrium where you see some features of your comments that make them annoying as actually good and thus perhaps something to optimize for (?!?), as opposed to a regrettable problem that is not worth the cost to fix given budgetary constraints. Perhaps an example of this is your comment in a parallel thread, where you suggest pedantically interpreting the word “impossible” makes conversations more smooth than doing interpretative labor to repair small errors in a transparent way.
“Pedantically” is a caricature, I think; I would say “straightforwardly”—but then, we have a live example of what we’re referring to, so terminology is not crucial. That aside, I stand by this point, and reaffirm it.
I am deeply skeptical of “interpretive labor”, at least as you seem to use the term.[1] Most examples that I can recall having seen of it, around here, seem to me to have affected the conversation negatively. (For instance, your example elsethread is exactly what I’d prefer not to see from my interlocutors.)
In particular, this—
repair small errors in a transparent way
—doesn’t actually happen, as far as I can tell. What happens instead is that errors are compounded and complicated, while simultaneously being swept under the rug. It seems to me that this sort of “interpretive labor” does much to confuse and muddle discussions on Less Wrong, while effecting the appearance of “smooth” and productive communication.
By the way I use the word “smooth”, things point in the opposite direction.
I don’t know… I think it’s at least possible that we’re using the word in basically the same way, but disagree on what effects various behaviors have. But perhaps this point is worth discussing on its own (if, perhaps, not in this thread): what is this “smoothness” property of discussions, and why is it desirable? (Or is it?)
[And this seems connected to a distinction between double crux and Stalnaker-style conversations, which is a post on my todo list that also hasn’t been written yet.]
Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated
They should ban you for how you’re interacting right now. I don’t know why they’re putting up with your dodging the issue, but you either don’t have the ability to figure out when someone is correctly calling you out, or aren’t playing nice. Your brand of bullshit is a major reason I’ve avoided Less Wrong, and I want it gone. I want people to critique my ideas ruthlessly and not critique me as a person with Deservingness at all. If you think being an asshole is normal, go away. You don’t have to hold back on what you think the problems are, but I sure as hell expect you to say what you think the problems are without implying I said them wrong.
Lahwran, I downvoted your comment because I think it should be costly to write something that lowers the tone like this, but I appreciate you saying that this is the reason you left LW, and you might be right that I’m being too civil relative to the effects Said is directly having.
I’ve put in a bunch of effort to trade models of good discourse, but this conversation is heading towards its close. As I’ve said, if Said writes these sorts of comments in future, I’ll be hitting fairly hard with mod tools, regardless of his intentions. Notice that this brand of bullsh*t is otherwise largely gone from LW since the re-launch in March—Said has been an especially competent and productive individual who has this style of online interaction, so I’ve not wanted to dissuade him as strongly as the rest who’ve left, but my patience has since worn thin on this front, and I won’t be putting up with it in future.
But this can only be a misguided goal. What is “good content”? Why do you want it? That is far too generic a desideratum! If you just want “good content”, and you don’t really care what kind of “good content”, you’ll inevitably suffer value / focus drift; and if you always want more “good content” without specific goals concerning how much and what kind and what it is for, then you’ll… well, you’ll have the sort of problem you’re having now, to be honest.
It seems like, having interpreted Vaniver as making an obvious error, you decided to argue at length against it instead of considering that he might have meant something else. This is tedious and is punishing Vaniver for not tediously overspecifying everything.
Suppose that one Alice writes something which I, on the straightforward reading, consider to be definitely and clearly wrong. I read it and imagine two possibilities:
(A) Alice meant exactly what it seems like she wrote.
Presumably, then, Alice disagrees with my judgment of what she wrote as being definitely and clearly wrong. Well, there is nothing unusual in this; I have often encountered cases where people hold views which I consider to be definitely and clearly wrong, and vice-versa. (Surely you can say the same?)
In this case, what else is there to do but to respond to what Alice wrote?
(B) Alice meant something other than what it seems like she wrote.
What might that be? Who knows. I could try to guess what Alice meant. However, that is impossible. So I won’t try. If Alice didn’t mean the thing that it seems, on a straightforward reading, like she meant, then what she actually meant could be anything at all.
But suppose I go ahead and try anyway, I come up with some possible thing that Alice could’ve meant. Do I have any reason to conclude that this is the only possibility for what Alice could’ve meant? I do not. I might be able to think longer, and come up with other possibilities. None of them would offer me any reason to assume that that one is what Alice meant.
And suppose I do pick out (via some mysterious and, no doubt, dubious method) some particular alternate meaning for Alice’s words. Well, and is that correct, then, or wrong? If it’s wrong, then I will argue the point, presumably. But then I will be in the strange position of saying something like this:
“Alice, you wrote X. However, X is obviously wrong. So you couldn’t have meant that. You instead meant Y, probably. But that’s still wrong, and here’s why.”
Have I any reason at all to expect that Alice won’t come back with “Actually, no, I did mean X; why do you say it’s obviously wrong?!”, or “Actually, no, I meant Z!”? None at all. And I’ll have wasted my time, and for what?
This sort of thing is almost always a pointless and terrible way of carrying on a discussion, which is why I don’t and won’t do it.
Response A: “I often successfully guess what people meant; it being impossible comes as a surprise to me. Are you claiming this has never happened to you?”
And response B:
Ah, Said likely meant that it is impossible to reliably infer Alice’s meaning, rather than impossible to occasionally do so. But is a strategy where one never infers truly superior to a strategy where one infers and demonstrates that one is doing so, such that a flat contradiction can be easily corrected?
[Incidentally, I believe this is the disjunction Benquo is pointing at; you seem to imply that either you interpret Alice literally, or you misinterpret Alice, which excludes the case where you correctly interpret Alice.]
[EDIT: I made a mistake in this comment, where response B was originally [what someone would say after doing that substitution], and then I said “wait, it’s not obvious where that came from, I should put the thoughts that would generate that response” and didn’t apply the same mental movement to say “wait, it’s not obvious that response A is a flat response and response B is a thought process that would generate a response, which are different types, I should call that out.”]
Yes, exactly; response A would be the more reasonable one, and more conducive to a smooth continuation of the discussion. So, responding to that one:
“Impossible” in a social context means “basically never happens, and if it does happen then it is probably by accident” (rather than “the laws of physics forbid it!”). Also, it is, of course, possible to guess what someone means by sheer dumb luck—picking an interpretation at random out of some pool of possibilities, no matter how unlikely-seeming, and managing by chance to be right.
But, I can’t remember a time when I’ve read what someone said, rejected the obvious (but obviously wrong) interpretation, tried to guess what they actually meant, and succeeded. When I’ve tried, the actual thing that (as it turned out) they meant was always something which I could never have even imagined as a hypothesis, much less picked out as the likeliest meaning. (And, conversely, when someone else has tried to interpret my comments in symmetric situations, the result has been the same.)
In my experience, this is true: for all practical purposes, either you understand what someone meant, or it’s impossible to guess what they could’ve meant instead.
[Incidentally, I believe this is the disjunction Benquo is pointing at; you seem to imply that either you interpret Alice literally, or you misinterpret Alice, which excludes the case where you correctly interpret Alice.]
This is not what I’m implying, because it’s not what I’m saying and what I’m saying has a straightforward meaning that isn’t this. See this comment. “Literally” is a strawman (not an intentional one, of course, I’m assuming); it can seem like Alice means something, without that necessarily being anything like the “literal reading” of her words (which in any case is a red herring); “straightforward” is what I said, remember.
Edit: I don’t know where all this downvoting is coming from; why is the parent at −2? I did not downvote it, in any case…
A couple more things I think your disjunction is missing.
1) If you don’t know what Alice means, instead of guessing, you can ask.
(Alternatively, you can offer a brief guess and give them the opportunity to clarify. This has the benefit of training your ability to infer more about what people mean.) You can do all this without making any arguments or judgments until you actually know what Alice meant.
2) Your phrasing implies that if Alice writes something that “seems to straightforwardly mean something, and Alice meant something else”, that the issue is that Alice failed to write adequately. But it’s also possible for the failure to be on the part of your comprehension rather than Alice’s writing. (This might be because Alice is writing for an audience of people with more context/background than you, or different life experiences than you)
Re: asking: well, sure. But what level of confidence in having understood what someone said should prompt asking them for clarification?
If the answer is “anything less than 100%”, then you just never respond directly to anything anyone writes, without first going through an elaborate dance of “before I respond or comment, let me verify that this is what you meant: [insert re-stating of the entirety of the post or comment you’re responding to]”; then, after they say “yes, that is what I meant”, you respond; then, before they respond to you, they first go “now, let me make sure I understand your response: [insert re-stating of the entirety of your response]” … and so on.
Obviously, this is no way to have a discussion.
But if there is some threshold of confidence in having understood that licenses you to go ahead and respond, without first asking whether your interlocutor meant the thing that it seems like they meant, then… well, you’re going to have situations where it turns out that actually, they meant something else.
Unless, of course, what you’re proposing is a policy of always asking for clarification if you disagree, or think that your interlocutor is mistaken, etc.? But then what you’re doing is imposing a greater cost on dissenting responses than assenting ones. Is this really what you want?
Re: did Alice fail to communicate or did I fail to comprehend: well, the question of “who is responsible for successful communication—author or audience?” is hardly a new one. Certainly any answer other than “it is, to some extent, a collaborative effort” is clearly wrong.
The question is, just how much is “some extent”? It is, of course, quite possible to be so pedantic, so literal-minded, so all-around impenetrable, that even the most heroically patient and singularly clear of authors cannot get through to you. On the other hand, it’s also possible to write sloppily, or to just plain have bad ideas. (If I write something that is wrong, and you express your disagreement, and I say “no, you’ve misunderstood, actually I’m right”, is it fair to say that you’ve failed in your duty as a conscientious reader?)
In any case, the matter seems somewhat academic. As far as I can tell, in the case at hand, I have not misunderstood anything that Benquo said. (Certainly I’ve seen no one posting any corrections to my reading of the OP. Mere claims that I’ve misunderstood, with no elaboration, are hardly convincing!)
what level of confidence in having understood what someone said should prompt asking them for clarification?
This is an isolated demand for rigor. Obviously there’s no precise level of confidence, in percentages, that should prompt asking clarification. As with many things, context matters. Sometimes, what indicates a need to ask for clarification is that a disagreement persists for longer than it seems like it ought to (indicating that there might be something deeper at work, like a misunderstanding). Sometimes, what indicates this is your interlocutor saying something that seems absurd or obviously mistaken. The second seems relevant in the immediate instance, given that what prompted this line of discussion was your taking Vaniver at his word when he said something that seemed, to you, obviously mistaken.
Note that I say “obviously mistaken.” If your interlocutor says something that seems mistaken, that’s one thing, and as you say, it shouldn’t always prompt a request for clarification; sometimes there’s just a simple disagreement in play. But if your interlocutor says something that seems obviously wrong, and at the same time they seem like a generally smart person who isn’t wont to say obviously wrong things, that may indicate that there is something they see that you don’t, in which case it would be useful to ask for clarification.
In this particular case, it seems to me that “good content” could be vacuous, or it could be a stand-in for something like “content that meets some standards which I vaguely have in mind but don’t feel the desire or need to specify at the moment.” It looks like Vaniver, hoping that you would realize that the first usage is so obviously dumb that he wouldn’t be intending it, used it to mean the second usage in order to save some typing time or brain cycles or something (I don’t claim to know what particular standards he has in mind, but clearly standards that would be useful for “solving problems related to advancing human rationality and avoiding human extinction”). You interpreted it as the first anyway, even though it seemed to you quite obviously a bad idea to optimize for “good content” in that vacuous sense. Instead, the fact that it seemed not only wrong, but obviously wrong, should have alerted you to the fact that Vaniver perhaps meant something different, at which point you could have asked for clarification (“What do you have in mind when you say ‘good content’? That seems to me obviously too vacuous to be a good idea. Perhaps you have some more concrete standards in mind and simply decided not to spell them out?”)
As far as I can tell, in the case at hand, I have not misunderstood anything that Benquo said.
“The case at hand” was your misunderstanding of Vaniver, not Benquo.
Hm. After writing this comment I notice I did something of the same thing to you. I interpreted your request for a numerical threshold literally, even though I considered it not only mistaken, but obviously so. Thus I retract my claim (at least in its strong form “any time your interlocutor says something that seems obviously mistaken, ask for clarification”). I continue to think that asking for clarification is often useful, but I think that, as with many things, there are few or no hard-and-fast rules for when to do so; rather, there are messy heuristics. If your interlocutor says something obviously mistaken, that’s sometimes an indication that you should ask for clarification. Sometimes it’s not. I think it probably would have been prudent for you to either ask for clarification from Vaniver, or assume he didn’t mean the vacuous interpretation of “good content.” I think I probably don’t need to ask for clarification about what you meant, it seemed pretty obvious you meant it literally. I realize this seems like a rather self-serving set of judgements. Perhaps it is. I’m not really sure what to do about that right now, or whether and how to revise it.
EDIT: if it turns out you didn’t mean it literally, then obviously I will know how I should revise my judgements (namely I should revise my judgement that I didn’t need to ask you for clarification).
Ikaxas, I would be strong-upvoting your comments here except that I’m guessing engaging further here does more harm than good. I’d like to encourage you to write a separate post instead, perhaps reusing large portions of your comments. It seems like you have a bunch of valuable things to say about how to use the interpretive labor concept properly in discourse.
Well, the second part of your comment (after the rule) pre-empts much of what I was going to say, so—yes, indeed. Other than that:
I think it probably would have been prudent for you to either ask for clarification from Vaniver, or assume he didn’t mean the vacuous interpretation of “good content.” I think I probably don’t need to ask for clarification about what you meant, it seemed pretty obvious you meant it literally. I realize this seems like a rather self-serving set of judgements. Perhaps it is. I’m not really sure what to do about that right now, or whether and how to revise it.
Yes, I think this seems like a rather self-serving set of judgments.
As it happens, I didn’t mean my question literally, in the sense that it was a rhetorical question. My point, in fact, was almost precisely what you responded, namely: clearly the threshold is not 100%, and also clearly, it’s going to depend on context… but that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.
Other points:
But if your interlocutor says something that seems obviously wrong, and at the same time they seem like a generally smart person who isn’t wont to say obviously wrong things …
I have never met such a person, despite being surrounded, in my social environment, by people at least as intelligent as I am, and often more so. In my experience, everyone says obviously wrong things sometimes (and, conversely, I sometimes say things that seem obviously wrong to others). If this never happens to you, then this might be evidence of some troubling properties of your social circles.
In this particular case, it seems to me that “good content” could be vacuous, or it could be a stand-in for something like “content that meets some standards which I vaguely have in mind but don’t feel the desire or need to specify at the moment.”
That’s still vacuous, though. If that’s what it’s a stand-in for, then I stand by my comments.
Instead, the fact that it seemed not only wrong, but obviously wrong, should have alerted you to the fact that Vaniver perhaps meant something different, at which point you could have asked for clarification (“What do you have in mind when you say ‘good content’? That seems to me obviously too vacuous to be a good idea. Perhaps you have some more concrete standards in mind and simply decided not to spell them out?”)
Indeed, I could have. But consider these two scenarios:
Scenario 1:
Alice: [makes some statement]
Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?
Scenario 2:
Alice: [makes some statement]
Bob: That’s obviously wrong, because [reasons].
Alice: But of course [straightforward reading] isn’t actually what I meant, as that would indeed be obviously wrong. Instead, I meant [other thing].
You seem to be saying that Scenario 1 is obviously (!!) superior to Scenario 2. But I disagree! I think Scenario 2 is better.
… now, does this claim of mine seem obviously wrong to you? Is it immediately clear why I say this? (If I hadn’t asked this, would you have asked for clarification?) I hope you don’t mind if I defer the rest of my point until after your response to this bit, as I think it’s an interesting test case. (If you don’t want to guess, fair enough; let me know, and I’ll just make the rest of my point.)
I’ve been mulling over where I went wrong here, and I think I’ve got it.
that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.
I think this is where I misinterpreted you. I think I thought you were trying to claim that unless there’s some threshold or some clear rule for deciding when to ask for clarification, it’s not worth implementing “ask for clarification if you’re unsure” as a conversational norm at all, which is why I said it was an isolated demand for rigor. But if all you were trying to say was what you said in the quoted bit, that’s not an isolated demand for rigor. I totally agree that there will be false positives, in the sense that misunderstandings can persist for a while without anyone noticing or thinking to ask for clarification, without this being anyone’s fault. However, I also think that if there is a misunderstanding, this will become apparent at some point if the conversation goes on long enough, and whenever that is, it’s worth stopping to have one or both parties do something in the vicinity of trying to pass the other’s ITT, to see where the confusion is.
I think another part of the problem here is that part of what I was trying to argue was that in this case, of your (mis?)understanding of Vaniver, it should have been apparent that you needed to ask for clarification, but I’m much less confident of this now. My arguing that, if a discussion goes on long enough, misunderstandings will reveal themselves, isn’t enough to establish that in this case you should immediately have recognized that you had misunderstood (if in fact you misunderstood, which may not be the case if you still object to Vaniver’s point as I reframed it). My model allows that misunderstandings can persist for quite a while unnoticed, so it doesn’t really entail that you ought to have asked for clarification here, in this very instance.
Anyway, as Ben suggested I’m working on a post laying out my views on interpretive labor, ITTs, etc. in more detail, so I’ll say more there. (Relatedly, is there a way to create a top-level post from greaterwrong? I’ve been looking for a while and haven’t been able to find it if there is.)
consider these two scenarios
I agree the model I’ve been laying out here would suggest that the first scenario is better, but I find myself unsure which I think is better all things considered. I certainly don’t think scenario 1 is obviously better, despite the fact that this is probably at least a little inconsistent with my previous comments. My rough guess as to where you’re going with this is something like “scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way).”
If this is where you are going, I have a couple disagreements with it, but I’ll wait until you’ve explained the rest of your point to state them in case I’ve guessed wrong (which I’d guess is fairly likely in this case).
My rough guess as to where you’re going with this is something like “scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way).”
Basically, yes.
The problem, really, is—what? Not misunderstanding per se; that is solvable. The problem is the double illusion of transparency; when I think I’ve understood you (that is, I think that my interpretation of your words, call it X, matches your intent, which I assume is also X), and you think I’ve understood you (that is, you think that my interpretation of your words is Y, which matches what you know to be your intent, i.e. also Y); but actually your intent was Y and my interpretation is X, and neither of us is aware of this composite fact.
How to avoid this? Well, actually this might be one of two questions: first, how to guarantee that you avoid it? second, how to mostly guarantee that you avoid it? (It is easy to see that relaxing the requirement potentially yields gains in efficiency, which is why we are interested in the latter question also.)
Scenario 1—essentially, verifying your interpretation explicitly, every time any new ideas are exchanged—is one way of guaranteeing (to within some epsilon) the avoidance of double illusion of transparency. Unfortunately, it’s extremely inefficient. It gets tedious very quickly; frustration ensues. This approach cannot be maintained. It is not a solution, inasmuch as part of what makes a solution workable is that it must be actually practical to apply it.
By the way—just why is scenario 1 so very, very inefficient? Is it only because of the overhead of verification messages (a la the SYN-ACK of TCP)? That is a big part of the problem, but not the only problem. Consider this extended version:
Scenario 1a:
Alice: [makes some statement]
Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?
Alice: Wait, what? Why would that be obviously wrong?
Bob: Well, because [reasons], of course.
So now we’ve devolved into scenario 2, but having wasted two messages. And gained… what? Nothing.
Scenario 2—essentially, never explicitly verifying anything, responding to your interpretation of your interlocutor’s comments, and trusting that any misinterpretation will be inferred from your response and corrected—is one way of mostly guaranteeing the avoidance of double illusion of transparency. It is not foolproof, of course, but it is very efficient.
Scenarios 1 and 2 aren’t our only options. There is also…
Scenario 3:
Alice: [makes some statement]
Bob: Assuming you meant [straightforward reading], that is obviously wrong, because [reasons].
Note that we are now guaranteed (and not just mostly guaranteed) to avoid the double illusion of transparency. If Bob misinterpreted Alice, she can correct him. If Bob interpreted correctly, Alice can immediately respond to Bob’s criticism.
There is still overhead; Bob has to spend effort on explaining his interpretation of Alice. But it is considerably less overhead than scenario 1, and it is the minimum amount of overhead that still guarantees avoidance of the double illusion of transparency.
Personally, I favor the scenario 3 approach in cases of only moderate confidence that I’ve correctly understood my interlocutor, and the scenario 2 approach in cases of high confidence that I’ve correctly understood. (In cases of unusually low confidence, one simply asks for clarification, without necessarily putting forth a hypothesized interpretation.)
Scenarios 2 and 3 are undermined, however—their effectiveness and efficiency dramatically lowered—if people take offense at being misinterpreted, and demand that their critics achieve certainty of having correctly understood them, before writing any criticism. If people take any mis-aimed criticism as a personal attack, or lack of “interpretive labor” (in the form of the verification step as a prerequisite to criticism) as a sign of disrespect, then, obviously, scenarios 2 and 3 cannot work.
This constitutes a massive sacrifice of efficiency of communication, and thereby (because the burden of that inefficiency is borne by critics) disincentivizes lively debate, correction of flaws, and the exchange of ideas. What is gained, for that hefty price, is nothing.
After quite a while thinking about it I’m still not sure I have an adequate response to this comment; I do take your points, they’re quite good. I’ll do my best to respond to this in the post I’m writing on this topic. Perhaps when I post it we can continue the discussion there if you feel it doesn’t adequately address your points.
Relatedly, is there a way to create a top-level post from greaterwrong? I’ve been looking for a while and haven’t been able to find it if there is.
Indeed there is. You go to the All view or the Meta view, and click the green “+ New post” link at the upper-right, just below the tab bar. (The new-post link currently doesn’t display when viewing your own user page, which is an oversight and should be fixed soon.)
That’s not a spurious binary, and in any case it doesn’t make the disjunction wrong. Observe:
Let P = “Alice meant exactly what it seems like she wrote.”
¬P = “It is not the case that Alice meant exactly what it seems like she wrote.”
And we know that P ∨ ¬P is true for all P.
Is “It is not the case that Alice meant exactly what it seems like she wrote” the same as “Alice meant something other than what it seems like she wrote”?
No, not quite. Other possibilities include things like “Alice didn’t mean anything at all, and was making a nonsense comment, as a sort of performance art”, etc. But I think we can discount those.
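For what it’s worth, the disjunction above is just the law of the excluded middle, which can be stated formally (a minimal illustration; the variable name `P` is mine, standing in for the proposition from the example):

```lean
-- For any proposition P, classical logic gives P ∨ ¬P.
-- Here P stands for “Alice meant exactly what it seems like she wrote.”
example (P : Prop) : P ∨ ¬P := Classical.em P
```

The possibilities like “Alice meant nothing at all” live inside ¬P, which is why ¬P is broader than “Alice meant something other than what it seems like she wrote.”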
One thing I don’t like about this proposal is—and you’re hearing me right—it doesn’t positively enough incentivise criticism.
In particular, there needs to be a place where, when an idea is put to the test (to peer review), if someone writes a knock-down critique, that person is celebrated and the accomplishment incorporated into their reputation in the community.
We want to have a place that both strongly incentivises good ideas, and strongly incentivises checking them—not disanalogous to how in Inadequate Equilibria the Visitor says on his planet the ‘replicators’ are given major prestige.
Because I want criticisms that look like this, not like this.
The first link is Zvi’s thoughtful and well-written critique of a point made in Eliezer’s “No Fire Alarm for AGI” post. This is good criticism that puts lots of effort into being clear to the reader, and is very well written. That’s why we curated it and it got loads of karma.
The comments of yours that I don’t like are things I would not want to find on almost any site successfully pursuing intellectual progress. They’re not nice comments to receive, but they’re also not very good criticism. Again, this isn’t all of your comments, but it often feels to me like you’re not engaging with the post very well (can’t pass the author’s ITT), or, if your criticisms are true, they’re attacking side notes (like a step that wasn’t rigorous, even though it was an unimportant step that wouldn’t be hard to make rigorous). If you look at the places of great intellectual progress in groups, you don’t see that they reduced the effort barrier to criticism to the minimum; rather, they increased the incentive for important criticism, the kind that knocked down the core of an idea.
If your criticisms were written in a way that didn’t feel like it was rude / putting a burden on the author that you’re not willing to share, then that would be fine. If they were important (e.g. you were knocking down core ideas in the sequences, big mistakes everyone was making, or even just the central point of the post) then I would accept more blunt/rudeness. But when it’s neither, then it’s not good enough.
For what it’s worth, I bet the intention was as follows: Ben had mentioned that he was going to ration his time in this thread for fear of rabbit-holes, he thought you might prefer to have some idea how much more Said-Ben discussion was possible, and so (the amount of time he’d spent not being immediately visible) he added that note. So, exactly the opposite of insulting intent.
If Said is insulted by your clarity about how much time you’re investing in interpretive labor, then I think this is evidence that Said’s sense of offense is not value-aligned with good discourse. If someone put a note like that on a response to a comment by me, I’d feel like they were making an effort to be metacooperative. 30 minutes is a long time for a single comment!
If your criticisms were written in a way that didn’t feel like it was rude / putting a burden on the author that you’re not willing to share, then that would be fine. If they were important (e.g. you were knocking down core ideas in the sequences, big mistakes everyone was making, or even just the central point of the post) then I would accept more blunt/rudeness. But when it’s neither, then it’s not good enough.
As I’ve commented, the point in that comment went to the heart of my objection not only to this post, but to a great many posts that are similar to this one along a critically important axis. I continue to be dismayed by the casualness with which this concern has been dismissed, given that it seems to me to be of the greatest importance to the epistemic health of Less Wrong.
@Said. I’ve been thinking a bit about this comment thread, going back to read some comments of yours about moderation, and trying to pass your general ITT regarding commenting norms. Here’s my current best guess about what seems important to you in this domain:
Our global intellectual community suffers from low standards
Many parts of science are seeing a catastrophic replication crisis—even neuroscience.
Facebook and Twitter are shining examples of what being overwhelmed with low-quality content looks like.
Our specific intellectual community (LessWrong) suffers from low standards
The process that elevates posts and ideas is hardly reassuring. Lots of people upvote a post, then maybe it gets curated, and then that’s it. No formal and rigorous checking or feedback, no outside reviewers, nothing. There are a few comments, but nobody is being explicitly incentivised to find good counter-arguments.
The correct action here is to significantly increase our standards.
This will cause many people to not write most of the content they’re writing. Sure, this might be most of the content, but one man’s modus ponens is another’s modus tollens—the current content is just bad. There is an awful lot out there, and we need to refine it, not add to it.
The situation we are in is not one of slightly raising standards that are generally already pretty good, but running crisis-mitigation / triage on the horrendous state of the current internet and LessWrong. If someone writes a post that is not up to a good standard, this needs to be made apparent to them, for two reasons.
Firstly, because it damages the commons; they’re clogging up our collective intellectual space with wrong (often trivially wrong) points. If this is not made apparent in the comments, then it would be better if the post was not written at all. Immediately commenting to point out mistakes is the correct response, the person needs to learn that this is not to be tolerated. That way leads to madness, or worse, Tumblr.
Sure, they may try to reply to you, to argue their point further, you may even end up understanding them better, but it was still their fault to make the post wrong in the first place, not your fault for misunderstanding their writing or being highly critical of their basic errors.
And secondly, because criticising people’s ideas is the only way for them to improve. LessWrong is a place we actually care about being good, where people can come and practice the art of rationality. Practice means getting feedback, and coddling people with low standards will mean they will not be able to find their actually good ideas. And this is, after all, what’s most important—that we figure out true and important ideas.
---
I take the following quotes of yours as implying this interpretation.
One:
Two:
Three:
Four:
I also think this explains my perception (more on this below) that many comments of yours ask the author to put in a lot of effort while you do very little yourself. Responses like this-
-where it feels (to me) like it is on the other person to write well, not on you to expend effort to interpret them. They’re the one damaging the commons & who needs to improve.
---
So, to start with, I agree with my-model-of-you about the Standards Problem. There are incredibly few places in this world I can go to where I expect everyone to keep a high standard of evidence—certainly not any online platforms that I could name, nor most scientific journals. In person I have a few friends that I trust, and sending them google docs works well, but it’s clear that we need something that can coordinate intellectual progress in fields with 10s and 100s of people, not just groups of 3 or 4.
And it’s high in my priorities to get LessWrong to have a process for actually checking ideas, to which I can contribute a high effort post (like my own post on common knowledge), where I can get good feedback that both I and the community trust to actually find the good counter-arguments. This involves both incentivising people to find good counter-arguments, and also incentivising people to write rigorous posts, even if they are not the idea’s original generators. I would love for someone to attempt to submit a technical explanation of the core ideas in Zvi’s Slack and the Sabbath sequence, for example. I think Eliezer managed to do something similar with his post “Moloch’s Toolbox”, adding rigour to Scott Alexander’s initial poetic post, and it’s sad that there’s no trusted process in the world for checking that and making it common knowledge in a larger community like this one.
But we’re not there yet, and (I think) I disagree with you about how to get to there. I think that the correct move at the minute is not for further negative incentive, but for a stronger positive incentive for good writing. I think the dream of “Keeping everything the same but removing all of the bad ideas” is likely a fiction. People need to be able to honestly put forward new and unrigorous ideas without expecting the Spanish Inquisition[1], to be able to find the one or two gems that can be elevated and canonised.
Right now my approach is to encourage people to try, and encourage them more when they get something very right. Respectively, upvotes and curation. In time, we’ll add more steps to the process, and clear places for evaluation and criticism. That’s why we’ve been working on the AI Alignment Forum and EA Forum 2.0 (two other basic platforms to later build upon), as well as thinking a lot about peer review and what additional infrastructure on the site will set up these pipelines for ideas to go through.
Oliver has previously said that the approach you’ve been taking was the approach that led to a number of our top authors feeling unwelcome to post on the old LessWrong:
Your commenting style still has many of the properties that it did then. Let me be specific about that pattern that I’m talking about. In this thread with Benquo, this is what it felt like from my perspective:
The long, substantive point was quite interesting. But the opening three comments really didn’t help Benquo; they felt to me snarky/unnecessarily aggressive, and it seemed to me you were asking Benquo to do a lot of work that you weren’t willing to do (until after you’d written the three comments implying Benquo was obviously getting something wrong). I believe comments like these make many writers feel like LessWrong is a crueller place—like the LessWrong that they previously fled.
So from here on out, I, along with the rest of the mod team, do plan to treat all the comments of yours that put in low interpretive effort on your part—ones that feel like you’re requesting a large amount of effort from someone else, whilst doing no signalling that you intend to reciprocate—as bad for the health of the culture on LessWrong, and strong-downvote them accordingly, with no exceptions.
(This is a minority of your comments; I don’t expect this to significantly stem your ability to comment on the site, as the majority of your comments are much more substantive—there’s at least one in this very thread that I strongly-upvoted.)
I do want to be transparent, Said, that if almost anyone else were writing comments that I felt were this damaging to the culture, I would’ve come down hard on them long ago (with suspensions and eventually a ban). I don’t intend to ban you any time soon, because I really value your place in this community—you’re one of the few people to build useful community infrastructure like ReadTheSequences.com and the UI of GreaterWrong.com, and that’s been one of the most salient facts to me throughout all of my thinking on this matter. But after spending a great deal of time and effort worrying about the effects of your comments on the culture, I don’t intend to put in as much time and effort if this comes up again in the future (be it 2 months or 12), and will just use the moderation tools as seems appropriate to me.
---
[1] I just want to flag this point about what good environments for exploring ideas are like, as I think my model of you strongly disagrees with it (and thus with all the points that follow from it). I’d be happy to discuss it further if so—though I do commit to spending no more than 2 hours thinking about responses on this comment thread, including reading time (and I will time myself).
Why not something like:
Everything is posted to people’s personal blogs, never directly to the front page. While something’s on a personal blog, “brainstorming session” rules apply: no criticism (especially no harsh criticism), just riffing / elaboration / maybe some gentle constructive criticism (and that, perhaps, only if asked).
After this, an author can edit their post, or maybe post a new, better version; or maybe they can “workshop” it elsewhere, and then post an already-better version on LW immediately. In any case, a post that has either undergone this “gentle” discussion, or doesn’t need it, may be transferred to the front page. This may happen in one of three ways:
The author requests a frontpage transfer. It must be approved by a mod.
A mod suggests a frontpage transfer. It must be approved by the author.
Another user (perhaps, only those with some minimum karma value) suggests a frontpage transfer. It must be approved by a mod and also by the author.
Once on the frontpage, the post is exposed to the full scrutiny of the LW commentariat. Personal insults, gratuitous rudeness, and the like are still not tolerated, of course; but otherwise, the author’s feelings aren’t spared. People say what they think about the post. Spirited discussion is had. The author may defend the post, or not; in any case, it’s full “Spanish Inquisition” mode.
Repeat steps 2 and 3 until the post is generally agreed to be solid, not nonsensical, worthwhile, etc. (If this never happens, so be it. Some—indeed, many—ideas ought to be firmly, unsentimentally, explicitly, and publicly rejected.)
A post which survives this scrutiny and emerges as a generally-agreed-to-be-excellent gem, may then be nominated for curation, and—if approved for curation—enters into a corpus of such of the community’s output that we may proudly exhibit as genuine intellectual accomplishment, and refer to in years to come; the building blocks of a rock-solid epistemic edifice.
I believe this would satisfy both your desiderata and mine.
This seems like it’s solving the wrong problem. The problem with your comments isn’t that they are too critical or apply too high an epistemic standard; it is that you have been insulting, sarcastic, and unwilling to make clear, specific claims about what the piece was getting wrong, instead doing things like insinuating that I’m not worth listening to because I haven’t proved that I know about soda bread, and exaggerating my claims and then asking me to prove the exaggerated, false version.
(It seems like I’m strongly disagreeing with Ben Pace here, not just you.)
I would have actually been pretty happy to engage with a comment along the lines of “it seems like you’re making claim X, which contradicts claim Y.” That would have made it easy for me to respond along the lines of “Rather than X, I actually meant to make claim X’ which doesn’t contradict Y.” Likewise with respect to the exaggerations—if you’d made your understanding of my claims explicit, then I have some hope of correcting the misunderstanding. But if I have to guess what your interpretation is, I’m signed up for infinite amounts of interpretive labor. In general it seems like a bad policy to force people to guess what your criticism is.
In my model, this is indeed a large part of the problem. I like the idea behind Said’s proposal, and do think that it would reduce some of the incentives towards aggressiveness, but I still think that even under the proposal, the exchange on this post would have not been a good fit for LessWrong. I.e. this section from Ben Pace’s comment above still stands:
There are two things to say here, I think.
First: ideas are a dime a dozen. Coming up with abstract conceptual constructs, “fake frameworks”, clever explanations, clever schemes, clever systems, interesting mappings, cute analogies, etc., etc., is the kind of thing that the kind of person who posts on Less Wrong (and I include myself in this set) does reflexively, while daydreaming in a boring lecture, while taking a shower, while cooking. It is easy.
And if you’re having trouble brainstorming, if no cool new ideas come to you? Browse the web for a while; among the many billions of unique web pages out there, there is no shortage of ideas. There are more ideas than we can consider in a lifetime.
The problem is in finding the good ideas—which means the true and useful ones; developing those ideas; verifying their truth and their usefulness. And that means you have to incentivize scrutiny, you have to incentivize people to notice problems, to notice inconsistencies, to do reversal tests, to consider the relevance of domain knowledge, to step back from the oh-so-clever abstract conceptual construct and apply common sense, and above all to say something instead of just thinking “hmm… ehhh… meh”, mentally shrugging, and closing the browser tab.
So when you say that I was asking Benquo to do a lot of work that I wasn’t willing to do, I am not quite sure how to respond… I mean… yes? Of course I was? It’s precisely the responsibility of the author, of the proposer of an idea, to do that work! And what do you think is easier, for me or for any other commenter? To post a short, “snarky” comment, or to post nothing at all? If the rule you enforce is “every criticism an effortpost”, then what you incentivize is silence.
It is very easy to create an echo chamber, merely by setting a high bar for any criticisms.
Your view seems to be: “The author has done us a service by not only having an idea, which itself is admirable, but by posting that idea here! He has given us this gift, and we must repay him by not criticizing that idea unless we’ve put in at least as much effort into the criticism as the author put into writing the post.”
As I say above, that is not my view.
Second: Ben (Pace) says (and you quote) that “the opening three comments really didn’t help Benquo”. Well, perhaps. I can’t speak to that. But why focus on this? That is, why focus on whether my comments did or did not help Benquo?
If we were having a private, one-on-one conversation, that sort of scolding observation might be apropos. But Less Wrong is a public forum! Ought I concern myself only with whether my comments on a post help the author of the post? But if that was my only concern, I simply wouldn’t’ve posted. With all due respect to Benquo, I don’t know him personally; I have no particular reason to want to help him (nor, of course, have I any reason to harm him; I have, in fact, no particular reason to concern myself with his affairs one way or the other). If my comments were motivated merely by whether they helped the author of the post or comment to which I was directly responding, then the overwhelming majority of what I’ve ever said on Less Wrong would never have been posted.
The question, I think, is whether my comments helped anyone (and, if so, who, and how, and how many). And I can’t speak to that either.[1] But what I can say for sure is that similar comments, made by other people in analogous situations in the past, have helped me, many times; and I have observed that similar comments (mine and others’) have done great good, quite a few times in the past.
How might such “low-effort”[2] comments help? In several ways:
By pointing out something that others had not noticed (or similarly, by implying a perspective on the matter other than that from which people were viewing it before).
Similarly to #1, by reminding others of some relevant concern or concept of which they were aware but had forgotten, or had not thought to consider in this context, etc.
By creating common knowledge of some flaw or concern or similar, which many people were thinking of, but which none of them could be sure that anyone else also thought.
By alluding to some shared or collective knowledge or understanding, thereby making an extended point concisely.
By “breaking the spell” of a perceived tacit agreement not to point out something, not to criticize something, not to bring up a certain topic, etc.
Less Wrong, again, is a public forum. The point is for us to collectively seek truth and build useful things. When I comment, I consider whether my comment helps the collective with those goals. Whether it specifically helps the author of whatever I’m responding to, seems to me to be of secondary importance; and what’s more, taking that goal to instead be my primary goal when commenting, would drastically reduce the general usefulness of my comments (and in practice, of course, it would not even do that, but would instead drastically reduce their frequency).
[1] Well, some people told me that they liked my comments. But maybe they were just saying that out of politeness, or because they wanted to ingratiate themselves with me, or for god knows what other reason(s).
[2] But be careful of dismissing merely concise comments as “low-effort”. Recall the old joke about the repairman who sent a client an itemized bill for hitting an expensive device once with a hammer, and thereby making it work again: “Hitting it: $1. Knowing where to hit it: $10,000.” Similarly, making a one-sentence comment is easy. Making a comment that accomplishes a great deal with one sentence is a lot more valuable.
While ideas must compete for attention, so too must criticisms. I’ve been led to believe that, somewhere in this thread, there is a good criticism of the top-level post. I spent some time looking for it, and what I found was a whole lot of miscommunication, criticism of things that don’t quite match what was written, and general muddle. You aren’t just asking Benquo to do a lot of work to avoid those miscommunications; you’re also asking the people who read your comments to do a lot of work to determine whether your comment is based on a miscommunication or not.
Setting too high a bar for criticism creates an echo chamber; but setting too low a bar does too, by obscuring the real arguments in a place where people can’t find them without a whole lot of time.
I am not aware of any miscommunication that took place in my direction. Certainly, there has been misunderstanding of what I said. There has also been a lot of explaining, in detail and at length, on my part. But not so much vice-versa. Could you point out what idea of the OP you think I have misunderstood, and what attempts were made by Benquo to clarify it?
I have linked this post to a number of people, off Less Wrong. None of them had any trouble locating and understanding my criticisms; and I did repeat them several times, in several ways. To be honest, your comment perplexes me.
As Eliezer is wont to say, things are often bad because the way in which they are bad is a Nash equilibrium. If I attempt to apply it here, it suggests we need both a great generative and a great evaluative process before the standards problem is solved, at the same time as the actually-having-a-community-who-likes-to-contribute-thoughtful-and-effortful-essays-about-important-topics problem is solved, and only having one solved does not solve the problem.
Oli, Ray and I will build a better evaluative process for this online community, one that incentivises powerful criticism. But right now this site is trying to build a place where we can be generative (and evaluative) together in a way that’s fun and not aggressive. While we have an incentive toward better ideas (weighted karma and curation), it is far from a finished system. We have to build this part as well as the evaluative part before the whole system works, and until we get there you’re correct to be worried and want to enforce the standards yourself with low-effort comments (and I don’t mean to imply the comments don’t often contain, implicit within them, very good ideas).
But unfortunately, given that your low-effort criticism feels so aggressive (according to me, the mods, and most writers I talk to in the rationality community), this is just going to destroy the first stage before we get to the second. If you write further comments in the pattern I have pointed to above, I will not continue to spend hours trying to pass your ITT and responding; I will just give you warnings and suspensions.
I may write another comment in this thread if there is something simple to clarify or something, but otherwise this is my last comment in this thread.
Without commenting on most of the rest of what you’ve said, I do want to note briefly that—
—if you are referring to this comment of yours, then I daresay the hours spent did not end up being productive (insofar as the stated goal does not seem to have been reached). I appreciate, I suppose, the motivation behind the effort; but am dubious about the value of such things in general (especially extrapolating from this example).
That aside—I wish you luck, as always, with your efforts, and intend to continue doing what I can to help them succeed.
This is the first point at which I, at least, saw any indication that you thought Ben’s attempt to pass your ITT was anything less than completely accurate. If you thought his summary of your position wasn’t accurate, why didn’t you say so earlier? Your response to the comment of his that you linked gave no indication of that, and thus seemed to give the impression that you thought it was an accurate summary (if there are places where you stated that you thought the summary wasn’t accurate and I simply missed it, feel free to point this out). My understanding is that often, when person A writes up a summary of what they believe to be person B’s position, the purpose is to ensure that the two are on the same page (not in the sense of agreeing, but in the sense that A understands what B is claiming). Thus, I think person A often hopes that person B will either confirm that “yes, that’s a pretty accurate summary of my position,” or “well, parts of that are correct, but it differs from my actual position in ways 1, 2, and 3,” or “no, you’ve completely misunderstood what I’m trying to say. Actually, I was trying to say [summary of person B’s position].”
To be perfectly clear, an underlying premise of this is that communication is hard, and thus that two people can be talking past each other even if both are putting in what feels like a normal amount of effort to write clearly and to understand what the other is saying. This implies that if a disagreement persists, one of the first things to try is to slow down for a moment and get clear on what each person is actually saying, which requires putting in more than what feels like a normal amount of effort, because what feels like a normal amount of effort is often not enough to actually facilitate understanding. I’m getting a vibe that you disagree with this line of thought. Is that correct? If so, where exactly do you disagree?
Out of politeness, and courtesy to Ben, I had hoped to avoid a head-on discussion of this topic. However, you make good points; and, in any case, given that you’ve called attention to this point, certainly it would be imprudent not to respond. So here goes, and I hope that Ben does not take this personally; the sentiment expressed in the grandparent still stands.
The truth is, Ben’s comment is an excellent example of why I am skeptical of “interpretive labor”, as well as related concepts like “principle of charity” (which was an unimpeachable idea, but was quickly corrupted, in the rationalist memesphere). When I read Ben’s comment, what I see is the following:
Perfectly clear, straightforward language (quoted from my comments) that unambiguously and effectively conveys my points, “paraphrased” in such a way that the paraphrasing is worse in almost every way than the original: more confused, less accurate, less precise, less specific.
My viewpoints (which, as mentioned, had been expressed quite clearly, and needed no rephrasing at all) distorted into caricatures of themselves.
A strange mix of more-or-less passable (if degraded) portrayals of my points, plus some caricatures / strawmen / rounding-to-the-nearest-cliche, plus some irrelevant additions, that manages to turn the entire summary of my views into a mishmash, of highly dubious value.
Ben indicates that he spent hours reading my commentary, trying to understand my views, and writing the comment in question (and I have no reason to doubt this). But if one may spend hours on such a thing, and end up with such disappointing results, what’s the point?
What’s more, I see no indication in Ben’s post that he had the same estimate of the results of his efforts as I did. If the claim is “doing interpretive labor lets you understand your interlocutor, where a straightforward reading may lead you astray”, but the reality is “doing interpretive labor leaves you with the entirely erroneous impression that you’ve understood your interlocutor when in fact you haven’t, thus wasting your time not just for no benefit, but with a negative effect”, then, again—why do it?
One may hope for something like this, certainly. But in practice, I find that conversations like this can easily result from that sort of attitude:
Alice: It’s raining outside.
Bob, after thinking really hard: Hmm. What I hear you saying is that there’s some sort of precipitation, possibly coming from the sky but you don’t say that specifically.
Alice: … what? No, it’s… it’s just raining. Regular rain. Like, I literally mean exactly what I said. Right now, it is raining outside.
Bob, frowning: Alice, I really wish you’d express yourself more clearly, but if I’m understanding you correctly, you’re implying that the current weather in this location is uncomfortable to walk around in? And—I’m guessing, now, since you’re not clear on this point, but—also that it’s cloudy, and not sunny?
Alice: …
Bob: …
Alice: Dude. Just… it’s raining. This isn’t hard.
Bob, frowning some more and looking thoughtful: Hmm…
And so on.
So, yes, communication is hard. But it’s not clear at all that this sort of solution really solves anything.
And at the same time, sometimes there are just actual disagreements. I think maybe some folks in this conversation forget that, or don’t like to think about it, or… heck, I don’t know. I’m speculating here. But there’s a remarkable lack of acknowledgment, here, of the fact that sometimes someone is just wrong, and people are disagreeing with that person because he’s wrong, and they’re right.
Note: I will not be engaging in much depth here, but wanted to flag one particularly important point:
No. If Ben did not successfully interpret your language, your language wasn’t clear or unambiguous. The point of the ITT is to verify that any successful communication has taken place at all. If it hasn’t, everything that happens after that is just time wasting.
Yes, this, precisely this.
I’m afraid I can’t agree with this, at all. But to get into the reasons why, I’d have to speak increasingly discourteously; I do not expect this to be a productive endeavor. Feel free to contact me privately if you are interested in my further views on this, but otherwise, I will also disengage.
This is exactly the problem that the ITT is trying to solve. Ben’s interpretation of what you said is Ben’s interpretation of what you said, whether he posts it or merely thinks it. If he merely thinks it, and then responds to you based on it, then he’ll be responding to a misunderstanding of what you actually said and the conversation won’t be productive. You’ll think he understood you, he’ll perhaps think he understood you, but he won’t have understood you, and the conversation will not go well because of it.
But if he writes it out, then you can see that he didn’t understand you, and help him understand what you actually meant before he tries to criticize something you didn’t even actually say. But this kind of thing only works if both people cooperate a little bit. (Okay, that’s a bit strong, I do think that the kind of thing Ben did has some benefit even though you didn’t respond to it. But a lot of the benefit comes from the back and forth.)
Again, this is merely evidence that communication is harder than it seems. Ben not writing down his interpretation of you doesn’t magically make him understand you better. All it does is hide the fact that he didn’t understand you, and when that fact is hidden it can cause problems that seem to come from nowhere.
That’s not the claim at all. The claim is that the reading that seems straightforward to you may not be the reading that seems straightforward to Ben. So if Ben relies on what seems to him a “straightforward reading,” he may be relying on a wrong reading of what you said, because you wanted to communicate something different.
I mean, yes, maybe Ben thought that after writing all that he understood what you were saying. But if he misunderstood you have the power to correct that. And him putting forward the interpretation he thinks is correct gives you a jumping-off point for helping him to understand what you meant. Without that jumping-off point you would be shooting in the dark, throwing out different ways of rephrasing what you said until one stuck, or worse (as I’ve said several times now) you wouldn’t realize he had misunderstood you at all.
Yes, but you can’t hash out the substantive disagreements until you’ve sorted out any misunderstandings first. That would be like arguing about the population size of Athens when one of you thinks you’re talking about Athens, Greece and the other thinks you’re talking about Athens, Ohio.
This, I think, is where we differ (well, this, and the relative value of spending time on “interpretive labor” vs. going ahead with the [what seems to you to be the] straightforward interpretation). I think that time spent thus is generally wasted (and sometimes, or often, even counterproductive), and I think that correcting misunderstandings that persist after such “interpretive labor” is not feasible in practice (at least, not by the direct route)—not to mention that attempting to do this anyway, detracts from the usefulness of the discussion.
By the way, I’m curious why you say that the principle of charity “was an unimpeachable idea, but was quickly corrupted, in the rationalist memesphere.” What do you think was the original, good form of the idea, what is the difference between that and the version the rationalist memesphere has adopted, and what is so bad about the rationalist version?
The original, good form of the principle of charity… well, actually, one or another principle under this name is decades old, or perhaps millennia; but in our circles, we can trace it back to Scott’s first post on Slate Star Codex, which I will quote almost in full:
(Bolding mine, italics in original.)
A fair and reasonable principle, I think. We might also extend it—as, indeed, it has often been extended—to the injunction that opponents, and their arguments, ought not be dismissed merely because they appear to be evil. (For example, if it seems like I am suggesting that kittens must be tortured at every opportunity—well, who knows, perhaps I am?—but it is uncharitable to assume this, and to dismiss and denounce me for it, unless I’ve said this explicitly, or you’ve made a reasonable attempt to elicit a clarification, and I’ve confirmed that I am saying just that.)
So that is the unimpeachable idea. And what is the corruption? There are several, actually. Here’s one:
(Source.)
Here, the suggestion is that being “charitable” requires that I mentally replace one technical term with another, totally different, technical term, turning a statement that is perfectly coherent—not absurd, not insane—but wrong, into a different statement that is correct. Evidently I am expected to do this with every one of my interlocutor’s statements. So, then what? Do I just assume that whenever anyone says anything to me that I think is wrong, what they actually mean is something correct? Is it just impossible for people to be wrong? Can I never be surprised by people’s claims? Is “huh, so what you’re saying is X? really?” totally out of the question? (Never mind the question of how I’m supposed to know what to “correct” my interlocutor’s comments to—it isn’t like there’s always, or even often, just one possible “correct” interpretation!)
And then the other corruption is the other side of the same coin. It’s what happens when people do apply this form of the “principle of charity”, and end up having conversations like some I’ve had recently, where I’ve been on the receiving end of this “charity”: I say something fairly straightforward, and my interlocutor, applying the principle of charity, and believing the literal or straightforward interpretation of my words to be evil (or something), mentally transforms my comments into something different (and, presumably, non-evil), and responds to that. Communication has not taken place; my words have not been heard.
There are other corruptions, too, more subtle ones (examples of which I’d have to take some time to hunt for), but these are more than bad enough!
Thanks for this. Sorry it’s taken me so long to reply here, didn’t mean to let this conversation hang for so long. I completely agree with about 99% of what you wrote here. The 1% I’ll hopefully address in the post I’m working on on this topic.
This substantially raised my estimate of how much harm Said’s been causing, from “annoying but mostly harmless” to “actively attacking good discourse for being good”. I’ve switched my moderation policy to reign of terror, because on future posts I intend to delete comments by Said that are as annoying as the initial exchange here. Not sure if that extends to other commenters; probably it should, but I haven’t had other problems this bad.
nods Thank you, Said.
This was now a week ago. The mod team discussed this a bit more, and I think it’s the correct call to give Said an official warning (link) for causing a significant number of negative experiences for other authors and commenters.
Said, this moderation call is different than most others, because I think there is a place for the kind of communication culture that you’ve advocated for, but LessWrong specifically is not that place, and it’s important to be clear about what kind of culture we are aiming for. I don’t think ill of you or that you are a bad person. Quite the opposite; as I’ve said above, I deeply appreciate a lot of the things you’ve built and advice you’ve given, and this is why I’ve tried to put in a lot of effort and care with my moderation comments and decisions here. I’m afraid I also think LessWrong will overall achieve its aims better if you stop commenting in (some of) the ways you have so far.
Said, if you receive a second official warning, it will come with a 1-month suspension. This will happen if another writer has an extensive interaction with you primarily based around you asking them to do a lot of interpretive labour and not providing the same in return, as I described in my main comment in this thread.
I am not at all sure it’s always true that posting nothing at all is easier than posting a short, snarky comment. The temptation to do the latter can be almost overwhelming.
And just as ideas are a dime a dozen, so are criticisms. Your arguments against disincentivizing criticism seem to me to have parallel arguments against disincentivizing posting; and your arguments for harsh criticism of top-level posts seem to me to have parallel arguments for harsh criticism of critical comments. (Of course the two aren’t exactly equivalent, not least because top-level posts are more visible than critical comments. Still, I think all the arguments cut both ways.)
True enough! That temptation falls away, however, if one simply stops reading.
As for the rest—in principle, you’re entirely correct. In practice, I do not think what you say is true. For one thing, as I mentioned, even in the extreme case where literally no one posts anything at all, there nonetheless remain plenty of ideas to examine. But even that aside, the problem is this: once you sweep aside those ideas which are just trolling, or explicitly known to be false, or have the Time Cube nature, you’re still left with a massive pile of what might be good but what could easily be (and likely is) total nonsense (as well as other possibilities like “good but ultimately not useful”, “subtly wrong”, etc.).
On the other hand, once you sweep aside those criticisms which are nothing but rudeness or abuse, or obvious trolling, etc., what you’re left with is… not much, actually. There really is a shortage of good criticism. How many of the posts on Less Wrong, within—say—the past six months, have received almost no really useful scrutiny? It’s not none!
Finally, as for this—
As with so many things: one person’s modus tollens is another’s modus ponens.
I think there’s a problem here where “broad attention” and “harsh attention” are different tools that suggest different thresholds. I might think, for example, that a post announcing open registration for EA Global should be shown not just to everyone visiting the EA Forum, but also everyone subscribed to the EA Forum RSS, without thinking that it is a genuine intellectual accomplishment that will be referred to for years to come. I might also post an idea that I think should be held to high standards but don’t expect to be of wide interest, like my thoughts on how map design influences strategy games and what designs are best suited for a particular game.
My position is that subreddit-like things are the correct way to separate out rules (because a subreddit is a natural unit of moderation, it implies rulesets are mutually exclusive, and it makes visual presentation easy), and tag-like things are the correct way to separate out topics (because topics aren’t mutually exclusive and don’t obviously imply different rules). A version of lesswrong that has two subreddits, with names like ‘soft’ and ‘sharp’, seems like it would both offer a region for exploratory efforts and a region for solid accumulation, with users by default looking at both grouped together (but colored differently, perhaps).
One of the reasons why that vision seemed low priority (we might be getting to tags in the next few months, for example) was that, to the best of my knowledge, no poster was clamoring for the sharp subreddit. Most of what I would post to main in previous days would go there, and some of the posts I’m working on now are targeted at essentially that, but it’s much easier to post sharp posts in soft than it is to post soft posts in sharp.
Another reason why that vision seemed low priority was the belief that much of progress happens by transmission of ‘half-baked’ ideas, since the various pieces necessary to get the fully baked idea may reside in different people, or because one half-formed idea kicks off a train of thought in someone else that leads somewhere good. The reason to expose lots of people to a Nietzschean maxim is not because you think it is true and that they should just adopt it, but because you expect them to get something useful out of reacting to it. Or, to take Paul Graham’s post on essays, it devalues attempts to raise questions (even if you don’t have an airtight answer yet) compared to arguments for positions.
Under this model, requiring that ideas survive harsh scrutiny before spreading them widely kills the ability to make this sort of collaborative progress (among people who haven’t already sorted into private groups), and perhaps more importantly gives a misleading idea of how progress is generated. If one is trying to learn to do math like a professional mathematician, it is much more helpful to watch their day-to-day activities and chatter with colleagues than it is to read their published papers, because their published papers sweep much of the real work under the rug. Often one generates a hideous proof and then searches more and finds a prettier proof, but without the hideous proof one might have given up. And one doesn’t just absorb until one is fully capable of producing professional math; one interleaves observation with attempts to do the labor oneself, discovering which bits of it are hard and getting feedback on one’s products.
This seems like an excellent argument for dynamic RSS feeds (which I am almost certain is a point I’ve made to Oliver Habryka in a past conversation). Such a feature, plus a robust tagging system, would solve all problems of the sort you describe here.
It’s not clear why a post like this should be on Less Wrong at all, but if it must be, then there seems to be nothing stopping you from prefacing it with “please apply frontpage-level scrutiny to this one, but I don’t actually want this promoted to the frontpage”.
I think that a good tagging system should, indeed, be a high priority in features to add to Less Wrong.
Well, I was not clamoring for it because I was under the impression that the entire front page of Less Wrong was, as you say, the “sharp subreddit”. That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.
I should like to see this belief defended. I am skeptical. But in any case, that’s what the personal blogs are for, no?
Your meaning here is obscure to me, I’m afraid…
I consider that to be one of Graham’s weakest pieces of writing. At best, it’s useless rambling. At worst, it’s tantamount to “In Defense of Insight Porn”.
But this is precisely why I think it’s tremendously valuable that this harsh scrutiny take place in public. A post is promoted to the front page, and there, it’s scrutinized, and its ideas are discussed, etc.
The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training. They’re not just “anyone with an internet connection”. A professional mathematician’s half-baked idea on a mathematical topic is simply not comparable with a random internet person’s (or even a random “rationalist”’s) half-baked idea on an arbitrary topic.
How do you expect to solve this problem? The primary thing I’ve heard from you is defense of your style of commenting and its role in the epistemic environment, and regardless of whether or not I agree with it, the problem that I’m trying to solve is getting more good content on LW, because that’s how LW seems useful for solving problems related to advancing human rationality and avoiding human extinction. When we ask people who made top tier posts before why they don’t make them now, or why they put them elsewhere, the answer is resoundingly not “we were put off by mediocre content on LW” but “we were put off by commenters who were mean and made writing for LW unpleasant.”
Keep in mind that the problem here is not “how do we make LW a minimally acceptable place to post things?” but “how do we make posting for LW a better strategy than other competitors?”. I could put effort into editing my post on a Bayesian view of critical rationalism that’s been sitting in my Google Docs drafts for months to finally publish it on LW, or I could be satisfied that it was seen by the primary person I wrote it for, and just let it rot. I could spend some more hours reading a textbook to review for LessWrong, or I could host a dinner party in Berkeley and talk to other rationalists in person.
I notice some confusion here; Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training. Rationality, of course, is much more in its infancy than mathematics is, and so we should expect professional mathematicians to be better at mathematics than rationalists are at rationality. It’s also the case that people in mathematics grad school often make bad mathematical arguments that their peers and instructors should attempt to correct, but when they do so it’s typically with a level of professional courtesy that, while blunt, is rarely insulting.
So it seems like the position you’re taking here is either something like “no rationalist has enough reputation that they deserve something like professional courtesy”, “some rationalists do, but it’s perhaps a dozen of them instead of hundreds,” or “concise sarcasm is what professional courtesy looks like,” or something harder for me to construct.
It seems to me that LW sometimes has problems with mediocre commenters who are more prolific than they are insightful, who need to somehow be dissuaded from clogging up the site. But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?
In this very interesting discussion I mostly agree with you and Ben, but one thing in the comment above seems to me importantly wrong in a way that’s relevant:
I bet that’s true. But you also need to consider people who never posted to LW at all but, if they had, would have made top-tier posts. Mediocre content is (I think) more likely to account for them than for people who were top-tier posters but then went away.
(Please don’t take me to be saying ”… and therefore we should be rude to people whose postings we think are mediocre, so that they go away and stop putting off the really good people”. I am not at all convinced that that is a good idea.)
I agree that meh content can be harmful in that way. I don’t think that Said is successfully selectively discouraging meh content.
I mostly agree, but one part seems a bit off and I feel like I should be on the record about it:
It’s evidence that I’m a top example of the particular sort of rationality culture that LW is clustered around, and I think that’s enough to make the argument you’re trying to make, but being good at getting upvotes for writing about rationality is different in some important ways from being rational, in ways not captured by the analogy to math grad school.
I agree the analogy is not perfect, but I do think it’s better than you’re suggesting; in particular, it seems to me like going to math grad school as opposed to doing other things that require high mathematical ability (like quantitative finance, or going to physics grad school, or various styles of programming) is related to “writing about rationality rather than doing other things with rationality.” Like, many of the most rational people I know don’t ever post on LW because that doesn’t connect to their goals; similarly, many of the most mathematically talented people I know didn’t go to math grad school, because they ran the numbers on doing it and they didn’t add up.
But to restate the core point, I was trying to get at the question of “who do you think is worthy of not being sarcastic towards?”, because if the answer is something like “yeah, using sarcasm on the core LW userbase seems proper” this seems highly related to the question of “is this person making LW better or worse?”.
I’d just like to comment that in my opinion, if we only had one post a month on LW, but it was guaranteed to be good and insightful and useful and relevant to the practice of rationality and not wrong in any way, that would be awesome.
The world is full of content. Attention is what is scarce.
By attracting better people, and expecting better of those who are here already. Some will not rise to that expectation. That is to be expected. We will not see further posts from them. That is to be welcomed.
But this can only be a misguided goal. What is “good content”? Why do you want it? That is far too generic a desideratum! If you just want “good content”, and you don’t really care what kind of “good content”, you’ll inevitably suffer value / focus drift; and if you always want more “good content” without specific goals concerning how much and what kind and what is it for, then you’ll… well, you’ll have the sort of problem you’re having now, to be honest.
I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.
This is a shocking statement. I had to reread this sentence several times before I could believe that I’d read it right.
… just what, exactly, do you mean by “rationality”, that could make this claim true?!
Both the first and the second are plausible (“reputation” is not really the right concept here, but I’ll let it stand for now). The third is also near enough to truth.
Let’s skip all the borderline examples and go straight to the top. Among “rationalists”, who has the highest reputation? Who is Top Rationalist? Obviously, it’s Eliezer. (Well, some people disagree. Fine. I think it’s Eliezer; I think you’re likely to agree; in any case he makes the top five easily, yes?)
I have great respect for Eliezer. I admire his work. I have said many times that the Sequences are tremendously important, well-written, etc. What’s more, though I’ve only met Eliezer a couple of times, it’s always seemed to me that he’s a decent guy, and I have absolutely nothing against him as a person.
But I’ve also read some of the stuff that Eliezer has posted on Facebook, over the course of the last half-decade or more. Some of it has been well-written and insightful. Some of it has been sheer absurdity, and if he had posted it on Less Wrong, you can bet that I would not spare those posts from the same unsentimental and blunt scrutiny. To do any less would be intellectual dishonesty.
Even the cleverest and best of us can produce nonsense. If no one scrutinizes our output, or if we’re surrounded only by “critics” who avoid anything substantive or harsh, the nonsense will soon dominate. This is worse than not having a Less Wrong at all.
But my suggestion answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?
I claim that we tried this, from about 2014 to 2016, and that the results were underwhelming. How will you attract better people, and from where? [This is a serious question, instead of just exasperation; we do actually have a budget that we could devote to attracting better people if there were promising approaches.]
As Benquo suggests, there are additional specifics that are necessary, that are tedious to spell out but I assumed easy to infer.
Your explanation doesn’t suggest why authors would want to do step #2, or where we would get a class of dedicated curators who would rewrite their posts for them when they don’t do it themselves. [Noting also that it would be helpful if those curators were not just better at composition than the original authors, but also better at conceptual understanding, such that they could distill things effectively instead of merely summarizing and arranging the thoughts of others.]
Perhaps another angle on the problem: there is a benefit to having one conversational locus. Putting something on the frontpage of LessWrong makes it more likely that people who check LessWrong have read it, and moves us closer to the ‘having one conversational locus’ world. It seems to me like you’re making a claim of the form “the only things worth having in that primary conversational locus are the sorts of things where the author is fine handling my sarcastic criticisms”, and I disagree with that, because of the aforementioned models of how progress works.
Uh, how’s that? Anyway, even if we grant that you tried this, well… no offense meant, but maybe you tried it the wrong way? “We tried doing something like this, once, and it didn’t work out, therefore it’s impossible or at least not worth trying” is hardly what you’d call “solid logic”.
This is, indeed, a serious question, and one well worth considering in detail and at length, not just as a tangent to a tangent, deep in one subthread of an unrelated comments section.
But here’s one answer, given with the understanding that this is a brief sketch, and not the whole answer:
Prestige and value attract contributors. Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated, where may be found not a graphomanic torrent of “content” but a scant few gems of true insight and well-tested intellectual innovations, and then “my essay on <topic> was posted on Less Wrong, and even they found no fault with it” becomes a point of pride, and “curated on Less Wrong” becomes a mark of distinction. When you can say to someone, “I think your writing on <topic> is good enough for Less Wrong” and have that be a credible and unusual compliment, you will easily be able to find contributors. When you’ve created a culture where you can post on Less Wrong and there, get the best, most insightful, most no-nonsense, cuts-to-the-heart-of-the-matter criticism, people who are truly interested in perfecting their ideas will want to post here, and to submit to scrutiny.
Not so easy, I regret to say…
See above for why authors would want to do this. As for “a class of dedicated curators who would rewrite their posts”, I never suggested anything remotely like this, and would never suggest it.
Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well. This is definitely a “there is a technical solution which cuts right through the Gordian knot of social problems” case.
Where would you point to as a previous example of success in this regard? I don’t think the golden age of Less Wrong counts, as it seems to me the primary reason LessWrong was ever known as a place with high standards is because Eliezer’s writing and thinking were exceptional enough to draw together a group of people who found it interesting, and that group was a pretty high-caliber group. But it’s not like they came here because of the insightful comments; they came here for the posts, and read the comments because they happened to be insightful (and interested in a particular mode of communication over point-seeking status games). When the same commenters were around, but the good post-writers disappeared or slowed down, the site slowly withered as the good commenters stopped checking because there weren’t any good posts.
There have been a few examples of people coming to LessWrong with an idea to sell, essentially, which I think is the primary group that you would attract by having a reputation as a forum that only good ideas survive. I don’t recall many of them becoming solid contributors, but note that this is possibly a memory selection effect; when I think of “someone attracted to LW because of the prestige of us agreeing with them” I think of many people whose one-track focuses were not impressive, when perhaps someone I respect originally came to LW for those reasons and then had other interests as well.
With regards to the “solid logic” comment, do give us some credit for having thought through this issue and collected what data we can. From my point of view, having tried to sample the community’s impressions, the only people who have said the equivalent of “ah, criticism will make the site better, even if it’s annoying” are people who are the obvious suspects when post writers say the equivalent of “yeah, I stopped posting on Less Wrong because the comments were annoyingly nitpicky rather than focusing on the core of the point.”
I do want to be clear that ‘high-standards’ and ‘annoying’ are different dimensions, here, and we seem to be in a frustrating equilibrium where you see some features of your comments that make them annoying as actually good and thus perhaps something to optimize for (?!?), as opposed to a regrettable problem that is not worth the cost to fix given budgetary constraints. Perhaps an example of this is your comment in a parallel thread, where you suggest that pedantically interpreting the word “impossible” makes conversations smoother than doing interpretive labor to repair small errors in a transparent way. As I use the word “smooth”, things point in the opposite direction. [And this seems connected to a distinction between double crux and Stalnaker-style conversations, which is a post on my todo list that also hasn’t been written yet.]
Dynamic RSS feeds are the opposite of a solution to this problem; the mechanism that constructs a single conversational locus is broadcast, where everyone is watching the same 9 o’clock news, as opposed to decentralized communication, where different people are reading different blogs and can’t refer to particular bits of analysis and assume that others have come across it before. Contrast the experience of someone trying to discuss the previous night’s Monday Night Football game with another football fan and two gamers trying to discuss their previous night’s video gaming with each other; even if they happened to play the same game, they almost certainly weren’t in the same match.
The thing that tagging helps you do is say “this post is more interesting to people who care about life extension research than people who don’t”, but that means you don’t show it to people who don’t care about life extension, and so when someone chats with someone else about Sarah Constantin’s analysis of a particular line of research, the other person is more likely to say “huh?” than if they sometimes get writings about a topic that doesn’t natively interest them through a curated feed.
We might not be talking about the same thing (in technical/implementation terms), as what you say does not apply to what I had in mind. (It’s awkward to hash this out via comments like this; I’d be happy to discuss this in detail in a real-time chat medium like IRC.)
“Pedantically” is a caricature, I think; I would say “straightforwardly”—but then, we have a live example of what we’re referring to, so terminology is not crucial. That aside, I stand by this point, and reaffirm it.
I am deeply skeptical of “interpretive labor”, at least as you seem to use the term.[1] Most examples that I can recall having seen of it, around here, seem to me to have affected the conversation negatively. (For instance, your example elsethread is exactly what I’d prefer not to see from my interlocutors.)
In particular, this—
—doesn’t actually happen, as far as I can tell. What happens instead is that errors are compounded and complicated, while simultaneously being swept under the rug. It seems to me that this sort of “interpretive labor” does much to confuse and muddle discussions on Less Wrong, while effecting the appearance of “smooth” and productive communication.
I don’t know… I think it’s at least possible that we’re using the word in basically the same way, but disagree on what effects various behaviors have. But perhaps this point is worth discussing on its own (if, perhaps, not in this thread): what is this “smoothness” property of discussions, and why is it desirable? (Or is it?)
This sounds like a post I’d enjoy reading!
[1] Where is this term even from, by the way…?
https://acesounderglass.com/2015/06/09/interpretive-labor/
This seems like a proposal to make LW contentless, with lots of vacuously true statements.
They should ban you for how you’re interacting right now. I don’t know why they’re taking shit with your dodging the issue, but you either don’t have the ability to figure out when someone is correctly calling you out, or aren’t playing nice. Your brand of bullshit is a major reason I’ve avoided less wrong, and I want it gone. I want people to critique my ideas ruthlessly and not critique me as a person with Deservingness at all. if you think being an asshole is normal, go away. you don’t have to hold back on what you think the problems are, but I sure as hell expect you to say what you think the problems are without implying I said them wrong.
Lahwran, I downvoted your comment because I think it should be costly to write something that lowers the tone like this, but I appreciate you saying that this is the reason you left LW, and you might be right that I’m being too civil relative to the effects Said is directly having.
I’ve put in a bunch of effort to trade models of good discourse, but this conversation is heading towards its close. As I’ve said, if Said writes these sorts of comments in future, I’ll be hitting fairly hard with mod tools, regardless of his intentions. Notice that this brand of bullsh*t is otherwise largely gone from LW since the re-launch in March—Said has been an especially competent and productive individual who has this style of online interaction, so I’ve not wanted to dissuade him as strongly as the rest who’ve left, but my patience has since worn thin on this front, and I won’t be putting up with it in future.
It seems like, having interpreted Vaniver as making an obvious error, you decided to argue at length against it instead of considering that he might have meant something else. This is tedious and is punishing Vaniver for not tediously overspecifying everything.
This attitude makes very little sense.
Suppose that one Alice writes something which I, on the straightforward reading, consider to be definitely and clearly wrong. I read it and imagine two possibilities:
(A) Alice meant exactly what it seems like she wrote.
Presumably, then, Alice disagrees with my judgment of what she wrote as being definitely and clearly wrong. Well, there is nothing unusual in this; I have often encountered cases where people hold views which I consider to be definitely and clearly wrong, and vice-versa. (Surely you can say the same?)
In this case, what else is there to do but to respond to what Alice wrote?
(B) Alice meant something other than what it seems like she wrote.
What might that be? Who knows. I could try to guess what Alice meant. However, that is impossible. So I won’t try. If Alice didn’t mean the thing that it seems, on a straightforward reading, like she meant, then what she actually meant could be anything at all.
But suppose I go ahead and try anyway, and come up with some possible thing that Alice could’ve meant. Do I have any reason to conclude that this is the only possibility for what Alice could’ve meant? I do not. I might be able to think longer, and come up with other possibilities. None of them would offer me any reason to assume that that one is what Alice meant.
And suppose I do pick out (via some mysterious and, no doubt, dubious method) some particular alternate meaning for Alice’s words. Well, and is that correct, then, or wrong? If it’s wrong, then I will argue the point, presumably. But then I will be in the strange position of saying something like this:
“Alice, you wrote X. However, X is obviously wrong. So you couldn’t have meant that. You instead meant Y, probably. But that’s still wrong, and here’s why.”
Have I any reason at all to expect that Alice won’t come back with “Actually, no, I did mean X; why do you say it’s obviously wrong?!”, or “Actually, no, I meant Z!”? None at all. And I’ll have wasted my time, and for what?
This sort of thing is almost always a pointless and terrible way of carrying on a discussion, which is why I don’t and won’t do it.
Consider response A:
“I often successfully guess what people meant; it being impossible comes as a surprise to me. Are you claiming this has never happened to you?”
And response B:
Ah, Said likely meant that it is impossible to reliably infer Alice’s meaning, rather than occasionally doing so. But is a strategy where one never infers truly superior to a strategy where one infers, and demonstrates that they’re doing so such that a flat contradiction can be easily corrected?
[Incidentally, I believe this is the disjunction Benquo is pointing at; you seem to imply that either you interpret Alice literally, or you misinterpret Alice, which excludes the case where you correctly interpret Alice.]
[EDIT: I made a mistake in this comment, where response B was originally [what someone would say after doing that substitution], and then I said “wait, it’s not obvious where that came from, I should put the thoughts that would generate that response” and didn’t apply the same mental movement to say “wait, it’s not obvious that response A is a flat response and response B is a thought process that would generate a response, which are different types, I should call that out.”]
Yes, exactly; response A would be the more reasonable one, and more conducive to a smooth continuation of the discussion. So, responding to that one:
“Impossible” in a social context means “basically never happens, and if it does happen then it is probably by accident” (rather than “the laws of physics forbid it!”). Also, it is, of course, possible to guess what someone means by sheer dumb luck—picking an interpretation at random out of some pool of possibilities, no matter how unlikely-seeming, and managing by chance to be right.
But, I can’t remember a time when I’ve read what someone said, rejected the obvious (but obviously wrong) interpretation, tried to guess what they actually meant, and succeeded. When I’ve tried, the actual thing that (as it turned out) they meant was always something which I could never have even imagined as a hypothesis, much less picked out as the likeliest meaning. (And, conversely, when someone else has tried to interpret my comments in symmetric situations, the result has been the same.)
In my experience, this is true: for all practical purposes, either you understand what someone meant, or it’s impossible to guess what they could’ve meant instead.
This is not what I’m implying, because it’s not what I’m saying and what I’m saying has a straightforward meaning that isn’t this. See this comment. “Literally” is a strawman (not an intentional one, of course, I’m assuming); it can seem like Alice means something, without that necessarily being anything like the “literal reading” of her words (which in any case is a red herring); “straightforward” is what I said, remember.
Edit: I don’t know where all this downvoting is coming from; why is the parent at −2? I did not downvote it, in any case…
A couple more things I think your disjunction is missing.
1) If you don’t know what Alice means, instead of guessing, you can ask.
(Alternatively, you can offer a brief guess and give them the opportunity to clarify. This has the benefit of training your ability to infer more about what people mean.) You can do all this without making any arguments or judgments until you actually know what Alice meant.
2) Your phrasing implies that if Alice writes something that “seems to straightforwardly mean something, and Alice meant something else”, that the issue is that Alice failed to write adequately. But it’s also possible for the failure to be on the part of your comprehension rather than Alice’s writing. (This might be because Alice is writing for an audience of people with more context/background than you, or different life experiences than you)
Re: asking: well, sure. But what level of confidence in having understood what someone said should prompt asking them for clarification?
If the answer is “anything less than 100%”, then you just never respond directly to anything anyone writes, without first going through an elaborate dance of “before I respond or comment, let me verify that this is what you meant: [insert re-stating of the entirety of the post or comment you’re responding to]”; then, after they say “yes, that is what I meant”, you respond; then, before they respond to you, they first go “now, let me make sure I understand your response: [insert re-stating of the entirety of your response]” … and so on.
Obviously, this is no way to have a discussion.
But if there is some threshold of confidence in having understood that licenses you to go ahead and respond, without first asking whether your interlocutor meant the thing that it seems like they meant, then… well, you’re going to have situations where it turns out that actually, they meant something else.
Unless, of course, what you’re proposing is a policy of always asking for clarification if you disagree, or think that your interlocutor is mistaken, etc.? But then what you’re doing is imposing a greater cost on dissenting responses than assenting ones. Is this really what you want?
Re: did Alice fail to communicate or did I fail to comprehend: well, the question of “who is responsible for successful communication—author or audience?” is hardly a new one. Certainly any answer other than “it is, to some extent, a collaborative effort” is clearly wrong.
The question is, just how much is “some extent”? It is, of course, quite possible to be so pedantic, so literal-minded, so all-around impenetrable, that even the most heroically patient and singularly clear of authors cannot get through to you. On the other hand, it’s also possible to write sloppily, or to just plain have bad ideas. (If I write something that is wrong, and you express your disagreement, and I say “no, you’ve misunderstood, actually I’m right”, is it fair to say that you’ve failed in your duty as a conscientious reader?)
In any case, the matter seems somewhat academic. As far as I can tell, in the case at hand, I have not misunderstood anything that Benquo said. (Certainly I’ve seen no one posting any corrections to my reading of the OP. Mere claims that I’ve misunderstood, with no elaboration, are hardly convincing!)
This is an isolated demand for rigor. Obviously there’s no precise level of confidence, in percentages, that should prompt asking clarification. As with many things, context matters. Sometimes, what indicates a need to ask for clarification is that a disagreement persists for longer than it seems like it ought to (indicating that there might be something deeper at work, like a misunderstanding). Sometimes, what indicates this is your interlocutor saying something that seems absurd or obviously mistaken. The second seems relevant in the immediate instance, given that what prompted this line of discussion was your taking Vaniver at his word when he said something that seemed, to you, obviously mistaken.
Note that I say “obviously mistaken.” If your interlocutor says something that seems mistaken, that’s one thing, and as you say, it shouldn’t always prompt a request for clarification; sometimes there’s just a simple disagreement in play. But if your interlocutor says something that seems obviously wrong, and at the same time they seem like a generally smart person who isn’t wont to say obviously wrong things, that may indicate that there is something they see that you don’t, in which case it would be useful to ask for clarification.
In this particular case, it seems to me that “good content” could be vacuous, or it could be a stand-in for something like “content that meets some standards which I vaguely have in mind but don’t feel the desire or need to specify at the moment.” It looks like Vaniver, hoping that you would realize that the first usage is so obviously dumb that he wouldn’t be intending it, used it to mean the second usage in order to save some typing time or brain cycles or something (I don’t claim to know what particular standards he has in mind, but clearly standards that would be useful for “solving problems related to advancing human rationality and avoiding human extinction”). You interpreted it as the first anyway, even though it seemed to you quite obviously a bad idea to optimize for “good content” in that vacuous sense. Instead, the fact that it seemed not only wrong, but obviously wrong, should have alerted you to the fact that Vaniver perhaps meant something different, at which point you could have asked for clarification (“what do you have in mind when you say ‘good content’? That seems to me obviously too vacuous to be a good idea. Perhaps you have some more concrete standards in mind and simply decided not to spell them out?”).
“The case at hand” was your misunderstanding of Vaniver, not Benquo.
Hm. After writing this comment I notice I did something of the same thing to you. I interpreted your request for a numerical threshold literally, even though I considered it not only mistaken, but obviously so. Thus I retract my claim (at least in its strong form “any time your interlocutor says something that seems obviously mistaken, ask for clarification”). I continue to think that asking for clarification is often useful, but I think that, as with many things, there are few or no hard-and-fast rules for when to do so; rather, there are messy heuristics. If your interlocutor says something obviously mistaken, that’s sometimes an indication that you should ask for clarification. Sometimes it’s not. I think it probably would have been prudent for you to either ask for clarification from Vaniver, or assume he didn’t mean the vacuous interpretation of “good content.” I think I probably don’t need to ask for clarification about what you meant, it seemed pretty obvious you meant it literally. I realize this seems like a rather self-serving set of judgements. Perhaps it is. I’m not really sure what to do about that right now, or whether and how to revise it.
EDIT: if it turns out you didn’t mean it literally, then obviously I will know how I should revise my judgements (namely I should revise my judgement that I didn’t need to ask you for clarification).
Ikaxas, I would be strong-upvoting your comments here except that I’m guessing engaging further here does more harm than good. I’d like to encourage you to write a separate post instead, perhaps reusing large portions of your comments. It seems like you have a bunch of valuable things to say about how to use the interpretive labor concept properly in discourse.
Thanks for the encouragement. I will try writing one and see how it goes.
Well, the second part of your comment (after the rule) pre-empts much of what I was going to say, so—yes, indeed. Other than that:
Yes, I think this seems like a rather self-serving set of judgments.
As it happens, I didn’t mean my question literally, in the sense that it was a rhetorical question. My point, in fact, was almost precisely what you responded, namely: clearly the threshold is not 100%, and also clearly, it’s going to depend on context… but that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.
Other points:
I have never met such a person, despite being surrounded, in my social environment, by people at least as intelligent as I am, and often more so. In my experience, everyone says obviously wrong things sometimes (and, conversely, I sometimes say things that seem obviously wrong to others). If this never happens to you, then this might be evidence of some troubling properties of your social circles.
That’s still vacuous, though. If that’s what it’s a stand-in for, then I stand by my comments.
Indeed, I could have. But consider these two scenarios:
Scenario 1:
Alice: [makes some statement]
Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?
Scenario 2:
Alice: [makes some statement]
Bob: That’s obviously wrong, because [reasons].
Alice: But of course [straightforward reading] isn’t actually what I meant, as that would indeed be obviously wrong. Instead, I meant [other thing].
You seem to be saying that Scenario 1 is obviously (!!) superior to Scenario 2. But I disagree! I think Scenario 2 is better.
… now, does this claim of mine seem obviously wrong to you? Is it immediately clear why I say this? (If I hadn’t asked this, would you have asked for clarification?) I hope you don’t mind if I defer the rest of my point until after your response to this bit, as I think it’s an interesting test case. (If you don’t want to guess, fair enough; let me know, and I’ll just make the rest of my point.)
I’ve been mulling over where I went wrong here, and I think I’ve got it.
I think this is where I misinterpreted you. I think I thought you were trying to claim that unless there’s some threshold or some clear rule for deciding when to ask for clarification, it’s not worth implementing “ask for clarification if you’re unsure” as a conversational norm at all, which is why I said it was an isolated demand for rigor. But if all you were trying to say was what you said in the quoted bit, that’s not an isolated demand for rigor. I totally agree that there will be false positives, in the sense that misunderstandings can persist for a while without anyone noticing or thinking to ask for clarification, without this being anyone’s fault. However, I also think that if there is a misunderstanding, this will become apparent at some point if the conversation goes on long enough, and whenever that is, it’s worth stopping to have one or both parties do something in the vicinity of trying to pass the other’s ITT, to see where the confusion is.
I think another part of the problem here is that part of what I was trying to argue was that in this case, of your (mis?)understanding of Vaniver, it should have been apparent that you needed to ask for clarification, but I’m much less confident of this now. My arguing that, if a discussion goes on long enough, misunderstandings will reveal themselves, isn’t enough to argue that in this case you should immediately have recognized that you had misunderstood (if in fact you have misunderstood, which if you still object to Vaniver’s point as I reframed it may not be the case.) My model allows that misunderstandings can persist for quite a while unnoticed, so it doesn’t really entail that you ought to have asked for clarification here, in this very instance.
Anyway, as Ben suggested I’m working on a post laying out my views on interpretive labor, ITTs, etc. in more detail, so I’ll say more there. (Relatedly, is there a way to create a top-level post from greaterwrong? I’ve been looking for a while and haven’t been able to find it if there is.)
I agree the model I’ve been laying out here would suggest that the first scenario is better, but I find myself unsure which I think is better all things considered. I certainly don’t think scenario 1 is obviously better, despite the fact that this is probably at least a little inconsistent with my previous comments. My rough guess as to where you’re going with this is something like “scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way).”
If this is where you are going, I have a couple disagreements with it, but I’ll wait until you’ve explained the rest of your point to state them in case I’ve guessed wrong (which I’d guess is fairly likely in this case).
Basically, yes.
The problem, really, is—what? Not misunderstanding per se; that is solvable. The problem is the double illusion of transparency; when I think I’ve understood you (that is, I think that my interpretation of your words, call it X, matches your intent, which I assume is also X), and you think I’ve understood you (that is, you think that my interpretation of your words is Y, which matches what you know to be your intent, i.e. also Y); but actually your intent was Y and my interpretation is X, and neither of us is aware of this composite fact.
How to avoid this? Well, actually this might be one of two questions: first, how to guarantee that you avoid it? second, how to mostly guarantee that you avoid it? (It is easy to see that relaxing the requirement potentially yields gains in efficiency, which is why we are interested in the latter question also.)
Scenario 1—essentially, verifying your interpretation explicitly, every time any new ideas are exchanged—is one way of guaranteeing (to within some epsilon) the avoidance of double illusion of transparency. Unfortunately, it’s extremely inefficient. It gets tedious very quickly; frustration ensues. This approach cannot be maintained. It is not a solution, inasmuch as part of what makes a solution workable is that it must be actually practical to apply it.
By the way—just why is scenario 1 so very, very inefficient? Is it only because of the overhead of verification messages (a la the SYN-ACK of TCP)? That is a big part of the problem, but not the only problem. Consider this extended version:
Scenario 1a:
Alice: [makes some statement]
Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?
Alice: Wait, what? Why would that be obviously wrong?
Bob: Well, because [reasons], of course.
So now we’ve devolved into scenario 2, but having wasted two messages. And gained… what? Nothing.
Scenario 2—essentially, never explicitly verifying anything, responding to your interpretation of your interlocutors’s comments, and trusting that any misinterpretation will be inferred from your response and corrected—is one way of mostly guaranteeing the avoidance of double illusion of transparency. It is not foolproof, of course, but it is very efficient.
Scenarios 1 and 2 aren’t our only options. There is also…
Scenario 3:
Alice: [makes some statement]
Bob: Assuming you meant [straightforward reading], that is obviously wrong, because [reasons].
Note that we are now guaranteed (and not just mostly guaranteed) to avoid the double illusion of transparency. If Bob misinterpreted Alice, she can correct him. If Bob interpreted correctly, Alice can immediately respond to Bob’s criticism.
There is still overhead; Bob has to spend effort on explaining his interpretation of Alice. But it is considerably less overhead than scenario 1, and it is the minimum amount of overhead that still guarantees avoidance of the double illusion of transparency.
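The overhead comparison between the three scenarios can be made concrete with a toy model. This is my own sketch, not anything from the thread: the function name, the single `p_wrong` parameter, and the one-message-per-turn assumption are all assumptions of the model.

```python
def expected_messages(scenario: int, p_wrong: float) -> float:
    """Expected number of messages before the substantive point is settled.

    Toy assumptions: each conversational turn is one message, and Bob's
    reading of Alice is wrong with probability p_wrong.
    """
    if scenario == 1:
        # Bob verifies his reading, Alice confirms or corrects, Bob critiques:
        # three messages every time, even when Bob had it right all along.
        return 3.0
    if scenario in (2, 3):
        # Bob critiques immediately (in scenario 3, stating his reading inline).
        # If he misread (probability p_wrong), Alice corrects him and he
        # critiques again: two extra messages in that branch.
        return 1.0 + 2.0 * p_wrong
    raise ValueError("unknown scenario")
```

On this toy model, scenario 1 always costs three messages, while scenarios 2 and 3 only reach that cost when Bob is certain to have misread (`p_wrong = 1`); at any lower error rate they are strictly cheaper, which is one way of cashing out the efficiency claim above.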
Personally, I favor the scenario 3 approach in cases of only moderate confidence that I’ve correctly understood my interlocutor, and the scenario 2 approach in cases of high confidence that I’ve correctly understood. (In cases of unusually low confidence, one simply asks for clarification, without necessarily putting forth a hypothesized interpretation.)
Scenarios 2 and 3 are undermined, however—their effectiveness and efficiency dramatically lowered—if people take offense at being misinterpreted, and demand that their critics achieve certainty of having correctly understood them, before writing any criticism. If people take any mis-aimed criticism as a personal attack, or lack of “interpretive labor” (in the form of the verification step as a prerequisite to criticism) as a sign of disrespect, then, obviously, scenarios 2 and 3 cannot work.
This constitutes a massive sacrifice of efficiency of communication, and thereby (because the burden of that inefficiency is borne by critics) disincentivizes lively debate, correction of flaws, and the exchange of ideas. What is gained, for that hefty price, is nothing.
After quite a while thinking about it, I’m still not sure I have an adequate response to this comment; I do take your points, and they’re quite good. I’ll do my best to respond to this in the post I’m writing on this topic. Perhaps when I post it we can continue the discussion there, if you feel it doesn’t adequately address your points.
Sounds good, and I am looking forward to reading your post!
Indeed there is. You go to the All view or the Meta view, and click the green “+ New post” link at the upper-right, just below the tab bar. (The new-post link currently doesn’t display when viewing your own user page, which is an oversight and should be fixed soon.)
Ah, thanks!
Your disjunction is wrong.
EDIT: oops, replied to the wrong comment.
How?
Spurious binary between one way things really seem, and the many ways one might guess. Even the one way it seems to you is in fact an educated guess.
That’s not a spurious binary, and in any case it doesn’t make the disjunction wrong. Observe:
Let P = “Alice meant exactly what it seems like she wrote.”
¬P = “It is not the case that Alice meant exactly what it seems like she wrote.”
And we know that P ∨ ¬P is true for all P.
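(As a sanity check, the law of excluded middle can be verified mechanically for two-valued logic; a minimal sketch in Python, exhaustively enumerating the possible truth values of P:)

```python
# Law of excluded middle: P ∨ ¬P holds for every truth value of P
# in classical two-valued logic.
def excluded_middle_holds() -> bool:
    return all(P or not P for P in (True, False))

print(excluded_middle_holds())  # prints True
```

(This only confirms the tautology in two-valued logic, of course; the substantive question in the thread is whether the two disjuncts exhaust the interpretations of Alice, which is what the reply below addresses.)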
Is “It is not the case that Alice meant exactly what it seems like she wrote” the same as “Alice meant something other than what it seems like she wrote”?
No, not quite. Other possibilities include things like “Alice didn’t mean anything at all, and was making a nonsense comment, as a sort of performance art”, etc. But I think we can discount those.
First thoughts:
One thing I don’t like about this proposal is—and you’re hearing me right—that it doesn’t do enough to positively incentivise criticism.
In particular, there needs to be a place where, when an idea is put to the test (to peer review), someone who writes a knock-down critique is celebrated, and that accomplishment is incorporated into their reputation in the community.
We want to have a place that both strongly incentivises good ideas, and strongly incentivises checking them—not disanalogous to how, in Inadequate Equilibria, the Visitor says that on his planet the ‘replicators’ are given major prestige.
Because I want criticisms that look like this, not like this.
The first link is Zvi’s thoughtful and well-written critique of a point made in Eliezer’s “No Fire Alarm for AGI” post. This is good criticism that puts a lot of effort into being clear to the reader. That’s why we curated it and it got loads of karma.
The comments of yours that I don’t like are things I would not want to find on almost any site successfully pursuing intellectual progress. They’re not nice comments to receive, but they’re also not very good criticism. Again, this isn’t all of your comments, but it often feels to me like you’re not engaging with the post very well (can’t pass the author’s ITT), or, if your criticisms are true, they’re attacking side notes (like a step that wasn’t rigorous, even though it was an unimportant step that wouldn’t be hard to make rigorous). If you look at the places of great intellectual progress in groups, you don’t see that they reduced the effort barrier to criticism to the minimum; rather, they increased the incentive for important criticism, criticism that knocked down the core of an idea.
If your criticisms were written in a way that didn’t feel rude, or like it was putting a burden on the author that you’re not willing to share, then that would be fine. If they were important (e.g. you were knocking down core ideas in the sequences, big mistakes everyone was making, or even just the central point of the post), then I would accept more bluntness/rudeness. But when it’s neither, then it’s not good enough.
(I’m at 30 mins.)
Honestly, this is just insulting. I don’t know if you intended it that way, but this does an excellent job of discouraging me from engaging.
For what it’s worth, I bet the intention was as follows: Ben had mentioned that he was going to ration his time in this thread for fear of rabbit-holes, he thought you might prefer to have some idea how much more Said-Ben discussion was possible, and so (the amount of time he’d spent not being immediately visible) he added that note. So, exactly the opposite of insulting intent.
I didn’t intend it that way. I won’t write such notes in future, and will keep them private.
If Said is insulted by your clarity about how much time you’re investing in interpretive labor, then I think this is evidence that Said’s sense of offense is not value-aligned with good discourse. If someone put a note like that on a response to a comment by me, I’d feel like they were making an effort to be metacooperative. 30 minutes is a long time for a single comment!
As I’ve commented, the point in that comment went to the heart of my objection not only to this post, but to a great many posts that are similar to this one along a critically important axis. I continue to be dismayed by the casualness with which this concern has been dismissed, given that it seems to me to be of the greatest importance to the epistemic health of Less Wrong.