I think there’s a problem here where “broad attention” and “harsh attention” are different tools that suggest different thresholds. I might think, for example, that a post announcing open registration for EA Global should be shown not just to everyone visiting the EA Forum, but also everyone subscribed to the EA Forum RSS, without thinking that it is a genuine intellectual accomplishment that will be referred to for years to come. I might also post an idea that I think should be held to high standards but don’t expect to be of wide interest, like my thoughts on how map design influences strategy games and what designs are best suited for a particular game.
My position is that subreddit-like things are the correct way to separate out rules (because a subreddit is a natural unit of moderation, it implies rulesets are mutually exclusive, and it makes visual presentation easy) and tag-like things are the correct way to separate out topics (because topics aren’t mutually exclusive and don’t obviously imply different rules). A version of LessWrong that has two subreddits, with names like ‘soft’ and ‘sharp’, seems like it would offer both a region for exploratory efforts and a region for solid accumulation, with users by default looking at both grouped together (but colored differently, perhaps).
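(To make the data-model distinction concrete, here is a minimal illustrative sketch; the names ‘soft’ and ‘sharp’ and the rule strings are placeholders, not a proposed implementation. The key property is that a post belongs to exactly one subreddit, which determines its ruleset, while it can carry any number of overlapping topic tags.)

```python
from dataclasses import dataclass, field
from enum import Enum


class Subreddit(Enum):
    """Each post lives in exactly one subreddit, so rulesets never overlap."""
    SOFT = "soft"    # exploratory, half-baked material
    SHARP = "sharp"  # material submitting to harsher scrutiny


@dataclass
class Post:
    title: str
    body: str
    subreddit: Subreddit                         # mutually exclusive: one ruleset per post
    tags: set[str] = field(default_factory=set)  # topics overlap freely


# Moderation rules key off the subreddit, never the tags.
MODERATION_RULES = {
    Subreddit.SOFT: "charitable, exploratory norms",
    Subreddit.SHARP: "harsh-scrutiny norms",
}

post = Post(
    title="Map design and strategy games",
    body="...",
    subreddit=Subreddit.SHARP,
    tags={"game-design", "strategy"},  # any number of topic tags
)
print(MODERATION_RULES[post.subreddit])
```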
One of the reasons why that vision seemed low priority (we might be getting to tags in the next few months, for example) was that, to the best of my knowledge, no poster was clamoring for the sharp subreddit. Most of what I would post to main in previous days would go there, and some of the posts I’m working on now are targeted at essentially that, but it’s much easier to post sharp posts in soft than it is to post soft posts in sharp.
Another reason why that vision seemed low priority was the belief that much of progress happens by transmission of ‘half-baked’ ideas, since the various pieces necessary to get the fully baked idea may reside in different people, or because one half-formed idea kicks off a train of thought in someone else that leads somewhere good. The reason to expose lots of people to a Nietzschean maxim is not because you think it is true and that they should just adopt it, but because you expect them to get something useful out of reacting to it. Or, to take Paul Graham’s post on essays, such a requirement devalues attempts to raise questions (even if you don’t have an airtight answer yet) compared to arguments for positions.
Under this model, requiring that ideas survive harsh scrutiny before spreading them widely kills the ability to make this sort of collaborative progress (among people who haven’t already sorted into private groups), and perhaps more importantly gives a misleading idea of how progress is generated. If one is trying to learn to do math like a professional mathematician, it is much more helpful to watch their day-to-day activities and chatter with colleagues than it is to read their published papers, because their published papers sweep much of the real work under the rug. Often one generates a hideous proof and then searches more and finds a prettier proof, but without the hideous proof one might have given up. And one doesn’t just absorb until one is fully capable of producing professional math; one interleaves observation with attempts to do the labor oneself, discovering which bits of it are hard and getting feedback on one’s products.
I might think, for example, that a post announcing open registration for EA Global should be shown not just to everyone visiting the EA Forum, but also everyone subscribed to the EA Forum RSS, without thinking that it is a genuine intellectual accomplishment that will be referred to for years to come.
This seems like an excellent argument for dynamic RSS feeds (which I am almost certain is a point I’ve made to Oliver Habryka in a past conversation). Such a feature, plus a robust tagging system, would solve all problems of the sort you describe here.
I might also post an idea that I think should be held to high standards but don’t expect to be of wide interest, like my thoughts on how map design influences strategy games and what designs are best suited for a particular game.
It’s not clear why a post like this should be on Less Wrong at all, but if it must be, then there seems to be nothing stopping you from prefacing it with “please apply frontpage-level scrutiny to this one, but I don’t actually want this promoted to the frontpage”.
… tag-like things …
I think that a good tagging system should, indeed, be a high priority in features to add to Less Wrong.
… no poster was clamoring for the sharp subreddit …
Well, I was not clamoring for it because I was under the impression that the entire front page of Less Wrong was, as you say, the “sharp subreddit”. That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.
Another reason why that vision seemed low priority was the belief that much of progress happens by transmission of ‘half-baked’ ideas, since the various pieces necessary to get the fully baked idea may reside in different people, or because one half-formed idea kicks off a train of thought in someone else that leads somewhere good.
I should like to see this belief defended. I am skeptical. But in any case, that’s what the personal blogs are for, no?
The reason to expose lots of people to a Nietzschean maxim is not because you think it is true and that they should just adopt it, but because you expect them to get something useful out of reacting to it.
Your meaning here is obscure to me, I’m afraid…
Or, to take Paul Graham’s post on essays, such a requirement devalues attempts to raise questions (even if you don’t have an airtight answer yet) compared to arguments for positions.
I consider that to be one of Graham’s weakest pieces of writing. At best, it’s useless rambling. At worst, it’s tantamount to “In Defense of Insight Porn”.
… requiring that ideas survive harsh scrutiny before spreading them widely kills the ability to make this sort of collaborative progress …
But this is precisely why I think it’s tremendously valuable that this harsh scrutiny take place in public. A post is promoted to the front page, and there, it’s scrutinized, and its ideas are discussed, etc.
The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training. They’re not just “anyone with an internet connection”. A professional mathematician’s half-baked idea on a mathematical topic is simply not comparable with a random internet person’s (or even a random “rationalist”’s) half-baked idea on an arbitrary topic.
That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.
How do you expect to solve this problem? The primary thing I’ve heard from you is a defense of your style of commenting and its role in the epistemic environment, and regardless of whether or not I agree with it, the problem that I’m trying to solve is getting more good content on LW, because that’s how LW seems useful for solving problems related to advancing human rationality and avoiding human extinction. When we ask people who made top tier posts before why they don’t make them now, or why they put them elsewhere, the answer is resoundingly not “we were put off by mediocre content on LW” but “we were put off by commenters who were mean and made writing for LW unpleasant.”
Keep in mind that the problem here is not “how do we make LW a minimally acceptable place to post things?” but “how do we make posting for LW a better strategy than other competitors?”. I could put effort into editing my post on a Bayesian view of critical rationalism that’s been sitting in my Google Docs drafts for months to finally publish it on LW, or I could be satisfied that it was seen by the primary person I wrote it for, and just let it rot. I could spend some more hours reading a textbook to review for LessWrong, or I could host a dinner party in Berkeley and talk to other rationalists in person.
The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training.
I notice some confusion here; Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training. Rationality, of course, is much more in its infancy than mathematics is, and so we should expect professional mathematicians to be better at mathematics than rationalists are at rationality. It’s also the case that people in mathematics grad school often make bad mathematical arguments that their peers and instructors should attempt to correct, but when they do so it’s typically with a level of professional courtesy that, while blunt, is rarely insulting.
So it seems like the position you’re taking here is either something like “no rationalist has enough reputation that they deserve something like professional courtesy”, “some rationalists do, but it’s perhaps a dozen of them instead of hundreds,” or “concise sarcasm is what professional courtesy looks like,” or something harder for me to construct.
It seems to me that LW sometimes has problems with mediocre commenters who are more prolific than they are insightful, who need to somehow be dissuaded from clogging up the site. But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?
In this very interesting discussion I mostly agree with you and Ben, but one thing in the comment above seems to me importantly wrong in a way that’s relevant:
When we ask people who made top tier posts before why they don’t make them now, or why they put them elsewhere, the answer is resoundingly not “we were put off by mediocre content on LW” but “we were put off by commenters who were mean and made writing for LW unpleasant.”
I bet that’s true. But you also need to consider people who never posted to LW at all but, if they had, would have made top-tier posts. Mediocre content is (I think) more likely to account for them than for people who were top-tier posters but then went away.
(Please don’t take me to be saying ”… and therefore we should be rude to people whose postings we think are mediocre, so that they go away and stop putting off the really good people”. I am not at all convinced that that is a good idea.)
I mostly agree, but one part seems a bit off and I feel like I should be on the record about it:
Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training.
It’s evidence that I’m a top example of the particular sort of rationality culture that LW is clustered around, and I think that’s enough to make the argument you’re trying to make, but being good at getting upvotes for writing about rationality is different in some important ways from being rational, in ways not captured by the analogy to math grad school.
I agree the analogy is not perfect, but I do think it’s better than you’re suggesting; in particular, it seems to me like going to math grad school as opposed to doing other things that require high mathematical ability (like quantitative finance, or going to physics grad school, or various styles of programming) is related to “writing about rationality rather than doing other things with rationality.” Like, many of the most rational people I know don’t ever post on LW because that doesn’t connect to their goals; similarly, many of the most mathematically talented people I know didn’t go to math grad school, because they ran the numbers on doing it and they didn’t add up.
But to restate the core point, I was trying to get at the question of “who do you think is worthy of not being sarcastic towards?”, because if the answer is something like “yeah, using sarcasm on the core LW userbase seems proper”, this seems highly related to the question of “is this person making LW better or worse?”.
But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?
I’d just like to comment that in my opinion, if we only had one post a month on LW, but it was guaranteed to be good and insightful and useful and relevant to the practice of rationality and not wrong in any way, that would be awesome.
The world is full of content. Attention is what is scarce.
That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.
How do you expect to solve this problem?
By attracting better people, and expecting better of those who are here already. Some will not rise to that expectation. That is to be expected. We will not see further posts from them. That is to be welcomed.
… the problem that I’m trying to solve is getting more good content on LW …
But this can only be a misguided goal. What is “good content”? Why do you want it? That is far too generic a desideratum! If you just want “good content”, and you don’t really care what kind of “good content”, you’ll inevitably suffer value / focus drift; and if you always want more “good content” without specific goals concerning how much and what kind and what is it for, then you’ll… well, you’ll have the sort of problem you’re having now, to be honest.
… because that’s how LW seems useful for solving problems related to advancing human rationality and avoiding human extinction.
I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.
The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training.
I notice some confusion here; Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training.
This is a shocking statement. I had to reread this sentence several times before I could believe that I’d read it right.
… just what, exactly, do you mean by “rationality”, that could make this claim true?!
So it seems like the position you’re taking here is either something like “no rationalist has enough reputation that they deserve something like professional courtesy”, “some rationalists do, but it’s perhaps a dozen of them instead of hundreds,” or “concise sarcasm is what professional courtesy looks like,” or something harder for me to construct.
Both the first and the second are plausible (“reputation” is not really the right concept here, but I’ll let it stand for now). The third is also near enough to truth.
Let’s skip all the borderline examples and go straight to the top. Among “rationalists”, who has the highest reputation? Who is Top Rationalist? Obviously, it’s Eliezer. (Well, some people disagree. Fine. I think it’s Eliezer; I think you’re likely to agree; in any case he makes the top five easily, yes?)
I have great respect for Eliezer. I admire his work. I have said many times that the Sequences are tremendously important, well-written, etc. What’s more, though I’ve only met Eliezer a couple of times, it’s always seemed to me that he’s a decent guy, and I have absolutely nothing against him as a person.
But I’ve also read some of the stuff that Eliezer has posted on Facebook, over the course of the last half-decade or more. Some of it has been well-written and insightful. Some of it has been sheer absurdity, and if he had posted it on Less Wrong, you can bet that I would not spare those posts from the same unsentimental and blunt scrutiny. To do any less would be intellectual dishonesty.
Even the cleverest and best of us can produce nonsense. If no one scrutinizes our output, or if we’re surrounded only by “critics” who avoid anything substantive or harsh, the nonsense will soon dominate. This is worse than not having a Less Wrong at all.
It seems to me that LW sometimes has problems with mediocre commenters who are more prolific than they are insightful, who need to somehow be dissuaded from clogging up the site. But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?
But my suggestion answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?
By attracting better people, and expecting better of those who are here already. Some will not rise to that expectation. That is to be expected. We will not see further posts from them. That is to be welcomed.
I claim that we tried this, from about 2014 to 2016, and that the results were underwhelming. How will you attract better people, and from where? [This is a serious question, instead of just exasperation; we do actually have a budget that we could devote to attracting better people if there were promising approaches.]
I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.
As Benquo suggests, there are additional specifics that are necessary, that are tedious to spell out but I assumed easy to infer.
But my suggestion answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?
Your explanation doesn’t suggest why authors would want to do step #2, or where we would get a class of dedicated curators who would rewrite their posts for them when they don’t do it themselves. [Noting also that it would be helpful if those curators were not just better at composition than the original authors, but also better at conceptual understanding, such that they could distill things effectively instead of merely summarizing and arranging the thoughts of others.]
Perhaps another angle on the problem: there is a benefit to having one conversational locus. Putting something on the frontpage of LessWrong makes it more likely that people who check LessWrong have read it, and moves us closer to the ‘having one conversational locus’ world. It seems to me like you’re making a claim of the form “the only things worth having in that primary conversational locus are the sorts of things where the author is fine handling my sarcastic criticisms”, and I disagree with that, because of the aforementioned models of how progress works.
I claim that we tried this, from about 2014 to 2016, and that the results were underwhelming.
Uh, how’s that? Anyway, even if we grant that you tried this, well… no offense meant, but maybe you tried it the wrong way? “We tried doing something like this, once, and it didn’t work out, therefore it’s impossible or at least not worth trying” is hardly what you’d call “solid logic”.
How will you attract better people, and from where? [This is a serious question, instead of just exasperation; we do actually have a budget that we could devote to attracting better people if there were promising approaches.]
This is, indeed, a serious question, and one well worth considering in detail and at length, not just as a tangent to a tangent, deep in one subthread of an unrelated comments section.
But here’s one answer, given with the understanding that this is a brief sketch, and not the whole answer:
Prestige and value attract contributors. Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated, where may be found not a graphomanic torrent of “content” but a scant few gems of true insight and well-tested intellectual innovations, and then “my essay on <topic> was posted on Less Wrong, and even they found no fault with it” becomes a point of pride, and “curated on Less Wrong” becomes a mark of distinction. When you can say to someone, “I think your writing on <topic> is good enough for Less Wrong” and have that be a credible and unusual compliment, you will easily be able to find contributors. When you’ve created a culture where you can post on Less Wrong and there, get the best, most insightful, most no-nonsense, cuts-to-the-heart-of-the-matter criticism, people who are truly interested in perfecting their ideas will want to post here, and to submit to scrutiny.
I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.
As Benquo suggests, there are additional specifics that are necessary, that are tedious to spell out but I assumed easy to infer.
Not so easy, I regret to say…
But my suggestion answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?
Your explanation doesn’t suggest why authors would want to do step #2, or where we would get a class of dedicated curators who would rewrite their posts for them when they don’t do it themselves. [Noting also that it would be helpful if those curators were not just better at composition than the original authors, but also better at conceptual understanding, such that they could distill things effectively instead of merely summarizing and arranging the thoughts of others.]
See above for why authors would want to do this. As for “a class of dedicated curators who would rewrite their posts”, I never suggested anything remotely like this, and would never suggest it.
Perhaps another angle on the problem: there is a benefit to having one conversational locus. Putting something on the frontpage of LessWrong makes it more likely that people who check LessWrong have read it, and moves us closer to the ‘having one conversational locus’ world. It seems to me like you’re making a claim of the form “the only things worth having in that primary conversational locus are the sorts of things where the author is fine handling my sarcastic criticisms”, and I disagree with that, because of the aforementioned models of how progress works.
Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well. This is definitely a “there is a technical solution which cuts right through the Gordian knot of social problems” case.
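(To gesture at what I mean in the loosest possible terms: a “dynamic” feed is simply a tag query rendered as RSS, so that each reader subscribes to exactly the slice of posts they care about. The sketch below is purely illustrative; the post fields, tag names, and URLs are assumptions, not anything Less Wrong actually exposes.)

```python
# A rough sketch of a "dynamic RSS feed": any tag query becomes its own feed.
# Post fields, tags, and URLs here are made up for illustration.
from xml.etree import ElementTree as ET


def build_feed(posts, required_tags):
    """Return RSS 2.0 XML for the posts matching every tag in the query."""
    matching = [p for p in posts if required_tags <= p["tags"]]

    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Feed: " + ", ".join(sorted(required_tags))
    for p in matching:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = p["title"]
        ET.SubElement(item, "link").text = p["url"]
    return ET.tostring(rss, encoding="unicode")


posts = [
    {"title": "EA Global registration open", "url": "https://example.org/1",
     "tags": {"announcement", "ea"}},
    {"title": "Map design in strategy games", "url": "https://example.org/2",
     "tags": {"game-design"}},
]
# A subscriber who only wants announcements gets a feed containing just the first post:
print(build_feed(posts, {"announcement"}))
```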
Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated, where may be found not a graphomanic torrent of “content” but a scant few gems of true insight and well-tested intellectual innovations, and then “my essay on <topic> was posted on Less Wrong, and even they found no fault with it” becomes a point of pride, and “curated on Less Wrong” becomes a mark of distinction.
Where would you point to as a previous example of success in this regard? I don’t think the golden age of Less Wrong counts, as it seems to me the primary reason LessWrong was ever known as a place with high standards is because Eliezer’s writing and thinking were exceptional enough to draw together a group of people who found it interesting, and that group was a pretty high-caliber group. But it’s not like they came here because of the insightful comments; they came here for the posts, and read the comments because they happened to be insightful (and interested in a particular mode of communication over point-seeking status games). When the same commenters were around, but the good post-writers disappeared or slowed down, the site slowly withered as the good commenters stopped checking because there weren’t any good posts.
There have been a few examples of people coming to LessWrong with an idea to sell, essentially, which I think is the primary group that you would attract by having a reputation as a forum where only good ideas survive. I don’t recall many of them becoming solid contributors, but note that this is possibly a memory selection effect; when I think of “someone attracted to LW because of the prestige of us agreeing with them” I think of many people whose one-track focuses were not impressive, when perhaps someone I respect originally came to LW for those reasons and then had other interests as well.
With regards to the “solid logic” comment, do give us some credit for having thought through this issue and collected what data we can. From my point of view, having tried to sample the community’s impressions, the only people who have said the equivalent of “ah, criticism will make the site better, even if it’s annoying” are people who are the obvious suspects when post writers say the equivalent of “yeah, I stopped posting on Less Wrong because the comments were annoyingly nitpicky rather than focusing on the core of the point.”
I do want to be clear that ‘high-standards’ and ‘annoying’ are different dimensions, here, and we seem to be in a frustrating equilibrium where you see some features of your comments that make them annoying as actually good and thus perhaps something to optimize for (?!?), as opposed to a regrettable problem that is not worth the cost to fix given budgetary constraints. Perhaps an example of this is your comment in a parallel thread, where you suggest pedantically interpreting the word “impossible” makes conversations more smooth than doing interpretative labor to repair small errors in a transparent way. By the way I use the word “smooth”, things point in the opposite direction. [And this seems connected to a distinction between double crux and Stalnaker-style conversations, which is a post on my todo list that also hasn’t been written yet.]
Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well.
Dynamic RSS feeds are the opposite of a solution to this problem; the mechanism that constructs a single conversational locus is broadcast, where everyone is watching the same 9 o’clock news, as opposed to decentralized communication, where different people are reading different blogs and can’t refer to particular bits of analysis and assume that others have come across it before. Contrast the experience of someone trying to discuss the previous night’s Monday Night Football game with another football fan and two gamers trying to discuss their previous night’s video gaming with each other; even if they happened to play the same game, they almost certainly weren’t in the same match.
The thing that tagging helps you do is say “this post is more interesting to people who care about life extension research than people who don’t”, but that means you don’t show it to people who don’t care about life extension, and so when someone chats with someone else about Sarah Constantin’s analysis of a particular line of research, the other person is more likely to say “huh?” than if they sometimes get writings about a topic that doesn’t natively interest them through a curated feed.
Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well.
Dynamic RSS feeds are the opposite of a solution to this problem; the mechanism that constructs a single conversational locus is broadcast, where everyone is watching the same 9 o’clock news, as opposed to decentralized communication, where different people are reading different blogs and can’t refer to particular bits of analysis and assume that others have come across it before. Contrast the experience of someone trying to discuss the previous night’s Monday Night Football game with another football fan and two gamers trying to discuss their previous night’s video gaming with each other; even if they happened to play the same game, they almost certainly weren’t in the same match.
The thing that tagging helps you do is say “this post is more interesting to people who care about life extension research than people who don’t”, but that means you don’t show it to people who don’t care about life extension, and so when someone chats with someone else about Sarah Constantin’s analysis of a particular line of research, the other person is more likely to say “huh?” than if they sometimes get writings about a topic that doesn’t natively interest them through a curated feed.
We might not be talking about the same thing (in technical/implementation terms), as what you say does not apply to what I had in mind. (It’s awkward to hash this out in via comments like this; I’d be happy to discuss this in detail in a real-time chat medium like IRC.)
… we seem to be in a frustrating equilibrium where you see some features of your comments that make them annoying as actually good and thus perhaps something to optimize for (?!?), as opposed to a regrettable problem that is not worth the cost to fix given budgetary constraints. Perhaps an example of this is your comment in a parallel thread, where you suggest pedantically interpreting the word “impossible” makes conversations more smooth than doing interpretative labor to repair small errors in a transparent way.
“Pedantically” is a caricature, I think; I would say “straightforwardly”—but then, we have a live example of what we’re referring to, so terminology is not crucial. That aside, I stand by this point, and reaffirm it.
I am deeply skeptical of “interpretive labor”, at least as you seem to use the term.[1] Most examples that I can recall having seen of it, around here, seem to me to have affected the conversation negatively. (For instance, your example elsethread is exactly what I’d prefer not to see from my interlocutors.)
In particular, this—
repair small errors in a transparent way
—doesn’t actually happen, as far as I can tell. What happens instead is that errors are compounded and complicated, while simultaneously being swept under the rug. It seems to me that this sort of “interpretive labor” does much to confuse and muddle discussions on Less Wrong, while effecting the appearance of “smooth” and productive communication.
By the way I use the word “smooth”, things point in the opposite direction.
I don’t know… I think it’s at least possible that we’re using the word in basically the same way, but disagree on what effects various behaviors have. But perhaps this point is worth discussing on its own (if, perhaps, not in this thread): what is this “smoothness” property of discussions, and why is it desirable? (Or is it?)
[And this seems connected to a distinction between double crux and Stalnaker-style conversations, which is a post on my todo list that also hasn’t been written yet.]
Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated
They should ban you for how you’re interacting right now. I don’t know why they’re taking shit with your dodging the issue, but you either don’t have the ability to figure out when someone is correctly calling you out, or aren’t playing nice. Your brand of bullshit is a major reason I’ve avoided less wrong, and I want it gone. I want people to critique my ideas ruthlessly and not critique me as a person with Deservingness at all. if you think being an asshole is normal, go away. you don’t have to hold back on what you think the problems are, but I sure as hell expect you to say what you think the problems are without implying I said them wrong.
Lahwran, I downvoted your comment because I think it should be costly to write something that lowers the tone like this, but I appreciate you saying that this is the reason you left LW, and you might be right that I’m being too civil relative to the effects Said is directly having.
I’ve put in a bunch of effort to trade models of good discourse, but this conversation is heading towards its close. As I’ve said, if Said writes these sorts of comments in future, I’ll be hitting fairly hard with mod tools, regardless of his intentions. Notice that this brand of bullsh*t is otherwise largely gone from LW since the re-launch in March—Said has been an especially competent and productive individual who has this style of online interaction, so I’ve not wanted to dissuade him as strongly as the rest who’ve left, but my patience has since worn thin on this front, and I won’t be putting up with it in future.
But this can only be a misguided goal. What is “good content”? Why do you want it? That is far too generic a desideratum! If you just want “good content”, and you don’t really care what kind of “good content”, you’ll inevitably suffer value / focus drift; and if you always want more “good content” without specific goals concerning how much and what kind and what is it for, then you’ll… well, you’ll have the sort of problem you’re having now, to be honest.
It seems like, having interpreted Vaniver as making an obvious error, you decided to argue at length against it instead of considering that he might have meant something else. This is tedious and is punishing Vaniver for not tediously overspecifying everything.
Suppose that one Alice writes something which I, on the straightforward reading, consider to be definitely and clearly wrong. I read it and imagine two possibilities:
(A) Alice meant exactly what it seems like she wrote.
Presumably, then, Alice disagrees with my judgment of what she wrote as being definitely and clearly wrong. Well, there is nothing unusual in this; I have often encountered cases where people hold views which I consider to be definitely and clearly wrong, and vice-versa. (Surely you can say the same?)
In this case, what else is there to do but to respond to what Alice wrote?
(B) Alice meant something other than what it seems like she wrote.
What might that be? Who knows. I could try to guess what Alice meant. However, that is impossible. So I won’t try. If Alice didn’t mean the thing that it seems, on a straightforward reading, like she meant, then what she actually meant could be anything at all.
But suppose I go ahead and try anyway, and I come up with some possible thing that Alice could’ve meant. Do I have any reason to conclude that this is the only possibility for what Alice could’ve meant? I do not. I might be able to think longer, and come up with other possibilities. None of them would offer me any reason to assume that that one is what Alice meant.
And suppose I do pick out (via some mysterious and, no doubt, dubious method) some particular alternate meaning for Alice’s words. Well, and is that correct, then, or wrong? If it’s wrong, then I will argue the point, presumably. But then I will be in the strange position of saying something like this:
“Alice, you wrote X. However, X is obviously wrong. So you couldn’t have meant that. You instead meant Y, probably. But that’s still wrong, and here’s why.”
Have I any reason at all to expect that Alice won’t come back with “Actually, no, I did mean X; why do you say it’s obviously wrong?!”, or “Actually, no, I meant Z!”? None at all. And I’ll have wasted my time, and for what?
This sort of thing is almost always a pointless and terrible way of carrying on a discussion, which is why I don’t and won’t do it.
Consider two possible responses to this. Response A:

“I often successfully guess what people meant; it being impossible comes as a surprise to me. Are you claiming this has never happened to you?”
And response B:
Ah, Said likely meant that it is impossible to reliably infer Alice’s meaning, rather than occasionally doing so. But is a strategy where one never infers truly superior to a strategy where one infers, and demonstrates that they’re doing so such that a flat contradiction can be easily corrected?
[Incidentally, I believe this is the disjunction Benquo is pointing at; you seem to imply that either you interpret Alice literally, or you misinterpret Alice, which excludes the case where you correctly interpret Alice.]
[EDIT: I made a mistake in this comment, where response B was originally [what someone would say after doing that substitution], and then I said “wait, it’s not obvious where that came from, I should put the thoughts that would generate that response” and didn’t apply the same mental movement to say “wait, it’s not obvious that response A is a flat response and response B is a thought process that would generate a response, which are different types, I should call that out.”]
Yes, exactly; response A would be the more reasonable one, and more conducive to a smooth continuation of the discussion. So, responding to that one:
“Impossible” in a social context means “basically never happens, and if it does happen then it is probably by accident” (rather than “the laws of physics forbid it!”). Also, it is, of course, possible to guess what someone means by sheer dumb luck—picking an interpretation at random out of some pool of possibilities, no matter how unlikely-seeming, and managing by chance to be right.
But, I can’t remember a time when I’ve read what someone said, rejected the obvious (but obviously wrong) interpretation, tried to guess what they actually meant, and succeeded. When I’ve tried, the actual thing that (as it turned out) they meant was always something which I could never have even imagined as a hypothesis, much less picked out as the likeliest meaning. (And, conversely, when someone else has tried to interpret my comments in symmetric situations, the result has been the same.)
In my experience, this is true: for all practical purposes, either you understand what someone meant, or it’s impossible to guess what they could’ve meant instead.
[Incidentally, I believe this is the disjunction Benquo is pointing at; you seem to imply that either you interpret Alice literally, or you misinterpret Alice, which excludes the case where you correctly interpret Alice.]
This is not what I’m implying, because it’s not what I’m saying and what I’m saying has a straightforward meaning that isn’t this. See this comment. “Literally” is a strawman (not an intentional one, of course, I’m assuming); it can seem like Alice means something, without that necessarily being anything like the “literal reading” of her words (which in any case is a red herring); “straightforward” is what I said, remember.
Edit: I don’t know where all this downvoting is coming from; why is the parent at −2? I did not downvote it, in any case…
A couple more things I think your disjunction is missing.
1) If you don’t know what Alice means, instead of guessing, you can ask.
(Alternately, you can offer a brief guess, and give them the opportunity to clarify. This has the benefit of training your ability to infer more about what people mean.) You can do all this without making any arguments or judgments until you actually know what Alice meant.
2) Your phrasing implies that if Alice writes something that “seems to straightforwardly mean something, and Alice meant something else”, that the issue is that Alice failed to write adequately. But it’s also possible for the failure to be on the part of your comprehension rather than Alice’s writing. (This might be because Alice is writing for an audience of people with more context/background than you, or different life experiences than you)
Re: asking: well, sure. But what level of confidence in having understood what someone said should prompt asking them for clarification?
If the answer is “anything less than 100%”, then you just never respond directly to anything anyone writes, without first going through an elaborate dance of “before I respond or comment, let me verify that this is what you meant: [insert re-stating of the entirety of the post or comment you’re responding to]”; then, after they say “yes, that is what I meant”, you respond; then, before they respond to you, they first go “now, let me make sure I understand your response: [insert re-stating of the entirety of your response]” … and so on.
Obviously, this is no way to have a discussion.
But if there is some threshold of confidence in having understood that licenses you to go ahead and respond, without first asking whether your interlocutor meant the thing that it seems like they meant, then… well, you’re going to have situations where it turns out that actually, they meant something else.
Unless, of course, what you’re proposing is a policy of always asking for clarification if you disagree, or think that your interlocutor is mistaken, etc.? But then what you’re doing is imposing a greater cost on dissenting responses than assenting ones. Is this really what you want?
Re: did Alice fail to communicate or did I fail to comprehend: well, the question of “who is responsible for successful communication—author or audience?” is hardly a new one. Certainly any answer other than “it is, to some extent, a collaborative effort” is clearly wrong.
The question is, just how much is “some extent”? It is, of course, quite possible to be so pedantic, so literal-minded, so all-around impenetrable, that even the most heroically patient and singularly clear of authors cannot get through to you. On the other hand, it’s also possible to write sloppily, or to just plain have bad ideas. (If I write something that is wrong, and you express your disagreement, and I say “no, you’ve misunderstood, actually I’m right”, is it fair to say that you’ve failed in your duty as a conscientious reader?)
In any case, the matter seems somewhat academic. As far as I can tell, in the case at hand, I have not misunderstood anything that Benquo said. (Certainly I’ve seen no one posting any corrections to my reading of the OP. Mere claims that I’ve misunderstood, with no elaboration, are hardly convincing!)
what level of confidence in having understood what someone said should prompt asking them for clarification?
This is an isolated demand for rigor. Obviously there’s no precise level of confidence, in percentages, that should prompt asking clarification. As with many things, context matters. Sometimes, what indicates a need to ask for clarification is that a disagreement persists for longer than it seems like it ought to (indicating that there might be something deeper at work, like a misunderstanding). Sometimes, what indicates this is your interlocutor saying something that seems absurd or obviously mistaken. The second seems relevant in the immediate instance, given that what prompted this line of discussion was your taking Vaniver at his word when he said something that seemed, to you, obviously mistaken.
Note that I say “obviously mistaken.” If your interlocutor says something that seems mistaken, that’s one thing, and as you say, it shouldn’t always prompt a request for clarification; sometimes there’s just a simple disagreement in play. But if your interlocutor says something that seems obviously wrong, and at the same time they seem like a generally smart person who isn’t wont to say obviously wrong things, that may indicate that there is something they see that you don’t, in which case it would be useful to ask for clarification.
In this particular case, it seems to me that “good content” could be vacuous, or it could be a stand-in for something like “content that meets some standards which I vaguely have in mind but don’t feel the desire or need to specify at the moment.” It looks like Vaniver, hoping that you would realize that the first usage is so obviously dumb that he wouldn’t be intending it, used it to mean the second usage in order to save some typing time or brain cycles or something (I don’t claim to know what particular standards he has in mind, but clearly standards that would be useful for “solving problems related to advancing human rationality and avoiding human extinction”). You interpreted it as the first anyways, even though it seemed to you quite obviously a bad idea to optimize for “good content” in that vacuous sense. Instead, the fact that it seemed not only wrong, but obviously wrong, should have alerted you to the fact that Vaniver perhaps meant something different, at which point you could have asked for clarification (“what do you have in mind when you say ‘good content’? That seems to me obviously too vacuous to be a good idea. Perhaps you have some more concrete standards in mind and simply decided not to spell them out?”)
As far as I can tell, in the case at hand, I have not misunderstood anything that Benquo said.
“The case at hand” was your misunderstanding of Vaniver, not Benquo.
Hm. After writing this comment I notice I did something of the same thing to you. I interpreted your request for a numerical threshold literally, even though I considered it not only mistaken, but obviously so. Thus I retract my claim (at least in its strong form “any time your interlocutor says something that seems obviously mistaken, ask for clarification”). I continue to think that asking for clarification is often useful, but I think that, as with many things, there are few or no hard-and-fast rules for when to do so; rather, there are messy heuristics. If your interlocutor says something obviously mistaken, that’s sometimes an indication that you should ask for clarification. Sometimes it’s not. I think it probably would have been prudent for you to either ask for clarification from Vaniver, or assume he didn’t mean the vacuous interpretation of “good content.” I think I probably don’t need to ask for clarification about what you meant, it seemed pretty obvious you meant it literally. I realize this seems like a rather self-serving set of judgements. Perhaps it is. I’m not really sure what to do about that right now, or whether and how to revise it.
EDIT: if it turns out you didn’t mean it literally, then obviously I will know how I should revise my judgements (namely I should revise my judgement that I didn’t need to ask you for clarification).
Ikaxas, I would be strong-upvoting your comments here except that I’m guessing engaging further here does more harm than good. I’d like to encourage you to write a separate post instead, perhaps reusing large portions of your comments. It seems like you have a bunch of valuable things to say about how to use the interpretive labor concept properly in discourse.
Well, the second part of your comment (after the rule) pre-empts much of what I was going to say, so—yes, indeed. Other than that:
I think it probably would have been prudent for you to either ask for clarification from Vaniver, or assume he didn’t mean the vacuous interpretation of “good content.” I think I probably don’t need to ask for clarification about what you meant, it seemed pretty obvious you meant it literally. I realize this seems like a rather self-serving set of judgements. Perhaps it is. I’m not really sure what to do about that right now, or whether and how to revise it.
Yes, I think this seems like a rather self-serving set of judgments.
As it happens, I didn’t mean my question literally, in the sense that it was a rhetorical question. My point, in fact, was almost precisely what you responded, namely: clearly the threshold is not 100%, and also clearly, it’s going to depend on context… but that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.
Other points:
But if your interlocutor says something that seems obviously wrong, and at the same time they seem like a generally smart person who isn’t wont to say obviously wrong things …
I have never met such a person, despite being surrounded, in my social environment, by people at least as intelligent as I am, and often more so. In my experience, everyone says obviously wrong things sometimes (and, conversely, I sometimes say things that seem obviously wrong to others). If this never happens to you, then this might be evidence of some troubling properties of your social circles.
In this particular case, it seems to me that “good content” could be vacuous, or it could be a stand-in for something like “content that meets some standards which I vaguely have in mind but don’t feel the desire or need to specify at the moment.”
That’s still vacuous, though. If that’s what it’s a stand-in for, then I stand by my comments.
Instead, the fact that it seemed not only wrong, but obviously wrong, should have alerted you to the fact that Vaniver perhaps meant something different, at which point you could have asked for clarification (“what do you have in mind when you say ‘good content’? That seems to me obviously too vacuous to be a good idea. Perhaps you have some more concrete standards in mind and simply decided not to spell them out?”)
Indeed, I could have. But consider these two scenarios:
Scenario 1:
Alice: [makes some statement]
Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?
Scenario 2:
Alice: [makes some statement]
Bob: That’s obviously wrong, because [reasons].
Alice: But of course [straightforward reading] isn’t actually what I meant, as that would indeed be obviously wrong. Instead, I meant [other thing].
You seem to be saying that Scenario 1 is obviously (!!) superior to Scenario 2. But I disagree! I think Scenario 2 is better.
… now, does this claim of mine seem obviously wrong to you? Is it immediately clear why I say this? (If I hadn’t asked this, would you have asked for clarification?) I hope you don’t mind if I defer the rest of my point until after your response to this bit, as I think it’s an interesting test case. (If you don’t want to guess, fair enough; let me know, and I’ll just make the rest of my point.)
I’ve been mulling over where I went wrong here, and I think I’ve got it.
that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.
I think this is where I misinterpreted you. I think I thought you were trying to claim that unless there’s some threshold or some clear rule for deciding when to ask for clarification, it’s not worth implementing “ask for clarification if you’re unsure” as a conversational norm at all, which is why I said it was an isolated demand for rigor. But if all you were trying to say was what you said in the quoted bit, that’s not an isolated demand for rigor. I totally agree that there will be false positives, in the sense that misunderstandings can persist for a while without anyone noticing or thinking to ask for clarification, without this being anyone’s fault. However, I also think that if there is a misunderstanding, this will become apparent at some point if the conversation goes on long enough, and whenever that is, it’s worth stopping to have one or both parties do something in the vicinity of trying to pass the other’s ITT, to see where the confusion is.
I think another part of the problem here is that part of what I was trying to argue was that in this case, of your (mis?)understanding of Vaniver, it should have been apparent that you needed to ask for clarification, but I’m much less confident of this now. My arguing that, if a discussion goes on long enough, misunderstandings will reveal themselves, isn’t enough to argue that in this case you should immediately have recognized that you had misunderstood (if in fact you have misunderstood, which, if you still object to Vaniver’s point as I reframed it, may not be the case). My model allows that misunderstandings can persist for quite a while unnoticed, so it doesn’t really entail that you ought to have asked for clarification here, in this very instance.
Anyway, as Ben suggested I’m working on a post laying out my views on interpretive labor, ITTs, etc. in more detail, so I’ll say more there. (Relatedly, is there a way to create a top-level post from greaterwrong? I’ve been looking for a while and haven’t been able to find it if there is.)
consider these two scenarios
I agree the model I’ve been laying out here would suggest that the first scenario is better, but I find myself unsure which I think is better all things considered. I certainly don’t think scenario 1 is obviously better, despite the fact that this is probably at least a little inconsistent with my previous comments. My rough guess as to where you’re going with this is something like “scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way).”
If this is where you are going, I have a couple disagreements with it, but I’ll wait until you’ve explained the rest of your point to state them in case I’ve guessed wrong (which I’d guess is fairly likely in this case).
My rough guess as to where you’re going with this is something like “scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way).”
Basically, yes.
The problem, really, is—what? Not misunderstanding per se; that is solvable. The problem is the double illusion of transparency: when I think I’ve understood you (that is, I think that my interpretation of your words, call it X, matches your intent, which I assume is also X), and you think I’ve understood you (that is, you think that my interpretation of your words is Y, which matches what you know to be your intent, i.e. also Y); but actually your intent was Y and my interpretation is X, and neither of us is aware of this composite fact.
How to avoid this? Well, actually this might be one of two questions: first, how to guarantee that you avoid it? second, how to mostly guarantee that you avoid it? (It is easy to see that relaxing the requirement potentially yields gains in efficiency, which is why we are interested in the latter question also.)
Scenario 1—essentially, verifying your interpretation explicitly, every time any new ideas are exchanged—is one way of guaranteeing (to within some epsilon) the avoidance of double illusion of transparency. Unfortunately, it’s extremely inefficient. It gets tedious very quickly; frustration ensues. This approach cannot be maintained. It is not a solution, inasmuch as part of what makes a solution workable is that it must be actually practical to apply it.
By the way—just why is scenario 1 so very, very inefficient? Is it only because of the overhead of verification messages (a la the SYN-ACK of TCP)? That is a big part of the problem, but not the only problem. Consider this extended version:
Scenario 1a:
Alice: [makes some statement]
Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?
Alice: Wait, what? Why would that be obviously wrong?
Bob: Well, because [reasons], of course.
So now we’ve devolved into scenario 2, but having wasted two messages. And gained… what? Nothing.
Scenario 2—essentially, never explicitly verifying anything, responding to your interpretation of your interlocutor’s comments, and trusting that any misinterpretation will be inferred from your response and corrected—is one way of mostly guaranteeing the avoidance of double illusion of transparency. It is not foolproof, of course, but it is very efficient.
Scenarios 1 and 2 aren’t our only options. There is also…
Scenario 3:
Alice: [makes some statement]
Bob: Assuming you meant [straightforward reading], that is obviously wrong, because [reasons].
Note that we are now guaranteed (and not just mostly guaranteed) to avoid the double illusion of transparency. If Bob misinterpreted Alice, she can correct him. If Bob interpreted correctly, Alice can immediately respond to Bob’s criticism.
There is still overhead; Bob has to spend effort on explaining his interpretation of Alice. But it is considerably less overhead than scenario 1, and it is the minimum amount of overhead that still guarantees avoidance of the double illusion of transparency.
Personally, I favor the scenario 3 approach in cases of only moderate confidence that I’ve correctly understood my interlocutor, and the scenario 2 approach in cases of high confidence that I’ve correctly understood. (In cases of unusually low confidence, one simply asks for clarification, without necessarily putting forth a hypothesized interpretation.)
Scenarios 2 and 3 are undermined, however—their effectiveness and efficiency dramatically lowered—if people take offense at being misinterpreted, and demand that their critics achieve certainty of having correctly understood them, before writing any criticism. If people take any mis-aimed criticism as a personal attack, or lack of “interpretive labor” (in the form of the verification step as a prerequisite to criticism) as a sign of disrespect, then, obviously, scenarios 2 and 3 cannot work.
This constitutes a massive sacrifice of efficiency of communication, and thereby (because the burden of that inefficiency is borne by critics) disincentivizes lively debate, correction of flaws, and the exchange of ideas. What is gained, for that hefty price, is nothing.
After quite a while thinking about it I’m still not sure I have an adequate response to this comment; I do take your points, they’re quite good. I’ll do my best to respond to this in the post I’m writing on this topic. Perhaps when I post it we can continue the discussion there if you feel it doesn’t adequately address your points.
Relatedly, is there a way to create a top-level post from greaterwrong? I’ve been looking for a while and haven’t been able to find it if there is.
Indeed there is. You go to the All view or the Meta view, and click the green “+ New post” link at the upper-right, just below the tab bar. (The new-post link currently doesn’t display when viewing your own user page, which is an oversight and should be fixed soon.)
That’s not a spurious binary, and in any case it doesn’t make the disjunction wrong. Observe:
Let P = “Alice meant exactly what it seems like she wrote.”
¬P = “It is not the case that Alice meant exactly what it seems like she wrote.”
And we know that P ∨ ¬P is true for all P.
Is “It is not the case that Alice meant exactly what it seems like she wrote” the same as “Alice meant something other than what it seems like she wrote”?
No, not quite. Other possibilities include things like “Alice didn’t mean anything at all, and was making a nonsense comment, as a sort of performance art”, etc. But I think we can discount those.
I think there’s a problem here where “broad attention” and “harsh attention” are different tools that suggest different thresholds. I might think, for example, that a post announcing open registration for EA Global should be shown not just to everyone visiting the EA Forum, but also everyone subscribed to the EA Forum RSS, without thinking that it is a genuine intellectual accomplishment that will be referred to for years to come. I might also post an idea that I think should be held to high standards but don’t expect to be of wide interest, like my thoughts on how map design influences strategy games and what designs are best suited for a particular game.
My position is that subreddit-like things are the correct way to separate out rules (both because it’s a natural unit of moderation, and it implies rulesets are mutually exclusive, and it makes visual presentation easy) and tag-like things are the correct way to separate out topics (because topics aren’t mutually exclusive and don’t obviously imply different rules). A version of lesswrong that has two subreddits, with names like ‘soft’ and ‘sharp’, seems like it would both offer a region for exploratory efforts and a region for solid accumulation, with users by default looking at both grouped together (but colored differently, perhaps).
One of the reasons why that vision seemed low priority (we might be getting to tags in the next few months, for example) was that, to the best of my knowledge, no poster was clamoring for the sharp subreddit. Most of what I would post to main in previous days would go there, and some of the posts I’m working on now are targeted at essentially that, but it’s much easier to post sharp posts in soft than it is to post soft posts in sharp.
Another reason why that vision seemed low priority was the belief that much of progress happens by transmission of ‘half-baked’ ideas, since the various pieces necessary to get the fully baked idea may reside in different people, or because one half-formed idea kicks off a train of thought in someone else that leads somewhere good. The reason to expose lots of people to a Nietzschean maxim is not because you think it is true and that they should just adopt it, but because you expect them to get something useful out of reacting to it. Or, to take Paul Graham’s post on essays, it devalues attempts to raise questions (even if you don’t have an airtight answer yet) compared to arguments for positions.
Under this model, requiring that ideas survive harsh scrutiny before spreading them widely kills the ability to make this sort of collaborative progress (among people who haven’t already sorted into private groups), and perhaps more importantly gives a misleading idea of how progress is generated. If one is trying to learn to do math like a professional mathematician, it is much more helpful to watch their day-to-day activities and chatter with colleagues than it is to read their published papers, because their published papers sweep much of the real work under the rug. Often one generates a hideous proof and then searches more and finds a prettier proof, but without the hideous proof one might have given up. And one doesn’t just absorb until one is fully capable of producing professional math; one interleaves observation with attempts to do the labor oneself, discovering which bits of it are hard and getting feedback on one’s products.
This seems like an excellent argument for dynamic RSS feeds (which I am almost certain is a point I’ve made to Oliver Habryka in a past conversation). Such a feature, plus a robust tagging system, would solve all problems of the sort you describe here.
It’s not clear why a post like this should be on Less Wrong at all, but if it must be, then there seems to be nothing stopping you from prefacing it with “please apply frontpage-level scrutiny to this one, but I don’t actually want this promoted to the frontpage”.
I think that a good tagging system should, indeed, be a high priority in features to add to Less Wrong.
Well, I was not clamoring for it because I was under the impression that the entire front page of Less Wrong was, as you say, the “sharp subreddit”. That few or none of the people who post (as opposed to merely comment) on Less Wrong are interested in such an environment is merely as expected, and is, in fact, a sign of the problem.
I should like to see this belief defended. I am skeptical. But in any case, that’s what the personal blogs are for, no?
Your meaning here is obscure to me, I’m afraid…
I consider that to be one of Graham’s weakest pieces of writing. At best, it’s useless rambling. At worst, it’s tantamount to “In Defense of Insight Porn”.
But this is precisely why I think it’s tremendously valuable that this harsh scrutiny take place in public. A post is promoted to the front page, and there, it’s scrutinized, and its ideas are discussed, etc.
The comparison to professional mathematicians is inapt. Professional mathematicians, engaging in day-to-day activities and chattering with colleagues, have been pre-selected for being on the extreme right tail of mathematical ability and training. They’re not just “anyone with an internet connection”. A professional mathematician’s half-baked idea on a mathematical topic is simply not comparable with a random internet person’s (or even a random “rationalist”’s) half-baked idea on an arbitrary topic.
How do you expect to solve this problem? The primary thing I’ve heard from you is a defense of your style of commenting and its role in the epistemic environment, and regardless of whether or not I agree with it, the problem that I’m trying to solve is getting more good content on LW, because that’s how LW seems useful for solving problems related to advancing human rationality and avoiding human extinction. When we ask people who made top tier posts before why they don’t make them now, or why they put them elsewhere, the answer is resoundingly not “we were put off by mediocre content on LW” but “we were put off by commenters who were mean and made writing for LW unpleasant.”
Keep in mind that the problem here is not “how do we make LW a minimally acceptable place to post things?” but “how do we make posting for LW a better strategy than other competitors?”. I could put effort into editing my post on a Bayesian view of critical rationalism that’s been sitting in my Google Docs drafts for months to finally publish it on LW, or I could be satisfied that it was seen by the primary person I wrote it for, and just let it rot. I could spend some more hours reading a textbook to review for LessWrong, or I could host a dinner party in Berkeley and talk to other rationalists in person.
I notice some confusion here; Benquo is in the top 100 LW users of all time by karma, which seems to be at least as much selection for rationality as being in math grad school is selection for mathematical ability and training. Rationality, of course, is much more in its infancy than mathematics is, and so we should expect professional mathematicians to be better at mathematics than rationalists are at rationality. It’s also the case that people in mathematics grad school often make bad mathematical arguments that their peers and instructors should attempt to correct, but when they do so it’s typically with a level of professional courtesy that, while blunt, is rarely insulting.
So it seems like the position you’re taking here is either something like “no rationalist has enough reputation that they deserve something like professional courtesy”, “some rationalists do, but it’s perhaps a dozen of them instead of hundreds,” or “concise sarcasm is what professional courtesy looks like,” or something harder for me to construct.
It seems to me that LW sometimes has problems with mediocre commenters who are more prolific than they are insightful, who need to somehow be dissuaded from clogging up the site. But if you’re putting Benquo in that category, I really don’t see how we’re going to get more than, say, a post a month on LW, at which point why have LW instead of a collection of personal blogs?
In this very interesting discussion I mostly agree with you and Ben, but one thing in the comment above seems to me importantly wrong in a way that’s relevant:
I bet that’s true. But you also need to consider people who never posted to LW at all but, if they had, would have made top-tier posts. Mediocre content is (I think) more likely to account for them than for people who were top-tier posters but then went away.
(Please don’t take me to be saying ”… and therefore we should be rude to people whose postings we think are mediocre, so that they go away and stop putting off the really good people”. I am not at all convinced that that is a good idea.)
I agree that meh content can be harmful in that way. I don’t think that Said is succeeding at selectively discouraging meh content.
I mostly agree, but one part seems a bit off and I feel like I should be on the record about it:
It’s evidence that I’m a top example of the particular sort of rationality culture that LW is clustered around, and I think that’s enough to make the argument you’re trying to make, but being good at getting upvotes for writing about rationality is different in some important ways from being rational, in ways not captured by the analogy to math grad school.
I agree the analogy is not perfect, but I do think it’s better than you’re suggesting; in particular, it seems to me like going to math grad school as opposed to doing other things that require high mathematical ability (like quantitative finance, or going to physics grad school, or various styles of programming) is related to “writing about rationality rather than doing other things with rationality.” Like, many of the most rational people I know don’t ever post on LW because that doesn’t connect to their goals; similarly, many of the most mathematically talented people I know didn’t go to math grad school, because they ran the numbers on doing it and they didn’t add up.
But to restate the core point, I was trying to get at the question of “who do you think is worthy of not being sarcastic towards?”, because if the answer is something like “yeah, using sarcasm on the core LW userbase seems proper”, then this seems highly related to the question of “is this person making LW better or worse?”.
I’d just like to comment that in my opinion, if we only had one post a month on LW, but it was guaranteed to be good and insightful and useful and relevant to the practice of rationality and not wrong in any way, that would be awesome.
The world is full of content. Attention is what is scarce.
By attracting better people, and expecting better of those who are here already. Some will not rise to that expectation. That is to be expected. We will not see further posts from them. That is to be welcomed.
But this can only be a misguided goal. What is “good content”? Why do you want it? That is far too generic a desideratum! If you just want “good content”, and you don’t really care what kind of “good content”, you’ll inevitably suffer value / focus drift; and if you always want more “good content” without specific goals concerning how much and what kind and what is it for, then you’ll… well, you’ll have the sort of problem you’re having now, to be honest.
I struggle to believe that you really think that “more good content”, period, no specifics, is what translates into avoiding human extinction.
This is a shocking statement. I had to reread this sentence several times before I could believe that I’d read it right.
… just what, exactly, do you mean by “rationality”, that could make this claim true?!
Both the first and the second are plausible (“reputation” is not really the right concept here, but I’ll let it stand for now). The third is also near enough to truth.
Let’s skip all the borderline examples and go straight to the top. Among “rationalists”, who has the highest reputation? Who is Top Rationalist? Obviously, it’s Eliezer. (Well, some people disagree. Fine. I think it’s Eliezer; I think you’re likely to agree; in any case he makes the top five easily, yes?)
I have great respect for Eliezer. I admire his work. I have said many times that the Sequences are tremendously important, well-written, etc. What’s more, though I’ve only met Eliezer a couple of times, it’s always seemed to me that he’s a decent guy, and I have absolutely nothing against him as a person.
But I’ve also read some of the stuff that Eliezer has posted on Facebook, over the course of the last half-decade or more. Some of it has been well-written and insightful. Some of it has been sheer absurdity, and if he had posted it on Less Wrong, you can bet that I would not spare those posts from the same unsentimental and blunt scrutiny. To do any less would be intellectual dishonesty.
Even the cleverest and best of us can produce nonsense. If no one scrutinizes our output, or if we’re surrounded only by “critics” who avoid anything substantive or harsh, the nonsense will soon dominate. This is worse than not having a Less Wrong at all.
But my suggestion answers precisely this concern! How can you ask this question after I’ve addressed this matter in such detail?
I claim that we tried this, from about 2014 to 2016, and that the results were underwhelming. How will you attract better people, and from where? [This is a serious question, instead of just exasperation; we do actually have a budget that we could devote to attracting better people if there were promising approaches.]
As Benquo suggests, there are additional specifics that are necessary, that are tedious to spell out but I assumed easy to infer.
Your explanation doesn’t suggest why authors would want to do step #2, or where we would get a class of dedicated curators who would rewrite their posts for them when they don’t do it themselves. [Noting also that it would be helpful if those curators were not just better at composition than the original authors, but also better at conceptual understanding, such that they could distill things effectively instead of merely summarizing and arranging the thoughts of others.]
Perhaps another angle on the problem: there is a benefit to having one conversational locus. Putting something on the frontpage of LessWrong makes it more likely that people who check LessWrong have read it, and moves us closer to the ‘having one conversational locus’ world. It seems to me like you’re making a claim of the form “the only things worth having in that primary conversational locus are the sorts of things where the author is fine handling my sarcastic criticisms”, and I disagree with that, because of the aforementioned models of how progress works.
Uh, how’s that? Anyway, even if we grant that you tried this, well… no offense meant, but maybe you tried it the wrong way? “We tried doing something like this, once, and it didn’t work out, therefore it’s impossible or at least not worth trying” is hardly what you’d call “solid logic”.
This is, indeed, a serious question, and one well worth considering in detail and at length, not just as a tangent to a tangent, deep in one subthread of an unrelated comments section.
But here’s one answer, given with the understanding that this is a brief sketch, and not the whole answer:
Prestige and value attract contributors. Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated, where may be found not a graphomanic torrent of “content” but a scant few gems of true insight and well-tested intellectual innovations, and then “my essay on <topic> was posted on Less Wrong, and even they found no fault with it” becomes a point of pride, and “curated on Less Wrong” becomes a mark of distinction. When you can say to someone, “I think your writing on <topic> is good enough for Less Wrong” and have that be a credible and unusual compliment, you will easily be able to find contributors. When you’ve created a culture where you can post on Less Wrong and there, get the best, most insightful, most no-nonsense, cuts-to-the-heart-of-the-matter criticism, people who are truly interested in perfecting their ideas will want to post here, and to submit to scrutiny.
Not so easy, I regret to say…
See above for why authors would want to do this. As for “a class of dedicated curators who would rewrite their posts”, I never suggested anything remotely like this, and would never suggest it.
Dynamic RSS feeds (or, to be more precise, the tagging and dynamic-listing infrastructure that would enable dynamic RSS feeds) would handily solve this problem as well. This is definitely a “there is a technical solution which cuts right through the Gordian knot of social problems” case.
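To make concrete what that infrastructure might amount to, here is a minimal sketch under assumptions of my own: a “dynamic RSS feed” is just a saved tag query rendered as an RSS document, so that any filtered view of the site becomes something a reader can subscribe to. The Post type, the feed function, and the example tags and titles below are all hypothetical and do not correspond to any actual LessWrong or GreaterWrong code.

```python
# Hypothetical sketch only: a "dynamic RSS feed" as a tag query rendered as RSS.
# Nothing here reflects a real LessWrong/GreaterWrong API; names are illustrative.
from dataclasses import dataclass, field
from xml.sax.saxutils import escape


@dataclass
class Post:
    title: str
    url: str
    tags: set = field(default_factory=set)


def dynamic_feed(posts, include_tags, title="Filtered feed"):
    """Render an RSS 2.0 feed of the posts matching any of the requested tags."""
    matching = [p for p in posts if p.tags & include_tags]
    items = "\n".join(
        f"    <item><title>{escape(p.title)}</title><link>{escape(p.url)}</link></item>"
        for p in matching
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<rss version="2.0"><channel>\n'
        f"  <title>{escape(title)}</title>\n"
        f"{items}\n"
        "</channel></rss>"
    )


if __name__ == "__main__":
    posts = [
        Post("Meetup announcement", "https://example.org/1", {"announcement"}),
        Post("Notes on deliberate practice", "https://example.org/2", {"practice", "sharp"}),
    ]
    # A reader who wants only "sharp" material would subscribe to the feed this produces:
    print(dynamic_feed(posts, {"sharp"}, title="Sharp posts only"))
```

The point of the sketch is only that once tagging and dynamic listing exist, per-reader feeds fall out almost for free; the hard part is the tagging infrastructure itself, not the RSS rendering.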
Where would you point to as a previous example of success in this regard? I don’t think the golden age of Less Wrong counts, as it seems to me the primary reason LessWrong was ever known as a place with high standards is that Eliezer’s writing and thinking were exceptional enough to draw together a group of people who found it interesting, and that group was a pretty high-caliber group. But it’s not like they came here because of the insightful comments; they came here for the posts, and read the comments because they happened to be insightful (and interested in a particular mode of communication over point-seeking status games). When the same commenters were around, but the good post-writers disappeared or slowed down, the site slowly withered as the good commenters stopped checking because there weren’t any good posts.
There have been a few examples of people coming to LessWrong with an idea to sell, essentially, which I think is the primary group that you would attract by having a reputation as a forum in which only good ideas survive. I don’t recall many of them becoming solid contributors, but note that this is possibly a memory selection effect; when I think of “someone attracted to LW because of the prestige of us agreeing with them” I think of many people whose one-track focuses were not impressive, whereas perhaps someone I respect originally came to LW for those reasons and then had other interests as well.
With regards to the “solid logic” comment, do give us some credit for having thought through this issue and collected what data we can. From my point of view, having tried to sample the community’s impressions, the only people who have said the equivalent of “ah, criticism will make the site better, even if it’s annoying” are people who are the obvious suspects when post writers say the equivalent of “yeah, I stopped posting on Less Wrong because the comments were annoyingly nitpicky rather than focusing on the core of the point.”
I do want to be clear that ‘high-standards’ and ‘annoying’ are different dimensions, here, and we seem to be in a frustrating equilibrium where you see some features of your comments that make them annoying as actually good and thus perhaps something to optimize for (?!?), as opposed to a regrettable problem that is not worth the cost to fix given budgetary constraints. Perhaps an example of this is your comment in a parallel thread, where you suggest that pedantically interpreting the word “impossible” makes conversations more smooth than doing interpretive labor to repair small errors in a transparent way. By the way that I use the word “smooth”, things point in the opposite direction. [And this seems connected to a distinction between double crux and Stalnaker-style conversations, which is a post on my todo list that also hasn’t been written yet.]
Dynamic RSS feeds are the opposite of a solution to this problem; the mechanism that constructs a single conversational locus is broadcast, where everyone is watching the same 9 o’clock news, as opposed to decentralized communication, where different people are reading different blogs and can’t refer to particular bits of analysis and assume that others have come across them before. Contrast the experience of someone trying to discuss the previous night’s Monday Night Football game with another football fan against that of two gamers trying to discuss their previous night’s video gaming with each other; even if they happened to play the same game, they almost certainly weren’t in the same match.
The thing that tagging helps you do is say “this post is more interesting to people who care about life extension research than people who don’t”, but that means you don’t show it to people who don’t care about life extension, and so when someone chats with someone else about Sarah Constantin’s analysis of a particular line of research, the other person is more likely to say “huh?” than if they sometimes get writings about a topic that doesn’t natively interest them through a curated feed.
We might not be talking about the same thing (in technical/implementation terms), as what you say does not apply to what I had in mind. (It’s awkward to hash this out via comments like this; I’d be happy to discuss this in detail in a real-time chat medium like IRC.)
“Pedantically” is a caricature, I think; I would say “straightforwardly”—but then, we have a live example of what we’re referring to, so terminology is not crucial. That aside, I stand by this point, and reaffirm it.
I am deeply skeptical of “interpretive labor”, at least as you seem to use the term.[1] Most examples that I can recall having seen of it, around here, seem to me to have affected the conversation negatively. (For instance, your example elsethread is exactly what I’d prefer not to see from my interlocutors.)
In particular, this—
“doing interpretive labor to repair small errors in a transparent way”
—doesn’t actually happen, as far as I can tell. What happens instead is that errors are compounded and complicated, while simultaneously being swept under the rug. It seems to me that this sort of “interpretive labor” does much to confuse and muddle discussions on Less Wrong, while effecting the appearance of “smooth” and productive communication.
I don’t know… I think it’s at least possible that we’re using the word in basically the same way, but disagree on what effects various behaviors have. But perhaps this point is worth discussing on its own (if, perhaps, not in this thread): what is this “smoothness” property of discussions, and why is it desirable? (Or is it?)
This sounds like a post I’d enjoy reading!
[1] Where is this term even from, by the way…?
https://acesounderglass.com/2015/06/09/interpretive-labor/
This seems like a proposal to make LW contentless, with lots of vacuously true statements.
They should ban you for how you’re interacting right now. I don’t know why they’re taking shit with your dodging the issue, but you either don’t have the ability to figure out when someone is correctly calling you out, or aren’t playing nice. Your brand of bullshit is a major reason I’ve avoided less wrong, and I want it gone. I want people to critique my ideas ruthlessly and not critique me as a person with Deservingness at all. if you think being an asshole is normal, go away. you don’t have to hold back on what you think the problems are, but I sure as hell expect you to say what you think the problems are without implying I said them wrong.
Lahwran, I downvoted your comment because I think it should be costly to write something that lowers the tone like this, but I appreciate you saying that this is the reason you left LW, and you might be right that I’m being too civil relative to the effects Said is directly having.
I’ve put in a bunch of effort to trade models of good discourse, but this conversation is heading towards its close. As I’ve said, if Said writes these sorts of comments in future, I’ll be hitting fairly hard with mod tools, regardless of his intentions. Notice that this brand of bullsh*t is otherwise largely gone from LW since the re-launch in March—Said has been an especially competent and productive individual who has this style of online interaction, so I’ve not wanted to dissuade him as strongly as the rest who’ve left, but my patience has since worn thin on this front, and I won’t be putting up with it in future.
It seems like, having interpreted Vaniver as making an obvious error, you decided to argue at length against it instead of considering that he might have meant something else. This is tedious and is punishing Vaniver for not tediously overspecifying everything.
This attitude makes very little sense.
Suppose that one Alice writes something which I, on the straightforward reading, consider to be definitely and clearly wrong. I read it and imagine two possibilities:
(A) Alice meant exactly what it seems like she wrote.
Presumably, then, Alice disagrees with my judgment of what she wrote as being definitely and clearly wrong. Well, there is nothing unusual in this; I have often encountered cases where people hold views which I consider to be definitely and clearly wrong, and vice-versa. (Surely you can say the same?)
In this case, what else is there to do but to respond to what Alice wrote?
(B) Alice meant something other than what it seems like she wrote.
What might that be? Who knows. I could try to guess what Alice meant. However, that is impossible. So I won’t try. If Alice didn’t mean the thing that it seems, on a straightforward reading, like she meant, then what she actually meant could be anything at all.
But suppose I go ahead and try anyway, I come up with some possible thing that Alice could’ve meant. Do I have any reason to conclude that this is the only possibility for what Alice could’ve meant? I do not. I might be able to think longer, and come up with other possibilities. None of them would offer me any reason to assume that that one is what Alice meant.
And suppose I do pick out (via some mysterious and, no doubt, dubious method) some particular alternate meaning for Alice’s words. Well, and is that correct, then, or wrong? If it’s wrong, then I will argue the point, presumably. But then I will be in the strange position of saying something like this:
“Alice, you wrote X. However, X is obviously wrong. So you couldn’t have meant that. You instead meant Y, probably. But that’s still wrong, and here’s why.”
Have I any reason at all to expect that Alice won’t come back with “Actually, no, I did mean X; why do you say it’s obviously wrong?!”, or “Actually, no, I meant Z!”? None at all. And I’ll have wasted my time, and for what?
This sort of thing is almost always a pointless and terrible way of carrying on a discussion, which is why I don’t and won’t do it.
Consider response A:
“I often successfully guess what people meant; it being impossible comes as a surprise to me. Are you claiming this has never happened to you?”
And response B:
Ah, Said likely meant that it is impossible to reliably infer Alice’s meaning, rather than occasionally doing so. But is a strategy where one never infers truly superior to a strategy where one infers, and demonstrates that they’re doing so such that a flat contradiction can be easily corrected?
[Incidentally, I believe this is the disjunction Benquo is pointing at; you seem to imply that either you interpret Alice literally, or you misinterpret Alice, which excludes the case where you correctly interpret Alice.]
[EDIT: I made a mistake in this comment, where response B was originally [what someone would say after doing that substitution], and then I said “wait, it’s not obvious where that came from, I should put the thoughts that would generate that response” and didn’t apply the same mental movement to say “wait, it’s not obvious that response A is a flat response and response B is a thought process that would generate a response, which are different types, I should call that out.”]
Yes, exactly; response A would be the more reasonable one, and more conducive to a smooth continuation of the discussion. So, responding to that one:
“Impossible” in a social context means “basically never happens, and if it does happen then it is probably by accident” (rather than “the laws of physics forbid it!”). Also, it is, of course, possible to guess what someone means by sheer dumb luck—picking an interpretation at random out of some pool of possibilities, no matter how unlikely-seeming, and managing by chance to be right.
But, I can’t remember a time when I’ve read what someone said, rejected the obvious (but obviously wrong) interpretation, tried to guess what they actually meant, and succeeded. When I’ve tried, the actual thing that (as it turned out) they meant was always something which I could never have even imagined as a hypothesis, much less picked out as the likeliest meaning. (And, conversely, when someone else has tried to interpret my comments in symmetric situations, the result has been the same.)
In my experience, this is true: for all practical purposes, either you understand what someone meant, or it’s impossible to guess what they could’ve meant instead.
This is not what I’m implying, because it’s not what I’m saying and what I’m saying has a straightforward meaning that isn’t this. See this comment. “Literally” is a strawman (not an intentional one, of course, I’m assuming); it can seem like Alice means something, without that necessarily being anything like the “literal reading” of her words (which in any case is a red herring); “straightforward” is what I said, remember.
Edit: I don’t know where all this downvoting is coming from; why is the parent at −2? I did not downvote it, in any case…
A couple more things I think your disjunction is missing.
1) If you don’t know what Alice means, instead of guessing, you can ask.
(alternatively, you can offer a brief guess, and give them the opportunity to clarify. This has the benefit of training your ability to infer more about what people mean). You can do all this without making any arguments or judgments until you actually know what Alice meant.
2) Your phrasing implies that if Alice writes something that “seems to straightforwardly mean something, and Alice meant something else”, that the issue is that Alice failed to write adequately. But it’s also possible for the failure to be on the part of your comprehension rather than Alice’s writing. (This might be because Alice is writing for an audience of people with more context/background than you, or different life experiences than you)
Re: asking: well, sure. But what level of confidence in having understood what someone said should prompt asking them for clarification?
If the answer is “anything less than 100%”, then you just never respond directly to anything anyone writes, without first going through an elaborate dance of “before I respond or comment, let me verify that this is what you meant: [insert re-stating of the entirety of the post or comment you’re responding to]”; then, after they say “yes, that is what I meant”, you respond; then, before they respond to you, they first go “now, let me make sure I understand your response: [insert re-stating of the entirety of your response]” … and so on.
Obviously, this is no way to have a discussion.
But if there is some threshold of confidence in having understood that licenses you to go ahead and respond, without first asking whether your interlocutor meant the thing that it seems like they meant, then… well, you’re going to have situations where it turns out that actually, they meant something else.
Unless, of course, what you’re proposing is a policy of always asking for clarification if you disagree, or think that your interlocutor is mistaken, etc.? But then what you’re doing is imposing a greater cost on dissenting responses than assenting ones. Is this really what you want?
Re: did Alice fail to communicate or did I fail to comprehend: well, the question of “who is responsible for successful communication—author or audience?” is hardly a new one. Certainly any answer other than “it is, to some extent, a collaborative effort” is clearly wrong.
The question is, just how much is “some extent”? It is, of course, quite possible to be so pedantic, so literal-minded, so all-around impenetrable, that even the most heroically patient and singularly clear of authors cannot get through to you. On the other hand, it’s also possible to write sloppily, or to just plain have bad ideas. (If I write something that is wrong, and you express your disagreement, and I say “no, you’ve misunderstood, actually I’m right”, is it fair to say that you’ve failed in your duty as a conscientious reader?)
In any case, the matter seems somewhat academic. As far as I can tell, in the case at hand, I have not misunderstood anything that Benquo said. (Certainly I’ve seen no one posting any corrections to my reading of the OP. Mere claims that I’ve misunderstood, with no elaboration, are hardly convincing!)
This is an isolated demand for rigor. Obviously there’s no precise level of confidence, in percentages, that should prompt asking clarification. As with many things, context matters. Sometimes, what indicates a need to ask for clarification is that a disagreement persists for longer than it seems like it ought to (indicating that there might be something deeper at work, like a misunderstanding). Sometimes, what indicates this is your interlocutor saying something that seems absurd or obviously mistaken. The second seems relevant in the immediate instance, given that what prompted this line of discussion was your taking Vaniver at his word when he said something that seemed, to you, obviously mistaken.
Note that I say “obviously mistaken.” If your interlocutor says something that seems mistaken, that’s one thing, and as you say, it shouldn’t always prompt a request for clarification; sometimes there’s just a simple disagreement in play. But if your interlocutor says something that seems obviously wrong, and at the same time they seem like a generally smart person who isn’t wont to say obviously wrong things, that may indicate that there is something they see that you don’t, in which case it would be useful to ask for clarification.
In this particular case, it seems to me that “good content” could be vacuous, or it could be a stand-in for something like “content that meets some standards which I vaguely have in mind but don’t feel the desire or need to specify at the moment.” It looks like Vaniver, hoping that you would realize that the first usage is so obviously dumb that he wouldn’t be intending it, used it to mean the second usage in order to save some typing time or brain cycles or something (I don’t claim to know what particular standards he has in mind, but clearly standards that would be useful for “solving problems related to advancing human rationality and avoiding human extinction”). You interpreted it as the first anyway, even though it seemed to you quite obviously a bad idea to optimize for “good content” in that vacuous sense. Instead, the fact that it seemed not only wrong, but obviously wrong, should have alerted you to the fact that Vaniver perhaps meant something different, at which point you could have asked for clarification (“What do you have in mind when you say ‘good content’? That seems to me obviously too vacuous to be a good idea. Perhaps you have some more concrete standards in mind and simply decided not to spell them out?”).
“The case at hand” was your misunderstanding of Vaniver, not Benquo.
Hm. After writing this comment I notice I did something of the same thing to you. I interpreted your request for a numerical threshold literally, even though I considered it not only mistaken, but obviously so. Thus I retract my claim (at least in its strong form “any time your interlocutor says something that seems obviously mistaken, ask for clarification”). I continue to think that asking for clarification is often useful, but I think that, as with many things, there are few or no hard-and-fast rules for when to do so; rather, there are messy heuristics. If your interlocutor says something obviously mistaken, that’s sometimes an indication that you should ask for clarification. Sometimes it’s not. I think it probably would have been prudent for you to either ask for clarification from Vaniver, or assume he didn’t mean the vacuous interpretation of “good content.” I think I probably don’t need to ask for clarification about what you meant, it seemed pretty obvious you meant it literally. I realize this seems like a rather self-serving set of judgements. Perhaps it is. I’m not really sure what to do about that right now, or whether and how to revise it.
EDIT: if it turns out you didn’t mean it literally, then obviously I will know how I should revise my judgements (namely I should revise my judgement that I didn’t need to ask you for clarification).
Ikaxas, I would be strong-upvoting your comments here except that I’m guessing engaging further here does more harm than good. I’d like to encourage you to write a separate post instead, perhaps reusing large portions of your comments. It seems like you have a bunch of valuable things to say about how to use the interpretive labor concept properly in discourse.
Thanks for the encouragement. I will try writing one and see how it goes.
Well, the second part of your comment (after the rule) pre-empts much of what I was going to say, so—yes, indeed. Other than that:
Yes, I think this seems like a rather self-serving set of judgments.
As it happens, I didn’t mean my question literally, in the sense that it was a rhetorical question. My point, in fact, was almost precisely what you responded, namely: clearly the threshold is not 100%, and also clearly, it’s going to depend on context… but that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.
Other points:
I have never met such a person, despite being surrounded, in my social environment, by people at least as intelligent as I am, and often more so. In my experience, everyone says obviously wrong things sometimes (and, conversely, I sometimes say things that seem obviously wrong to others). If this never happens to you, then this might be evidence of some troubling properties of your social circles.
That’s still vacuous, though. If that’s what it’s a stand-in for, then I stand by my comments.
Indeed, I could have. But consider these two scenarios:
Scenario 1:
Alice: [makes some statement]
Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?
Scenario 2:
Alice: [makes some statement]
Bob: That’s obviously wrong, because [reasons].
Alice: But of course [straightforward reading] isn’t actually what I meant, as that would indeed be obviously wrong. Instead, I meant [other thing].
You seem to be saying that Scenario 1 is obviously (!!) superior to Scenario 2. But I disagree! I think Scenario 2 is better.
… now, does this claim of mine seem obviously wrong to you? Is it immediately clear why I say this? (If I hadn’t asked this, would you have asked for clarification?) I hope you don’t mind if I defer the rest of my point until after your response to this bit, as I think it’s an interesting test case. (If you don’t want to guess, fair enough; let me know, and I’ll just make the rest of my point.)
I’ve been mulling over where I went wrong here, and I think I’ve got it.
I think this is where I misinterpreted you. I think I thought you were trying to claim that unless there’s some threshold or some clear rule for deciding when to ask for clarification, it’s not worth implementing “ask for clarification if you’re unsure” as a conversational norm at all, which is why I said it was an isolated demand for rigor. But if all you were trying to say was what you said in the quoted bit, that’s not an isolated demand for rigor. I totally agree that there will be false positives, in the sense that misunderstandings can persist for a while without anyone noticing or thinking to ask for clarification, without this being anyone’s fault. However, I also think that if there is a misunderstanding, this will become apparent at some point if the conversation goes on long enough, and whenever that is, it’s worth stopping to have one or both parties do something in the vicinity of trying to pass the other’s ITT, to see where the confusion is.
I think another part of the problem here is that part of what I was trying to argue was that in this case of your (mis?)understanding of Vaniver, it should have been apparent that you needed to ask for clarification, but I’m much less confident of this now. My arguing that, if a discussion goes on long enough, misunderstandings will reveal themselves, isn’t enough to argue that in this case you should immediately have recognized that you had misunderstood (if in fact you have misunderstood, which may not be the case if you still object to Vaniver’s point as I reframed it). My model allows that misunderstandings can persist for quite a while unnoticed, so it doesn’t really entail that you ought to have asked for clarification here, in this very instance.
Anyway, as Ben suggested I’m working on a post laying out my views on interpretive labor, ITTs, etc. in more detail, so I’ll say more there. (Relatedly, is there a way to create a top-level post from greaterwrong? I’ve been looking for a while and haven’t been able to find it if there is.)
I agree the model I’ve been laying out here would suggest that the first scenario is better, but I find myself unsure which I think is better all things considered. I certainly don’t think scenario 1 is obviously better, despite the fact that this is probably at least a little inconsistent with my previous comments. My rough guess as to where you’re going with this is something like “scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way).”
If this is where you are going, I have a couple disagreements with it, but I’ll wait until you’ve explained the rest of your point to state them in case I’ve guessed wrong (which I’d guess is fairly likely in this case).
Basically, yes.
The problem, really, is—what? Not misunderstanding per se; that is solvable. The problem is the double illusion of transparency; when I think I’ve understood you (that is, I think that my interpretation of your words, call it X, matches your intent, which I assume is also X), and you think I’ve understood you (that is, you think that my interpretation of your words is Y, which matches what you know to be your intent, i.e. also Y); but actually your intent was Y and my interpretation is X, and neither of us is aware of this composite fact.
How to avoid this? Well, actually this might be one of two questions: first, how to guarantee that you avoid it? second, how to mostly guarantee that you avoid it? (It is easy to see that relaxing the requirement potentially yields gains in efficiency, which is why we are interested in the latter question also.)
Scenario 1—essentially, verifying your interpretation explicitly, every time any new ideas are exchanged—is one way of guaranteeing (to within some epsilon) the avoidance of double illusion of transparency. Unfortunately, it’s extremely inefficient. It gets tedious very quickly; frustration ensues. This approach cannot be maintained. It is not a solution, inasmuch as part of what makes a solution workable is that it must be actually practical to apply it.
By the way—just why is scenario 1 so very, very inefficient? Is it only because of the overhead of verification messages (a la the SYN-ACK of TCP)? That is a big part of the problem, but not the only problem. Consider this extended version:
Scenario 1a:
Alice: [makes some statement]
Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?
Alice: Wait, what? Why would that be obviously wrong?
Bob: Well, because [reasons], of course.
So now we’ve devolved into scenario 2, but having wasted two messages. And gained… what? Nothing.
Scenario 2—essentially, never explicitly verifying anything, responding to your interpretation of your interlocutor’s comments, and trusting that any misinterpretation will be inferred from your response and corrected—is one way of mostly guaranteeing the avoidance of double illusion of transparency. It is not foolproof, of course, but it is very efficient.
Scenarios 1 and 2 aren’t our only options. There is also…
Scenario 3:
Alice: [makes some statement]
Bob: Assuming you meant [straightforward reading], that is obviously wrong, because [reasons].
Note that we are now guaranteed (and not just mostly guaranteed) to avoid the double illusion of transparency. If Bob misinterpreted Alice, she can correct him. If Bob interpreted correctly, Alice can immediately respond to Bob’s criticism.
There is still overhead; Bob has to spend effort on explaining his interpretation of Alice. But it is considerably less overhead than scenario 1, and it is the minimum amount of overhead that still guarantees avoidance of the double illusion of transparency.
Personally, I favor the scenario 3 approach in cases of only moderate confidence that I’ve correctly understood my interlocutor, and the scenario 2 approach in cases of high confidence that I’ve correctly understood. (In cases of unusually low confidence, one simply asks for clarification, without necessarily putting forth a hypothesized interpretation.)
Scenarios 2 and 3 are undermined, however—their effectiveness and efficiency dramatically lowered—if people take offense at being misinterpreted, and demand that their critics achieve certainty of having correctly understood them, before writing any criticism. If people take any mis-aimed criticism as a personal attack, or lack of “interpretive labor” (in the form of the verification step as a prerequisite to criticism) as a sign of disrespect, then, obviously, scenarios 2 and 3 cannot work.
This constitutes a massive sacrifice of efficiency of communication, and thereby (because the burden of that inefficiency is borne by critics) disincentivizes lively debate, correction of flaws, and the exchange of ideas. What is gained, for that hefty price, is nothing.
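A toy way to see the overhead comparison across the three scenarios (this little model and its numbers are my own illustration, not anything asserted above): suppose Bob misreads Alice with probability p, and that under scenario 2 a misreading is only caught from Bob’s reply with probability q. Then scenario 1 always pays the verification round-trip, scenario 2 is cheapest but lets a fraction p·(1−q) of misreadings slip through undetected, and scenario 3 pays extra only when a misreading actually occurs, while still catching all of them.

```python
# Illustrative toy model only (assumptions of mine, not claims from the thread):
# p = probability Bob misreads Alice; q = probability a misreading is caught
# from Bob's reply when he does not state his interpretation explicitly.
def toy(scenario: str, p: float, q: float) -> dict:
    if scenario == "1":  # always verify first: ask, confirm/correct, then respond
        return {"avg_messages": 3.0, "undetected_misreads": 0.0}
    if scenario == "2":  # respond directly; rely on Alice inferring any misread
        return {"avg_messages": 1.0 + 2.0 * p * q, "undetected_misreads": p * (1.0 - q)}
    if scenario == "3":  # respond, stating the assumed reading up front
        return {"avg_messages": 1.0 + 2.0 * p, "undetected_misreads": 0.0}
    raise ValueError(scenario)


for s in "123":
    print(s, toy(s, p=0.2, q=0.8))
```

The only point of the sketch is that scenario 3 buys the same guarantee as scenario 1 at a cost that scales with the actual rate of misreadings, rather than being paid on every exchange.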
After quite a while thinking about it I’m still not sure I have an adequate response to this comment; I do take your points, they’re quite good. I’ll do my best to respond to this in the post I’m writing on this topic. Perhaps when I post it we can continue the discussion there if you feel it doesn’t adequately address your points.
Sounds good, and I am looking forward to reading your post!
Indeed there is. You go to the All view or the Meta view, and click the green “+ New post” link at the upper-right, just below the tab bar. (The new-post link currently doesn’t display when viewing your own user page, which is an oversight and should be fixed soon.)
Ah, thanks!
Your disjunction is wrong.
EDIT: oops, replied to the wrong comment.
How?
Spurious binary between one way things really seem, and the many ways one might guess. Even the one way it seems to you is in fact an educated guess.
That’s not a spurious binary, and in any case it doesn’t make the disjunction wrong. Observe:
Let P = “Alice meant exactly what it seems like she wrote.”
¬P = “It is not the case that Alice meant exactly what it seems like she wrote.”
And we know that P ∨ ¬P is true for all P.
Is “It is not the case that Alice meant exactly what it seems like she wrote” the same as “Alice meant something other than what it seems like she wrote”?
No, not quite. Other possibilities include things like “Alice didn’t mean anything at all, and was making a nonsense comment, as a sort of performance art”, etc. But I think we can discount those.
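Spelling out that last step as I read it (the labels Q and R are my own, added only for clarity):

```latex
\[
\neg P \;\Longleftrightarrow\; Q \lor R,
\qquad\text{where}\qquad
\begin{aligned}
Q &\equiv \text{``Alice meant something other than what it seems like she wrote'',}\\
R &\equiv \text{``Alice didn't mean anything at all (nonsense, performance art, etc.)''.}
\end{aligned}
\]
```

So P ∨ ¬P unpacks to P ∨ Q ∨ R, and discounting R leaves the working disjunction P ∨ Q: either Alice meant the straightforward reading, or she meant something else.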