34. Obey these rules before you obey grammarians, who say things like “Don’t split infinitives” or “Don’t begin sentences with And or But” and “Don’t end a sentence with a preposition.”
Real grammarians, i.e. linguists who study the grammar of English as it is, teach us that these aren’t actually rules of grammar anyway, so much as prescriptions that were made up out of whole cloth for various reasons and that never had much to do with the way English was spoken or written. Here, for example, is an index of postings on Language Log (a group blog run by several professional linguists) about the split-infinitive issue. (The well-known story of this silly prescription was that it was decided in the 18th century that, since you can’t split infinitives in Latin [Latin infinitives are a single word], you shouldn’t split them in English either.)
Relatedly, the passive in English has a bad reputation that is not very well deserved. See here for a full explanation by the author of the Cambridge Grammar of the English Language.
You’d think this was just so much nitpicking—and to some extent it is—but understanding these issues fully can help you make better rhetorical use of English. This is particularly true of the passive—the article I linked above explains how passive and active versions of the same clause help us place emphasis in a sentence exactly where it will do us the most good. (As such, I think the strongest version of your point 13 that I could endorse would be “Understand clearly the difference between active and passive, and choose between them advisedly.”)
One more point which I raise not least because it’s a stunningly entertaining read: the same author’s (Geoff Pullum’s) “The Land of the Free and The Elements of Style” (PDF), an utter demolition of the grammar advice given in Strunk and White’s book. This is NOT to say that S&W’s stylistic advice should be thrown out as well, but Pullum certainly establishes that (a) they have absolutely no idea what they’re talking about where grammar is concerned, and that (b) they follow almost none of their own grammatical or stylistic prescriptions, so the whole thing should be taken with a grain of salt. Read Pullum’s article if you enjoy a well-deserved poison-pen book review and would like to learn a few things about English grammar in the process.
I hesitate to counter your nitpicking with more nitpicking, but I do agree that “understanding these issues fully can help you make better rhetorical use of English”. And so, I’d like to correct some of what you write about the split infinitive. The story is somewhat more subtle and interesting.
The well-known story of this silly prescription was that it was decided in the 18th century that, since you can’t split infinitives in Latin [Latin infinitives are a single word], you shouldn’t split them in English either.
This well-known story is actually a myth that has no factual basis. It is not true that the prohibition against split infinitives was decided in the 18th century (they started debating it mid-19th century), and more importantly none of the grammarians railing against it in those times based their arguments on anything to do with Latin. Never happened. The story seems to be a modern 20th-century invention, and has spread widely among those who oppose prescriptive grammarians because it makes them look very silly. It is repeated in many popular articles and books (e.g. Pinker’s The Language Instinct), but for all that is completely untrue.
The interesting question, then, is—why did prescriptive grammarians of the 19th century start railing against the split infinitive, whereas the grammarians of the 18th century didn’t much care about it? And the answer is, in the 18th century the split infinitive largely wasn’t there. There are some examples we can find going back all the way to the 14th century, but they are rare examples. In fact, if you just read some random 18th century prose, you’re likely to quickly run into phrases that sound a little awkward to the modern ear, because they seem to intentionally avoid splitting the infinitive. But those authors didn’t try to write awkwardly or intentionally avoid the split infinitive (which wasn’t known as a prohibition). They were using the conventions of their time in which it was a rarity.
In the 19th century the split infinitive started occurring more often (perhaps became a fad of sorts), and that’s why the grammarians noticed it. Ever since then, despite all their efforts, it has only grown more popular and accepted. And yet minding your split infinitives is not bad advice to a writer (although wholesale rejection is decidedly silly), because, when overused, they tend to sound gimmicky and tinny (to forestall the obvious objection “anything is bad when overused”: true, but split infinitives get there faster. You can’t easily go wrong with sentences filled with “to X Y-ly”, but do just a few “to Y-ly X” in a sequence, and it begins to look weird).
(I also disagree with your praise of Pullum’s persistent critique of S&W; there’s much criticism that can be made of that book, but it deserves criticism made in good faith. This blog post (not by me) offers a few clear examples of what I found distasteful in Pullum’s bombastic approach.)
Real grammarians, i.e. linguists who study the grammar of English as it is, teach us that these aren’t actually rules of grammar anyway, so much as prescriptions that were made up out of whole cloth for various reasons and that never had much to do with the way English was spoken or written.
But do also note that a lot of people do believe those prescriptions to be valid, and view breaking them as low status. All the “singular they is fine” blog posts in the world are irrelevant if using singular they will annoy half your readers.
Of course, I tend to use singular they anyway. It’s often the best alternative and I doubt that many people in my likely target audience will really care. But you should still know the biggest things that will annoy people, so that if you use them, it will be out of conscious choice and not of ignorance.
16. Know your intended audience. Learn how they think and what they like to read. Tailor your writing to them.
Could stand more emphasis, in my opinion; this seems to be the overarching goal which subsumes the other advice. If your intended audience doesn’t like in medias res, for instance, don’t do it.
I’m confused. Was grouchymusicologist’s comment significantly different prior to editing? I don’t see any issues with the way it is now. (I also don’t see anything that isn’t covered in Intro to Linguistics, but the links are good resources and the material generally bears repeating for a wider audience.)
I’m confused. I wouldn’t call the above comment an example of some of the clearer writing on this site, but I don’t find that anything about it significantly impedes my comprehension.
Although come to think of it, I’ve heard more or less the same points before, so maybe my perception of its clarity is corrupted by prior knowledge.
Was it the construction of the paragraph that you found confusing, or the assumed prior knowledge of various grammatical disputes (splitting infinitives, passive vs. active, singular they)?
You’re not helping to clarify what aspect of the comment made it seem like “Yes, yes, but [long, extremely detailed nitpick in academic-ese]” for people who didn’t perceive it that way.
But my eyes cross and I clench my fists a little when comments consist of “Yes, yes, but [long, extremely detailed nitpick in academic-ese].” …
Rationality and clear thinking should be as basic as Dick and Jane.
Provisionally agree in the general sense, but… should linguistics? (And what about physics?) I guess my objection is: if someone has an academic nitpick, why shouldn’t it be phrased in the dialect of academia?
A lot of things (most things) on LW are about rationality and clear thinking, but some are about (and require) specialized knowledge. Conflating the two subjects by applying the same standards of discourse seems counterproductive.
Rationality and clear thinking should be as basic as Dick and Jane.
There is a cost to simplicity in terms of precision. There’s a lot to be said about finding ways to convey your ideas with “beautiful simplicity”—in the way often attributed to Feynman—but some ideas just cannot be reduced to such a level, and some of those ideas are important.
Case in point: the differences between what a frequentist means by “probability” and what a Bayesian means by “probability”. The existential significance of the lack of curvature to the universe. (Sure, I could say, “Why it’s a big deal that spacetime is flat”—but that’s conveying a different range of meanings than the other statement, which, if I hadn’t already ‘primed’ you to that same understanding, might’ve led you to another conclusion.)
MWI, Aumann’s Agreement Theorem, Great Filter concerns for existential risk, anthropic arguments in general, Bayes’s Theorem in the non-finite case. But even these are not in general high-priority issues for rationality. I think it is fair to say that most of the important ideas can have bumpersticker-size statements. But the level of unpacking required may be so large that the only reason the bumpersticker form seems to do anything useful is the illusion of transparency.
If you want the “back cover blurb” for a 600-page book, that’s an entirely sensible request… but it seems weird to criticize a 600-page book on the grounds that it isn’t as accessible as a back-cover blurb. Back-cover blurbs can exist in addition to the books; they needn’t be instead of.
Agreed.

What I challenge is the idea that most posts/comments here ought to make good cover blurbs.
If I need a cover blurb, it seems more productive to say “Hey, I need a cover blurb, any recommendations?” than to point to arbitrary contributions and say “This isn’t a very good cover blurb.”
Ok. If they are that large, say a one-paragraph blurb, then I really don’t think there’s anything generally discussed here that could not, if carefully phrased, get the primary points across, provided someone is willing to read the paragraph and then actually think about it.
Off the top of my head, the first thing that comes to mind is: supergoals and how to assess them. Second: the process of figuring out how to parse a true utility function from a fake utility function.
Rationality is—or should be—for regular people, and very few regular people need to worry about the curvature of the universe in an average day.
Requiring that rationality steer clear of edge cases limits its usefulness to the point of being almost entirely without value.
To relate this more directly: that flat-spacetime thing is very relevant to understanding how “something” can come from “nothing”. Which touches on how we all got here—a very important question, existentially speaking. One that can have an impact on even the ‘ordinary’ person’s ‘average day’. After all, if it turns out there’s no reason for anyone to believe in a God, then many of the things many people do or say on a daily basis become… extraneous at best.
Furthermore: one of the things that instrumental rationality as an approach needs to have in its “toolkit” is the ability to deeply examine thoughts, ideas, and events in advance and from those examinations create heuristics (“rules of thumb”) that enable us to make better decisions. That requires the use of sometimes very ‘technical’ turns of phrase. It’s simply unavoidable.
That gets all the more true when you’re trying to convey a very precise thought about a very nuanced topic. The thing is, regardless of where one looks in life there are more levels of complexity than we normally pay attention to. But that doesn’t make those levels of complexity irrelevant; it just means that we abstract that complexity away in our ‘average’ lives. Enter said heuristics.
Part of instrumental rationality as an approach, I believe, is the notion of at least occasionally breaking down into their constituent parts the various forms of complexity we usually ignore, in order to try to come up with better abstractions with which to ignore said complexity when it shouldn’t be a focus of our attention. I’ve gotten in “trouble” here on lesswrong for making similar statements before, however (though, to add nuance, that was more about whether generalizations are appropriate in a given ‘depth’ of conversation).
“ever”: within the projected remaining longevity of anyone currently alive.
“average person”: A sufficient portion of people who are no more than 1 standard deviation away from the mode of any given manner of behavior as to be representative of the whole.
-- that being said: no, no I do not.
A different set of definitions:
“ever”: throughout the remainder of history
“an average person”: at least one person who is validly described as ‘average’ at the time it happens
-- Yes, yes I do.
Even explaining that took more nuance than you’d like, I suspect. Please note how radically different the two statements are, even though they both conform very closely to what you said. THIS is why nuance is sometimes indispensable.
Within our lifetimes, conversational speech will not resemble a legal document.
Not all conversations, no—but if an average person is unprepared for legalese then he’d better always have a lawyer with him when he signs anything, ever. This has an unhappy context for our conversation: is there a rationality-equivalent of a lawyer?
Could you do me a favor and elaborate? One thing I know for sure is that the quicker I’m writing, the longer my sentences are (a terrible habit). But I don’t know if that’s what you’re talking about or if it’s something else.
No apology needed, I appreciate the feedback. My comments often come out looking longer or wordier than they seemed while I was composing them, and I’ll try to remember that tendency and keep a lid on it when possible.
I’m not being sarcastic. Sometimes writing in a way that’s easy for other people to understand is just hard. Speaking for myself, normally when my own comments aren’t clear it’s because I’ve spent as much time as I’m willing to spend on writing a comment trying to come up with clearer ways to convey my idea, not because it feels gross or because I’m not trying. (For example, I rewrote that last sentence at least 4 times and it’s still pretty clunky.) This seems to come as second nature to some people, the rest of us have to struggle a bit.
None of this is intended to detract from your point. Clearer writing is better.
Yeah, what saturn said, pretty much. And as comments from Desrtopa and pedanterrific in this thread suggest, not everyone finds my writing as opaque as you do. If I can make my writing 10% clearer by spending double the effort on it, I’m only occasionally going to think that’s a good tradeoff (particularly when the writing in question is blog comments and not, say, my professional work).
Forgive me, but this seems like a little bit of an overreaction. You’re the only one who’s called me out for writing style (although I have no trouble believing that others have thought the same thing and not said it). Frankly, I don’t comment much, but when I do, my comments tend to be reasonably highly rated.
The incomprehensible-to-outsiders thing strikes me as a reach. LW by all appearances is growing rapidly without noticeable worsening in the quality of discourse or community, which is a remarkable accomplishment. When outsiders do complain about LW being unapproachable, it’s not because of people like me writing long sentences. It’s because of jargon, a lot of shared background that takes time to catch up on, and the novelty of some of the ideas.
I’ve already said I will make a reasonable effort to do better. So, respectfully, with that promise, I think I’ve shouldered enough responsibility for improving colloquy around here for the time being.
(Because I don’t know how well in control of my tone I am, I want to clarify that I appreciate your feedback on my commenting style, and I very much do not want to come across as annoyed or snippy.)
It sounds like you’re implying that a typical comment/post on LW should be accessible, in terms of rhetoric and content, to everyone on the Internet. That idea, I dismiss out of hand.
The principle of charity moves me to look for an alternative reading. The best one I can come up with is that there’s some threshold of accessibility that you have in mind, which you assert a typical LW comment/post should and does not achieve.
So, OK. Can you be somewhat more concrete about what that threshold is? For example, can you point to some examples of writing you think just-meets that threshold?
In the summer between high school and college, I took a couple of courses at a parochial school. At some point some of the other students said something, not unkindly, about the way I talked. I asked them what they meant, specifically. They nearly fell over laughing. After a couple of repetitions of my question and laughter, one of them managed to get out that they wouldn’t ever have said “specifically”.
I explained that I could hear the words they used, but I didn’t know how I could tell what words they didn’t use.
I don’t remember what was mentioned (in a different conversation) as a respect-worthy SAT score, I just remember being shocked and horrified at how low it was and drawing on reserves of tact to (I hope) not show how I felt.
In retrospect, I now know that it’s possible to acquire a feeling for what vocabulary set people use. It was also the only school or summer camp environment I was in (it got better in college) where people didn’t harass me, and I wish I had observed enough to get some idea of what made the difference.
Ultimately, I don’t think actual plain talk (in other words, not just using shorter words and sentences, but really communicating to a wider audience) can be done without empirical knowledge. I’m willing to bet a small amount that “plain talk” is the wrong thing to call it.
(nods) Yeah, I sympathize. I am famous locally for the phrase “I have long since resigned myself to the fact that I’m the sort of person who, well, says things like ‘I have long since resigned myself to’.”
I’m willing to bet a small amount that “plain talk” is the wrong thing to call it.
Mostly I think it’s not an “it”; there are dozens of different “plaintalks”. Communicating successfully to any audience requires knowing a fair amount about that specific audience. When Gabriel (above) talks about plain talk, he means his particular formulation of it, which will be different from other people’s.
I don’t remember what was mentioned (in a different conversation) as a respect-worthy SAT score, I just remember being shocked and horrified at how low it was and drawing on reserves of tact to (I hope) not show how I felt.
My friend had just gotten to college, and was half listening to his randomly assigned roommates talking about their SAT scores. He overhears: “Yeah, I got a 790”. “Holy shit!” my friend interjected. “That’s fantastic! Which section?”
I’m pretty sure it’s more like the 0.1%. I went to a fairly competitive private university (one that consistently makes the top 50 schools list in the US). Nevertheless, I was briefly anointed with my SAT score as a nickname freshman year, after mistakenly assuming that it wouldn’t stand out that much and being willing to tell people what it was.
At my high school, someone retook the SAT after he got a 1580 and not a 1600; someone who got a perfect score on the PSAT retook the SAT too. (I’m not sure what her original SAT score was; it’s more likely she bubbled incorrectly, or had some similar problem, than that she bubbled correctly and got a score much below 1600.)
I’m skeptical of this story. Even taking for granted that this was when the test was still normalized to 1600 as the max, if one looks at even a mediocre state school, a total of 790 would clearly be at the very bottom. Note that in this data, the bottom 1 percent is slightly over 400 on each section. So someone scoring in that range is possible but extremely unlikely. A 790 total is around the 15th percentile for anyone taking the test, but the very bottom don’t generally go to real colleges at all.
The average SAT score for a men’s basketball player at that school is 916, for football it is 926, over 250 points lower than the average of non-athletes. Consider that there are about 100 football players per school, and not all excel at athletics enough that admission departments change their standards for them equally. If 50 of them average 1050 (about bottom 20th percentile), the other 50 would have to average 790 for the average for all of them to be as low as 920. If 90 average as high as 940 (about bottom 5th percentile), the other ten would have to average 790 for their collective average to be 925. A single student, who might or might not only be a marginal football player, who scored 1140 (not an outlandishly high score, 40th percentile at that school) would raise the football average about two points.
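The weighted-average arithmetic above can be checked mechanically. A minimal sketch (the player counts and scores here are the hypotheticals from the paragraph above, not real admissions data):

```python
def group_average(groups):
    """Weighted mean over (count, mean) subgroups."""
    total = sum(n * mean for n, mean in groups)
    count = sum(n for n, _ in groups)
    return total / count

# 50 players averaging 1050 plus 50 averaging 790 -> team average 920
assert group_average([(50, 1050), (50, 790)]) == 920

# 90 players averaging 940 plus 10 averaging 790 -> team average 925
assert group_average([(90, 940), (10, 790)]) == 925

# One 1140 scorer among 100 players raises a 926 average by about 2 points
assert round((1140 - 926) / 100, 1) == 2.1
```

The point the assertions illustrate: a published team average in the low 900s is consistent with a sizable subgroup scoring far lower, because a few moderately high scorers pull the mean up.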
Considering that average football player SAT scores are tracked and schools desire their admissions standards to be perceived as high, both as part of the NCAA certification process and to justify their money-making programs, Goodhart’s law should probably be applied an additional time. Not only are SAT scores imperfect proxies for intelligence, average SAT scores for a sport are imperfect proxies of their admission standards, which are probably even lower than implied. This means it is very likely that some individuals have far less than the average program SAT score.
Lowest was 200 per section, and that was when it was out of 1600. So 400 was the lowest possible.
Perhaps someone considering a three section test said “600 is the lowest possible” to someone who applied that to what they considered a two section test, and concluded “300 is the lowest per section”, which you picked up.
Thank you for providing examples—that makes it much easier to understand what you’re proposing.
If those are examples of writing that just-meets the target threshold, then I agree with you completely that the writing on LW—especially in comments, like what you were replying to initially—completely fails to even approach that threshold.
I also estimate that most contributors here would have to devote between one and two orders of magnitude more time to even get in the same ballpark as the threshold.
1. Look for good examples of plain-enough talk on LW.
1b. If I can’t find any, lower my standard of “plain enough” and try again.
2. Upvote those examples.
3. Comment on those examples, praising their plain-talk-ness. Be as specific as I can about what makes them plain talk and why that’s good. Suggest ways to make them even more plain talk.
4. When I find enough examples, write a discussion post that praises them as exemplifying plain talk and demonstrates why that’s good.
5. When I find subsequent examples, upvote with an “upvoted for plain talk” comment and a link to that post.
6. When people ask for feedback on their writing, suggest specific ways to make it more plain talk; include a link to that post.
Perhaps also, find examples of otherwise good posts that are not plain talk and attempt to paraphrase in plain talk. We need some protocol to mitigate offense, though.
Yeah, that’s tricky. For example, I considered pointing out that a plainer-talk version of “Any suggestions as to how that work might be incentivized?” might be “How do I encourage people to do that?” but wasn’t sure how that would be taken.
In general, the sentiment we want to convey seems to be, “That was interesting, informative, and precise. Here’s an attempt to make it more approachable:”
Though one good thing about this approach is that if other people don’t consider my plaintalkified version of X to be superior to X, and I do, that can be very educational… I may discover, for example, that what I consider to be virtues of plain talk aren’t universal and I’ve been other-optimizing all along.
Thanks for the interesting comment and my apologies for having passed along an evident falsehood.
I once had a professor that insisted that the construction “X. However, Y” was grammatically incorrect and forbade anyone in her class from using it.
The mind, it boggles.
Agree with all this. Style: Lessons in Clarity and Grace also has pretty decent coverage of what you say above.
Also, I’ve removed the comma after “grammarians,” which compactly addresses some of your “nitpicks.”
Why was this downvoted?
Edit: Why was this downvoted?
.
I’m confused. Was grouchymusicologist’s comment significantly different prior to editing? I don’t see any issues with the way it is now. (I also don’t see anything that isn’t covered in Intro to Linguistics, but the links are good resources and the material generally bears repeating for a wider audience.)
I’m confused. I wouldn’t call the above comment an example of some of the clearer writing on this site, but I don’t find that anything about it significantly impedes my comprehension.
Although come to think of it, I’ve heard more or less the same points before, so maybe my perception of its clarity is corrupted by prior knowledge.
I seem to have read your mind. (Three seconds!)
.
Was it the construction of the paragraph that you’re found confusing, or the assumed prior knowledge of various grammatical disputes (splitting infinitives, passive vs. active, singular they)?
.
You’re not helping to clarify what aspect of the comment made it seem like “Yes, yes, but [long, extremely detailed nitpick in academic-ese]” for people who didn’t perceive it that way.
.
Provisionally agree in the general sense, but… should linguistics? (And what about physics?) I guess my objection is: if someone has an academic nitpick, why shouldn’t it be phrased in the dialect of academia?
A lot of things (most things) on LW are about rationality and clear thinking, but some are about (and require) specialized knowledge. Conflating the two subjects by applying the same standards of discourse seems counterproductive.
.
There is a cost to simplicity in terms of precision. There’s a lot to be said about finding ways to convey your ideas with “beautiful simplicity”—in the way often attributed to Feynman—but some ideas just cannot be reduced to such a level, and some of those ideas are important.
Case in point: the differences between what a frequentist means by “probability” and what a Bayesian means by “probability”. The existential significance of the lack of curvature to the universe. (Sure, I could say, “Why it’s a big deal that spacetime is flat”—but that’s conveying a different range of meanings than the other statement, which, if I hadn’t already ‘primed’ you to that same understanding, might’ve led you to another conclusion.)
.
MWI, Aumann’s Agreement Theorem, Great Filter concerns for existential risk, anthropic arguments in general, Bayes’s Theorem in the non-finite case. But even these are not in general high-priority issues for rationality. I think it is fair to say that most of the important ideas can have bumper-sticker-size statements. But the level of unpacking may be so large from the bumper-sticker forms that the only reason the bumper-sticker form seems to do anything useful is just illusion of transparency.
.
If you want the “back cover blurb” for a 600-page book, that’s an entirely sensible request… but it seems weird to criticize a 600-page book on the grounds that it isn’t as accessible as a back-cover blurb. Back-cover blurbs can exist in addition to the books; they needn’t be instead of.
.
Agreed.
What I challenge is the idea that most posts/comments here ought to make good cover blurbs.
If I need a cover blurb, it seems more productive to say “Hey, I need a cover blurb, any recommendations?” than to point to arbitrary contributions and say “This isn’t a very good cover blurb.”
.
Cool; glad we got that cleared up.
As for Blurb Ninjas… see comment elsewhere for my thoughts on how to encourage that.
Ok. If they are that large, say a one paragraph blurb, then I really don’t think there’s anything generally discussed here that could not if carefully phrased get the primary points across if someone is willing to read the paragraph and then actually think about it.
.
Off the top of my head, the first thing that comes to mind is: supergoals and how to assess them. Second: the process of figuring out how to parse a true utility function from a fake utility function.
.
Requiring rationality to be restricted to an aversion to edge-cases limits its usefulness to the point of being almost entirely without value.
To relate this more directly: that flat-spacetime thing is very relevant to understanding how “something” can come from “nothing”. Which touches on how we all got here—an existentially important question, and one that can have an impact on even the ‘ordinary’ person’s ‘average day’. After all, if it turns out there’s no reason for anyone to believe in a God, then many of the things many people do or say on a daily basis become… extraneous at best.
Furthermore: one of the things that instrumental rationality as an approach needs to have in its “toolkit” is the ability to deeply examine thoughts, ideas, and events in advance and from those examinations create heuristics (“rules of thumb”) that enable us to make better decisions. That requires the use of sometimes very ‘technical’ turns of phrase. It’s simply unavoidable.
That gets all the more true when you’re trying to convey a very precise thought about a very nuanced topic. The thing is, regardless of where one looks in life there are more levels of complexity than we normally pay attention to. But that doesn’t make those levels of complexity irrelevant; it just means that we abstract that complexity away in our ‘average’ lives. Enter said heuristics.
Part of instrumental rationality as an approach, I believe, is the notion of at least occasionally breaking down into their constituent parts the various forms of complexity we usually ignore, in order to try to come up with better abstractions with which to ignore said complexity when it shouldn’t be a focus of our attention. I’ve gotten in “trouble” here on LessWrong for making similar statements before, however (though, to add nuance, that was more about whether generalizations are appropriate at a given ‘depth’ of conversation).
.
… Defining a few terms:
“ever”: within the projected remaining longevity of anyone currently alive.
“average person”: A sufficient portion of people who are no more than 1 standard deviation away from the mode of any given manner of behavior as to be representative of the whole.
-- that being said: no, no I do not.
A different set of definitions:
“ever”: throughout the remainder of history
“an average person”: at least one person who is validly described as ‘average’ at the time it happens
-- Yes, yes I do.
Even explaining that took more nuance than you’d like, I suspect. Please note how radically different the two statements are, even though they both conform very closely to what you said. THIS is why nuance is sometimes indispensable.
.
Not all conversations, no—but if an average person is unprepared for legalese then he’d better always have a lawyer with him when he signs anything, ever. This has an unhappy context for our conversation: is there a rationality-equivalent of a lawyer?
Relevant
The Order of Silent Confessors, maybe?
.
It is by my will alone that I set my mind in motion.
.
So far.
.
It might have been worthwhile to split it into several posts, perhaps?
Could you do me a favor and elaborate? One thing I know for sure is that the quicker I’m writing, the longer my sentences are (a terrible habit). But I don’t know if that’s what you’re talking about or if it’s something else.
.
No apology needed, I appreciate the feedback. My comments often come out looking longer or wordier than they seemed while I was composing them, and I’ll try to remember that tendency and keep a lid on it when possible.
.
I assume there are also limits to the amount of cognitive effort anyone wants to spend writing comments.
.
I’m not being sarcastic. Sometimes writing in a way that’s easy for other people to understand is just hard. Speaking for myself, normally when my own comments aren’t clear it’s because I’ve spent as much time as I’m willing to spend on writing a comment trying to come up with clearer ways to convey my idea, not because it feels gross or because I’m not trying. (For example, I rewrote that last sentence at least 4 times and it’s still pretty clunky.) This seems to come as second nature to some people; the rest of us have to struggle a bit.
None of this is intended to detract from your point. Clearer writing is better.
Yeah, what saturn said, pretty much. And as comments from Desrtopa and pedanterrific in this thread suggest, not everyone finds my writing as opaque as you do. If I can make my writing 10% clearer by spending double the effort on it, I’m only occasionally going to think that’s a good tradeoff (particularly when the writing in question is blog comments and not, say, my professional work).
.
Forgive me, but this seems like a little bit of an overreaction. You’re the only one who’s called me out for writing style (although I have no trouble believing that others have thought the same thing and not said it). Frankly, I don’t comment much, but when I do, my comments tend to be reasonably highly rated.
The incomprehensible-to-outsiders thing strikes me as a reach. LW by all appearances is growing rapidly without noticeable worsening in the quality of discourse or community, which is a remarkable accomplishment. When outsiders do complain about LW being unapproachable, it’s not because of people like me writing long sentences. It’s because of jargon, a lot of shared background that takes time to catch up on, and the novelty of some of the ideas.
I’ve already said I will make a reasonable effort to do better. So, respectfully, with that promise, I think I’ve shouldered enough responsibility for improving colloquy around here for the time being.
(Because I don’t know how well in control of my tone I am, I want to clarify that I appreciate your feedback on my commenting style, and I very much do not want to come across as annoyed or snippy.)
.
It sounds like you’re implying that a typical comment/post on LW should be accessible, in terms of rhetoric and content, to everyone on the Internet. That idea, I dismiss out of hand.
The principle of charity moves me to look for an alternative reading. The best one I can come up with is that there’s some threshold of accessibility that you have in mind, which you assert a typical LW comment/post should and does not achieve.
So, OK. Can you be somewhat more concrete about what that threshold is? For example, can you point to some examples of writing you think just-meets that threshold?
A sad story about plain talk.....
In the summer between high school and college, I took a couple of courses at a parochial school. At some point some of the other students said something, not unkindly, about the way I talked. I asked them what they meant, specifically. They nearly fell over laughing. After a couple of repetitions of my question and laughter, one of them managed to get out that they wouldn’t ever have said “specifically”.
I explained that I could hear the words they used, but I didn’t know how I could tell what words they didn’t use.
I don’t remember what was mentioned (in a different conversation) as a respect-worthy SAT score, I just remember being shocked and horrified at how low it was and drawing on reserves of tact to (I hope) not show how I felt.
In retrospect, I now know that it’s possible to acquire a feeling for what vocabulary set people use. It was also the only school or summer camp environment I was in (it got better in college) where people didn’t harass me, and I wish I had observed enough to get some idea of what made the difference.
Ultimately, I don’t think actual plain talk (in other words, not just using shorter words and sentences, but really communicating to a wider audience) can be done without empirical knowledge. I’m willing to bet a small amount that “plain talk” is the wrong thing to call it.
(nods) Yeah, I sympathize. I am famous locally for the phrase “I have long since resigned myself to the fact that I’m the sort of person who, well, says things like ‘I have long since resigned myself to’.”
Mostly I think it’s not an “it”; there are dozens of different “plaintalks”. Communicating successfully to any audience requires knowing a fair amount about that specific audience. When Gabriel (above) talks about plain talk, he means his particular formulation of it, which will be different from other people’s.
.
Picking a register appropriate to my audience will move that audience.
You gots to talk to people in their language.
.
I wouldn’t, no. If I want to preach to the unconverted pagans, I do best to learn their language first.
.
My friend had just gotten to college, and was half listening to his randomly assigned roommates talking about their SAT scores. He overheard: “Yeah, I got a 790”. “Holy shit!” my friend interjected. “That’s fantastic! Which section?”
“What do you mean which section?”
It’s things like that which make me mentally apply the ‘We Are The 1%’ slogan… to IQ.
I’m pretty sure it’s more like the 0.1%. I went to a fairly competitive private university (one that consistently makes the top 50 schools list in the US). Nevertheless, I was briefly anointed with my SAT score as a nickname freshman year, after mistakenly assuming that it wouldn’t stand out that much and being willing to tell people what it was.
At my high school, someone retook the SAT after he got a 1580 instead of a 1600, and someone who got a perfect score on the PSAT retook the SAT too. (I’m not sure what her original SAT score was; it’s more likely she bubbled incorrectly, or had some similar problem, than that she bubbled correctly and got a score much below 1600.)
That’s also a quote from “Perks of Being A Wallflower”, incidentally. Which doesn’t mean it’s not a true story.
I’m skeptical of this story. Even taking for granted that this was when the test was still normalized to 1600 as the max, if one looks at even a mediocre state school, a total of 790 would clearly be at the very bottom. Note that in this data, the bottom 1 percent is slightly over 400 for both sections. So someone scoring in that range is possible but extremely unlikely. This is around the 15th percentile for anyone taking the test, but the very bottom don’t generally go to real colleges at all.
The average SAT score for a men’s basketball player at that school is 916, for football it is 926, over 250 points lower than the average for non-athletes. Consider that there are about 100 football players per school, and not all of them excel enough at athletics for admissions departments to relax their standards equally. If 50 of them average 1050 (about the bottom 20th percentile), the other 50 would have to average 790 for the average for all of them to be as low as 920. If 90 average as high as 940 (about the bottom 5th percentile), the other ten would have to average 790 for their collective average to be 925. A single student, who might or might not be only a marginal football player, who scored 1140 (not an outlandishly high score, 40th percentile at that school) would raise the football average about two points.
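The weighted-average arithmetic in the comment above can be sanity-checked in a few lines. This is just a sketch: the figures come from the comment itself, and the group splits (50/50, 90/10) are the commenter’s hypotheticals, not real data.

```python
def team_average(groups):
    """Average score across groups given as (player_count, group_average) pairs."""
    total_players = sum(n for n, _ in groups)
    total_points = sum(n * avg for n, avg in groups)
    return total_points / total_players

# If 50 players average 1050, the other 50 must average 790
# for the team average to come out at 920:
print(team_average([(50, 1050), (50, 790)]))  # 920.0

# If 90 players average 940, the remaining 10 must average 790
# for the team average to be 925:
print(team_average([(90, 940), (10, 790)]))  # 925.0

# One player scoring 1140 rather than the 926 average moves a
# 100-player average by (1140 - 926) / 100, i.e. about two points:
print((1140 - 926) / 100)  # 2.14
```

The check confirms the comment’s figures: averaging is linear in the group sizes, so each hypothetical split pins down what the remaining players must have scored.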
Considering that average football player SAT scores are tracked and schools desire their admissions standards to be perceived as high, both as part of the NCAA certification process and to justify their money-making programs, Goodhart’s law should probably be applied an additional time. Not only are SAT scores imperfect proxies for intelligence, average SAT scores for a sport are imperfect proxies of their admission standards, which are probably even lower than implied. This means it is very likely that some individuals have far less than the average program SAT score.
That’s an excellent set of points. I clearly underestimated the chance of such an event occurring.
Is it even possible to get a 790 total? I thought the lower bound was 900!
Lowest was 200 per section, and that was when it was out of 1600. So 400 was the lowest possible.
Perhaps someone considering a three-section test said “600 is the lowest possible” to someone who applied that to what they considered a two-section test, concluded “300 is the lowest per section”, and you picked that up.
Oh, okay. (I’m looking it up on the wiki now; I actually wasn’t aware it used to be a 1600 point scale.)
Never mind, then. So 790 would be… 13th percentile. Ouch.
(Wikipedia gives 890 as the lowest point on the chart here, though it is for the new system.)
.
Thank you for providing examples—that makes it much easier to understand what you’re proposing.
If those are examples of writing that just-meets the target threshold, then I agree with you completely that the writing on LW—especially in comments, like what you were replying to initially—completely fails to even approach that threshold.
I also estimate that most contributors here would have to devote between one and two orders of magnitude more time to even get in the same ballpark as the threshold.
.
Sure.
1. Look for good examples of plain-enough talk on LW.
1b. If I can’t find any, lower my standard of “plain enough” and try again.
2. Upvote those examples.
3. Comment on those examples, praising their plain-talk-ness. Be as specific as I can about what makes them plain talk and why that’s good. Suggest ways to make them even more plain talk.
4. When I find enough examples, write a discussion post that praises them as exemplifying plain talk and demonstrates why that’s good.
5. When I find subsequent examples, upvote with an “upvoted for plain talk” comment and a link to that post.
6. When people ask for feedback on their writing, suggest specific ways to make it more plain talk; include a link to that post.
Perhaps also, find examples of otherwise good posts that are not plain talk and attempt to paraphrase in plain talk. We need some protocol to mitigate offense, though.
Yeah, that’s tricky. For example, I considered pointing out that a plainer-talk version of “Any suggestions as to how that work might be incentivized?” might be “How do I encourage people to do that?” but wasn’t sure how that would be taken.
In general, the sentiment we want to convey seems to be, “That was interesting, informative, and precise. Here’s an attempt to make it more approachable:”
.
Though one good thing about this approach is that if other people don’t consider my plaintalkified version of X to be superior to X, and I do, that can be very educational… I may discover, for example, that what I consider to be virtues of plain talk aren’t universal and I’ve been other-optimizing all along.
I don’t suppose you’ve considered becoming a bad-ass Iron-Age hoplite?
Considered.
This particular comment seemed just fine to me...