Please observe the following distinction:
“All X are not Y”
is not the same as
“Not all X are Y.”
In your case, you are claiming that no signalling behaviours are bad. You probably intended to say that at least some signalling behaviours are not bad.
Actually, Stabilizer may not be making any such claim. There’s a linguistic phenomenon where the population can basically be split into people who can take a sentence like “All X are not Y” and only get the interpretation “No X are Y”, and people who can get both that interpretation and also “[not all] X are Y”. I would be willing to wager that Stabilizer is in the latter group, since it’s pretty clear from the post that they’re not trying to claim that no signalling behaviour is bad.
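To make the two readings concrete, here is a standard first-order gloss (only an illustration of the scope difference, treating X and Y as predicates; nothing in the thread hinges on the exact formalisation):
\[ \text{``No X are Y''}:\ \forall x\,(X(x) \rightarrow \lnot Y(x)) \]
\[ \text{``Not all X are Y''}:\ \lnot\forall x\,(X(x) \rightarrow Y(x)) \;\equiv\; \exists x\,(X(x) \land \lnot Y(x)) \]
The surface string “All X are not Y” underdetermines where the negation takes scope, which is exactly the ambiguity at issue.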
Well yes, and the latter group is just mistaken, which is what I’m pointing out.
They’re not; that’s not how language works. I can agree that there are better ways to express oneself that are not ambiguous, but calling an interpretation “mistaken” when it is perfectly fine for a decent chunk of the population is pointlessly prescriptivist.
It is not pointless at all. When there is one way that is unambiguous, and another that creates an unnecessary ambiguity, then the ambiguous way may reasonably be considered wrong, and people who use it corrected as a way to improve the language.
In practice, human language isn’t precision-oriented technical jargon.
That’s a bug, not a feature. ;)
Actually, it just might be a feature.
You know, just between you and me, I sometimes worry that there is a naive view loose out there — most students come to linguistics believing it, and there appear to be some professional linguists who regard it as central and explanatory — that language has something to do with purposes of efficiently conveying information from a speaker to a hearer. What a load of nonsense. I’m sorry, I don’t want to sound cynical and jaded, but language is not for informing. Language is for accusing, adumbrating, attacking, attracting, blustering, bossing, bullying, burbling, challenging, concealing, confusing, deceiving, defending, defocusing, deluding, denying, detracting, discomfiting, discouraging, dissembling, distracting, embarrassing, embellishing, encouraging, enticing, evading, flattering, hinting, humiliating, insulting, interrogating, intimidating, inveigling, muddling, musing, needling, obfuscating, obscuring, persuading, protecting, rebutting, retorting, ridiculing, scaring, seducing, stroking, wondering, … Oh, you fools who think languages are vehicles for permitting a person who is aware of some fact to convey it clearly and accurately to some other person. You simply have no idea.
Geoff Pullum
Very well, I will thus ignore any information in your comment.
Bah. Joseph Conrad picked English for its interesting ambiguities!
Perhaps you’ll find this interesting; it touches on how language works and corrects your apparent misconception that it’s all about usage:
http://esr.ibiblio.org/?p=737
That is an interesting essay. For me, Raymond’s arguments don’t really stand up. This is the core of his argument that the “popular usage” position, apparently common among linguists, is not well-grounded:
At the bottom of it, for most people, is the belief that popular usage always wins in the end, so why fight it? But this isn’t actually even remotely true; as far back as Middle English, academic grammarians imported Latin and French words into English wholesale, and they often displaced more popular “native” words. The anti-populist effect of class stratification has been taken over in our time by mass media, especially television and movies, which have enormous power to ratify minority usages and pronunciations and make them normative.
It misses the fact that the academic grammarians and mass media he mentions are an influential part of the popular-usage process, not in opposition to it. News networks don’t announce that their usages are normatively correct: it is (a certain segment of) the population who make that argument.
If there’s no distinction to be made between elite prescription and mass usage, then what is the point of appealing to “common usage via Google” at all? By your argument, I’m just as much an “influential part of the popular-usage process” as the Google results that were being used as an argument against my position. Either there is a distinction between common and elite usage, or there isn’t. If there is, we can argue about which is more important in what circumstances. If not, then we’re back to arguing about function and ambiguity.
No, there is a distinction here, but it’s not between common and elite usage. It’s about whether the authority is normatively correct even when the people disagree. If an authority is against a usage and most people continue using it, most linguists (holding the “popular usage” position) will be for that usage. If an authority is against a usage and most people are also against it (whether influenced by the authority or not), most linguists will be against it.
I’m just as much an “influential part of the popular-usage process”
Yes! If you’re influential, that is. Google certainly is.
If an authority is against a usage and most people continue using it, most linguists (holding the “popular usage” position) will be for that usage.
ESR’s argument is precisely that this is not true; that linguists will generally not approve of a popular usage as against an elite usage when, and only when, the elite usage is less ambiguous. Do you have any data (anecdotes will do, that’s what he’s basing his assertion on) that shows otherwise?
If I told you that most linguists think that Strunk and White is a load of crap, would that help? Or how about that most linguists I know will happily admit that these days there’s little or no difference between “fewer” and “less”, or between complementiser “which” and “that”, because the vast majority of people don’t make a principled distinction between the two? I’m pretty sure I’ve also heard at least one of them using “less” in a classroom context that prescriptively ought to have been “fewer”.
(Actually, I’m not sure if there was ever a really principled distinction between fewer and less—it seems like one of those things that teachers have always been complaining about our misuse of.)
Well, “most linguists” is a phrase that really cries out for some Wiki tags. “Citation needed”, “who”, and “weasel words” come to mind. That aside, I do not see what Strunk and White has to do with it; they were giving advice on writing style, not on how to express yourself unambiguously. As for “fewer” and “less”, and “which” and “that”, I don’t see where these gave rise to any actual precision of language. Saying ‘fewer people’ is not actually needed to inform you that people are countable; you already know that. So the alleged additional information is redundant. Which is, indeed, why people don’t bother with the distinction, and why linguists merely catalog the usage. Your examples are quite different both from the original “not all are/all are not” distinction and from the ones in the essay, and thus don’t actually carry your point.
You ask for anecdotal evidence and then demand citations when given some? I’m tapping out of this conversation for good.
You did not give anecdotes; you made assertions. There’s a difference. If I say “Person such-and-such, who is a linguist, told me this-and-that”, this is anecdotal evidence that linguists hold such a position. If I say “Most linguists think”, that is assertion.
I see. In that case, let me rephrase: every member of the class of linguists that I am aware of, including but not limited to the ones on Language Log, the ones at my old department and the ones at my current department, thinks that Strunk and White and other similar prescriptivism is a load of crap and is in favour of usage-based grammaticality.
I also request that any subsequent comments I make in this thread be downvoted, because I am clearly having problems disengaging.
Again, what do Strunk and White have to do with it? They were giving advice on writing style, saying “If you say it this way your readers will like your writing better”, not “This is the correct way to say it”. Now perhaps they gave bad advice; it is a point on which reasonable men might differ, but what of it? To beat up on Strunk and White may be popular, but it has nothing to do with prescriptivism in linguistics.
As for the wiki tags, Language Log provides some examples. I don’t think these examples have to include the precision aspect of the question to support the claim that Raymond is over-reaching in his attack on the “popular usage” position.
In common usage (based on a Google search for “all * are not *”) you are wrong: in fact, most usages of the phrase seem to mean “not all X are Y”. Probably the phrase is ambiguous, but then we should not use it at all, and either say “No X are Y” or “Not all X are Y”. And in that case it is silly to criticize a use of the phrase which you admit you have correctly parsed.
Most people also understand “if” to mean “if and only if”; it does not follow that we ought not to correct such ambiguous and context-dependent use. I’m down with common usage in most cases, but not when it comes to making logical distinctions in writing. There is a place for prescriptivist precision in language, and this is it.
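For clarity, the distinction being leaned on here, in the same illustrative spirit as the gloss above:
\[ \text{``Q if P''}:\ P \rightarrow Q \]
\[ \text{``Q if and only if P''}:\ (P \rightarrow Q) \land (Q \rightarrow P) \]
Reading a bare “if” as “if and only if” tacks on the converse implication, which the speaker may not have intended.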
Nope. “All that glisters is not gold.” You are probably implicitly assuming that “not” in English only negates what’s after it and not what’s in front of it, but English isn’t that simple—cf. “You must not do X” (where “not” negates “do”) and “You need not do X” (where “not” negates “need”).
Fixed. Thanks.
A quirk of the English language means that the former is often interpreted as though it were the latter (IOW, the scope of a negation in English isn’t always everything after it in the sentence and nothing in front of it—e.g. “All that glisters is not gold”). To unambiguously express the former meaning you have to say “No X is Y.”
(Hadn’t seen this had already been discussed—never mind.)