Sneaking in Connotations
Yesterday, we saw that in Japan, blood types have taken the place of astrology—if your blood type is AB, for example, you’re supposed to be “cool and controlled”.
So suppose we decided to invent a new word, “wiggin”, and defined this word to mean people with green eyes and black hair—
A green-eyed man with black hair walked into a restaurant.
“Ha,” said Danny, watching from a nearby table, “did you see that? A wiggin just walked into the room. Bloody wiggins. Commit all sorts of crimes, they do.”
His sister Erda sighed. “You haven’t seen him commit any crimes, have you, Danny?”
“Don’t need to,” Danny said, producing a dictionary. “See, it says right here in the Oxford English Dictionary. ‘Wiggin. (1) A person with green eyes and black hair.’ He’s got green eyes and black hair, he’s a wiggin. You’re not going to argue with the Oxford English Dictionary, are you? By definition, a green-eyed black-haired person is a wiggin.”
“But you called him a wiggin,” said Erda. “That’s a nasty thing to say about someone you don’t even know. You’ve got no evidence that he puts too much ketchup on his burgers, or that as a kid he used his slingshot to launch baby squirrels.”
“But he is a wiggin,” Danny said patiently. “He’s got green eyes and black hair, right? Just you watch, as soon as his burger arrives, he’s reaching for the ketchup.”
The human mind passes from observed characteristics to inferred characteristics via the medium of words. In “All humans are mortal, Socrates is a human, therefore Socrates is mortal”, the observed characteristics are Socrates’s clothes, speech, tool use, and generally human shape; the categorization is “human”; the inferred characteristic is poisonability by hemlock.
Of course there’s no hard distinction between “observed characteristics” and “inferred characteristics”. If you hear someone speak, they’re probably shaped like a human, all else being equal. If you see a human figure in the shadows, then ceteris paribus it can probably speak.
And yet some properties do tend to be more inferred than observed. You’re more likely to decide that someone is human, and will therefore burn if exposed to open flame, than carry through the inference the other way around.
If you look in a dictionary for the definition of “human”, you’re more likely to find characteristics like “intelligence” and “featherless biped”—characteristics that are useful for quickly eyeballing what is and isn’t a human—rather than the ten thousand connotations, from vulnerability to hemlock, to overconfidence, that we can infer from someone’s being human. Why? Perhaps dictionaries are intended to let you match up labels to similarity groups, and so are designed to quickly isolate clusters in thingspace. Or perhaps the big, distinguishing characteristics are the most salient, and therefore first to pop into a dictionary editor’s mind. (I’m not sure how aware dictionary editors are of what they really do.)
But the upshot is that when Danny pulls out his OED to look up “wiggin”, he sees listed only the first-glance characteristics that distinguish a wiggin: Green eyes and black hair. The OED doesn’t list the many minor connotations that have come to attach to this term, such as criminal proclivities, culinary peculiarities, and some unfortunate childhood activities.
How did those connotations get there in the first place? Maybe there was once a famous wiggin with those properties. Or maybe someone made stuff up at random, and wrote a series of bestselling books about it (The Wiggin, Talking to Wiggins, Raising Your Little Wiggin, Wiggins in the Bedroom). Maybe even the wiggins believe it now, and act accordingly. As soon as you call some people “wiggins”, the word will begin acquiring connotations.
But remember the Parable of Hemlock: If we go by the logical class definitions, we can never class Socrates as a “human” until after we observe him to be mortal. Whenever someone pulls out a dictionary, they’re generally trying to sneak in a connotation, not the actual definition written down in the dictionary.
After all, if the only meaning of the word “wiggin” is “green-eyed black-haired person”, then why not just call those people “green-eyed black-haired people”? And if you’re wondering whether someone is a ketchup-reacher, why not ask directly, “Is he a ketchup-reacher?” rather than “Is he a wiggin?” (Note substitution of substance for symbol.)
Oh, but arguing the real question would require work. You’d have to actually watch the wiggin to see if he reached for the ketchup. Or maybe see if you can find statistics on how many green-eyed black-haired people actually like ketchup. At any rate, you wouldn’t be able to do it sitting in your living room with your eyes closed. And people are lazy. They’d rather argue “by definition”, especially since they think “you can define a word any way you like”.
But of course the real reason they care whether someone is a “wiggin” is a connotation—a feeling that comes along with the word—that isn’t in the definition they claim to use.
Imagine Danny saying, “Look, he’s got green eyes and black hair. He’s a wiggin! It says so right there in the dictionary!—therefore, he’s got black hair. Argue with that, if you can!”
Doesn’t have much of a triumphant ring to it, does it? If the real point of the argument actually was contained in the dictionary definition—if the argument genuinely was logically valid—then the argument would feel empty; it would either say nothing new, or beg the question.
It’s only the attempt to smuggle in connotations not explicitly listed in the definition, that makes anyone feel they can score a point that way.
It is very insensitive to refer to people using the W word the way you do.
Finally someone has come up with a word for those awful people.
If there’s one thing I hate about wiggins, it’s how they use their military genius to utterly destroy their enemies, be they small children or hive-minded bug-eyed monsters.
I just now understood why Eliezer Yudkowsky chose Harry Potter as his character with such qualities: he’s a typical Wiggin!
Mainly I see categories as useful only as “shorthand”, and then only along very particular vectors.
For example, one category that includes people like me (at least along one particular axis) is “female”. To me, all this really means is that I’m physiologically configured in a particular way that influences what kinds of bathrooms I can use and what kinds of doctors I need to see. In that respect, “female” is a useful and descriptive category.
But in other respects, it isn’t at all useful. As a youngster I went through a phase of “not seeing myself as female”—not because I hated my physical form (I don’t) but because everything that people seemed to associate with “females” didn’t fit me. As a female, I was expected (by my surrounding culture) to like pink things, to want to wear dresses, to prefer “domestic” games to construction toys or computers. I was also expected to have certain kinds of social skills I didn’t have, as well as certain cognitive tendencies. Etc. So my initial reaction was to wonder whether or not I was a “real girl” in the first place.
Eventually, though, my brain did a sort of flip and I realized that the problem wasn’t that I was “inauthentically female”, but that people were taking the things about me that were actually female (e.g., aspects of my physiology) and using those things as a basis for assuming a whole bunch of other things. And my reaction was one of indignation at that point: why can’t a Real Girl play with the spaceship Lego and wear pants on special occasions (instead of annoying, uncomfortable dresses)?
So, I’m quite familiar with the phenomenon described in this post. It’s actually kind of surprising to learn (as I have fairly recently) that many people actually memorize a category definition and then attempt to force-fit reality into it, rather than just gathering a lot of data over time and then (when necessary for the sake of practicality or shorthand) applying category-labels to some members of that data set along particular specified vectors.
In other words, if I’m applying for a job, the fact that I have ovaries shouldn’t be a factor (unless the job happens to be something like “egg donor”, but that’s not something I really see myself getting into). But if I suddenly start experiencing weird abdominal pain, the fact that I have ovaries (and other female internals) becomes pertinent information. The category is context-specific and I think a lot of problems come in when people try to “universalize” categories across all contexts and along all vectors.
AnneC: Mainly I see categories as useful only as “shorthand”, and then only along very particular vectors.
All thinking is done in shorthand—the brain can’t actually contain a 1:1 map of the universe—but some hands are much shorter than others; and I quite agree that there’s no point in trying to make someone match the average (or mere stereotype) of the female-human cluster if you already have access to more detailed information about her than that.
What you’re objecting to isn’t so much the shortcut, it seems to me, as the way-too-short, much-shorter-than-necessary cut. “Playing with spaceship Lego” isn’t an atomically detailed description of you either, but it’s more information than “female (human)”.
there’s no point in trying to make someone match the average (or mere stereotype) of the female-human cluster if you already have access to more detailed information about her than that.
I would say that there’s little to no point in trying to make someone match the average/stereotype about someone even if you don’t have access to more detailed information about her than that. Or, at the very least, people should be capable of maintaining awareness of the information that someone is female without their connotations of what “female” means blocking their ability to take in new data about that person.
As an engineer, I’ve come across an unsettling number of assumptions that “engineering needs women because they’re so much better at multitasking and working in groups”—e.g., my presence in engineering is welcomed on the basis of supposed “positives” that I don’t actually provide. So while patting themselves on the back for earning Diversity Points, some folks are simultaneously holding female engineers responsible for providing the Wanted Stereotypical Ability. And meanwhile, the real (and useful) abilities that J. Random Engineer Who Happens To Be Female might provide get ignored, or not believed to exist until the engineer in question performs a sufficient number of Extraordinary Superhero Feats to get branded “The Exception”.
What you’re objecting to isn’t so much the shortcut, it seems to me, as the way-too-short, much-shorter-than-necessary cut. “Playing with spaceship Lego” isn’t an atomically detailed description of you either, but it’s more information than “female (human)”.
Yes, exactly. I remember always feeling kind of weird as a kid because I tended to identify more with male characters in stories (because I had more in common with them interest-wise and personality-wise), and yet, I knew I supposedly belonged to a category called “female”. Hence, I really liked it when I came across “tomboy” characters or girls who were good at math and science (like Meg Murry from “A Wrinkle In Time”), because reading about those characters gave me a bit of a “cognitive dissonance vacation”. I know some people dismiss the impact of fiction on culture, but since fiction is a thing that culture both produces and is influenced by, I have always appreciated it when authors can successfully manage to realistically portray a character that subverts particular stereotypes—such works can have the curious effect of reassuring particular segments of the population that yes, they do, in fact, exist.
Also, this post makes me think of this entry in the TV Tropes wiki: “You Know What They Say About X” (a corollary of which could be Positive Discrimination)
I would say that there’s little to no point in trying to make someone match the average/stereotype about someone even if you don’t have access to more detailed information about her than that.
Oh… sorry for assuming that you’re vulnerable to hemlock, then; I shouldn’t have assumed that without feeding you some.
Perhaps you mean that, in characteristics where humans are known to vary, one should suspend judgment / assume the default probability distribution, rather than assuming the person is known to be average?
Sorry for what seems like nitpicking, but this kind of quiet background categorization is necessary to human cognition. I’m not trying to say “Don’t categorize” but rather, “Since you have no choice but to categorize, do it right.” You can just visualize someone saying, “Oh, I have no choice but to assume that Anne’s a female” and then assuming that you, I don’t know, own 20 pairs of shoes, when this is not so much forbidden categorization as bad categorization—if you say “Anne is a member of the ‘likes spaceship Lego’ class”, that’s also categorization, but it’s more detailed categorization, and it screens off any default (stereotypical?) inferences one might make from the now superseded, higher-level ‘female human’ category. But I’m still licensed to assume you’ve got red blood, because that aspect of the ‘female human’ category hasn’t been overridden.
Again, I think it’s important to see the kind of categorization you dislike as ‘inept categorization’, including attempts to infer from the category things that have already been observed and hence ought properly to be screened off; rather than ‘forbidden categorization’. As you know, “AnneC” itself is a category, since you are not exactly the same person at different times; and a category on a high level of abstraction, because people change quite a bit.
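Screening off can be sketched numerically. In the toy joint distribution below, every number is invented purely for illustration; shoe-ownership is made to depend only on Lego-preference, so conditioning on “likes spaceship Lego” leaves the “female” category with nothing further to add:

```python
from itertools import product

# Invented toy numbers: P(female), P(likes_lego | sex), and
# P(owns_many_shoes | likes_lego) -- shoes depend only on Lego-preference.
p_female = 0.5
p_lego = {"F": 0.2, "M": 0.5}
p_shoes_given_lego = {True: 0.1, False: 0.5}

# Enumerate the full joint distribution over (sex, lego, shoes).
joint = {}
for sex, lego, shoes in product("FM", [True, False], [True, False]):
    p = p_female if sex == "F" else 1 - p_female
    p *= p_lego[sex] if lego else 1 - p_lego[sex]
    p *= p_shoes_given_lego[lego] if shoes else 1 - p_shoes_given_lego[lego]
    joint[(sex, lego, shoes)] = p

def p_shoes_given(pred):
    """P(owns_many_shoes | condition), summing over the joint table."""
    num = sum(p for k, p in joint.items() if pred(k) and k[2])
    den = sum(p for k, p in joint.items() if pred(k))
    return num / den

given_female = p_shoes_given(lambda k: k[0] == "F")                # 0.42
given_female_lego = p_shoes_given(lambda k: k[0] == "F" and k[1])  # 0.10
given_lego_only = p_shoes_given(lambda k: k[1])                    # 0.10
```

The finer category wins: P(shoes | female) is 0.42, but P(shoes | female, Lego) equals P(shoes | Lego) at 0.10; once the more detailed observation is in hand, the sex-based default is screened off.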
Exactly. One thing that I’ve found helps, is to remember to pick up and put down categories based on what particular decision I’m trying to make.
For example, let’s say I’m going to plan a group outing to see a cool sci-fi movie, and I need to decide whether to invite Anne along. (Let’s say I only have 8 tickets, and I want to maximize the chances that the other 7 tickets go to the seven of my friends who will most enjoy the movie, because I’m that kind of maximizer. To further constrain, let’s say the outing’s going to be a surprise, so I can’t just call up Anne and ask her; I have to go on facts I know about her.)
If I know that Anne is female, but don’t know anything about whether she likes spaceship Legos or not, then that’s actually relevant information, and indicates that she might need to go lower down on my list. (This isn’t a chauvinism thing; it’s just a bare fact that females in our culture tend to not like cool sci-fi movies as much as guys. If I don’t like that, I can do something about it, but the moment of deciding how to allocate movie tickets is not the optimal time to do something, given the kind of optimizer I am.)
Now, if I know that Anne likes spaceship Legos, but not that Anne is female, that indicates that they need to go higher on my list. “Liking spaceship Legos” and “liking cool sci-fi movies” tend to correlate pretty strongly.
Now, if I know that Anne likes spaceship Legos, AND I know that Anne is female, that actually places them higher on my list than merely knowing that they like spaceship Legos, even though knowing that Anne is female by itself would place them lower on my list than not knowing. Because my stereotype of “female AND likes spaceship Legos”, as a sub-class, happens to contain cached information about how the “likes spaceship Legos” and “likes sci-fi movies” data happen to clump together inside the “female” super-class.
One of the things that Bayesian analysis has been helping me with, is learning how to back-propagate new information about a particular sub-class into its containing super-class, and then how to forward-propagate the update to the super-class into its remaining sub-classes.
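That ordering can be reproduced with a small worked example. The numbers below are invented solely to illustrate the pattern described above (within the Lego-liking subclass, the correlations clump differently than the base rates suggest); none of them are real statistics:

```python
from itertools import product

# Invented illustrative numbers (not real statistics).
p_female = 0.5
p_lego = {"F": 0.1, "M": 0.4}                      # P(likes spaceship Lego | sex)
p_scifi = {("F", True): 0.95, ("M", True): 0.80,   # P(likes sci-fi | sex, lego)
           ("F", False): 0.20, ("M", False): 0.50}

# Enumerate the joint distribution over (sex, lego, scifi).
joint = {}
for sex, lego, scifi in product("FM", [True, False], [True, False]):
    p = p_female if sex == "F" else 1 - p_female
    p *= p_lego[sex] if lego else 1 - p_lego[sex]
    p *= p_scifi[(sex, lego)] if scifi else 1 - p_scifi[(sex, lego)]
    joint[(sex, lego, scifi)] = p

def p_scifi_given(pred):
    """P(likes sci-fi | condition), summing over the joint table."""
    num = sum(p for k, p in joint.items() if pred(k) and k[2])
    den = sum(p for k, p in joint.items() if pred(k))
    return num / den

base      = p_scifi_given(lambda k: True)                  # 0.4475
know_f    = p_scifi_given(lambda k: k[0] == "F")           # 0.275: lower
know_lego = p_scifi_given(lambda k: k[1])                  # 0.83:  higher
know_both = p_scifi_given(lambda k: k[0] == "F" and k[1])  # 0.95:  highest
```

Knowing “female” alone moves the estimate down (0.275 against a 0.4475 base rate), knowing “likes spaceship Lego” alone moves it up (0.83), and knowing both moves it up further still (0.95), because in this toy table female Lego-likers happen to form an unusually strong sci-fi cluster.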
Perhaps you mean that, in characteristics where humans are known to vary, one should suspend judgment / assume the default probability distribution, rather than assuming the person is known to be average?
Yes. I put notions like “humans are generally vulnerable to Death by Hemlock” in a different class than notions like “Girls don’t like science”. For one thing, the stakes are a lot higher in the former case: you don’t harm a female by not assuming she doesn’t like science, but you might kill a human by feeding them hemlock under the assumption that you “need more data”. There’s plenty of empirical data on the effects of hemlock poisoning in entities you’d likely classify as “human” (for the purpose of this exercise), after all, and it seems pretty clear that hemlock ingestion is much more hazardous than not being subjected to the assumption that you hate science because you have a uterus.
Again, I think it’s important to see the kind of categorization you dislike as ‘inept categorization’, including attempts to infer from the category things that have already been observed and hence ought properly to be screened off; rather than ‘forbidden categorization’.
No argument from me there.
So if we have 100 pieces of information about phenomenon A, then we have 100 separate, weaker or stronger, potential categorisations, each with its own set of potential, weaker or stronger, inferences. All legit and above board, nothing sneaky about it. One could imagine the interactions of these 100 sets of inferences as a multi-dimensional interference pattern, with some nodes glowing brightly as inferences reinforce, others vanishing completely. The 101st piece of information will bring its own potential categorisation and an additional set of potential inferences. The alternative, I suppose, is just buying a whole truckload of hemlock and going round paying calls on all my friends……
Agree, agree, agree, but the fact that we do it so much tells its own story. Big, clumsy categorisation must be a good strategy for not getting eaten by a tiger or finding the herds of woolly mammoth. [Ponders]
PS: 100% wiggin and proud.
I think a recent XKCD is relevant here.
AnneC, I am Russian, but I hate cold weather, I don’t play chess well, and I cannot hold my liquor nearly as well as I should to fit the stereotype. I am fairly sure, though, that statistically speaking, Russians are more tolerant of cold and can drink more, simply as a result of natural selection, and that the percentage of people who play reasonable chess is bigger for historical reasons. You have mentioned how much pressure you felt as a child to fit in with “female” stereotypes, so wouldn’t it be reasonable to assume that, due to this pressure, the percentage of girls who actually like science might be less than the percentage of boys who like science? Boys, who are frequently even encouraged to like science/engineering activities. Intuitively, though, I think that the correlation between “girls” and “don’t like science” is smothered into irrelevancy by the correlation between “people” and “don’t like science”.
Speaking of shortcuts and connotations, it always amazed me, that a single person might “always give money to homeless people” and “hate bums” :)
There’s no law against helping people you hate.
But it does raise interest in one’s motivation.
I started writing a post called ‘smuggling in connotations’ and then I remembered that this post existed. :)
Wow, I’ve been calling this “Argument by Insinuation.” It’s certainly in widespread use and deserves a name.
I note that the smuggled connotations usually aren’t emotionally neutral. Smuggling in “negative connotations” rather than just connotations. It’s similar to ad hominem, but aimed at your opponent’s position rather than at their person. Does applying negative labels sway an audience more powerfully than revealing flaws in an argument? If so, then even more persuasive is to employ subtle smears: smuggled connotations.
Also, perhaps the above example would be clearer if applied to concepts rather than people: remove any conflation with group stereotyping or race bigotry.
It is very easy to forget where the actual difference is located. Imagine, for example, that in some group, on average, boys are 5% more interested in science; that is a fairly useless piece of information if the spread within both girls and boys is very large. I believe humans (only featherless ones, though), on average, are too quick to try to derive answers from too little data: in the example above, knowing that someone is a boy or girl has no practical bearing on whether they are likely to be scientists. One should conclude “not enough data” instead. Saying to one’s brain that something “has this effect, but only very weakly” very often lends the effect too much weight.
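The “small mean difference, large spread” point can be made concrete. Assuming, purely for illustration, that interest scores are normally distributed in both groups with the boys’ mean 5% higher and a standard deviation that dwarfs that gap, a randomly chosen boy is barely more likely than a coin flip to out-score a randomly chosen girl:

```python
from statistics import NormalDist

# Illustrative assumption: boys' mean interest is 5% higher,
# with a spread (sigma = 1.0) far larger than the 0.05 mean gap.
boys = NormalDist(mu=1.05, sigma=1.0)
girls = NormalDist(mu=1.00, sigma=1.0)

# The difference of two independent normals is normal, with the
# means subtracted and the variances added.
diff = NormalDist(mu=boys.mean - girls.mean,
                  sigma=(boys.stdev ** 2 + girls.stdev ** 2) ** 0.5)

# P(random boy scores higher than random girl) = P(diff > 0).
p_boy_higher = 1 - diff.cdf(0)
print(round(p_boy_higher, 3))  # roughly 0.514: barely better than chance
```

So under these assumed numbers, a 5% mean gap predicts an individual’s interest about as well as a coin toss, which is exactly the “not enough data” conclusion above.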
Why am I incapable of seeing green-eyed black-haired people in my mind’s eye? Why do I always see black-eyed green-haired people?
I can only offer a retelling of a retelling of a course on the subject, but the answer seems to be “somewhat”. They are taught to list hyperonyms and hyponyms of whatever it is they are trying to define, and then isolate the most typical ones. Of course, a perfect implementation of this idea alone is not the OED; it’s WordNet.
Apparently, the authors of the NIST Dictionary of Algorithms and Data Structures were quite aware of this approach as well.