Avoiding Jargon Confusion
Previous discussion on jargon:
Against Naming Things (by whales)
Common vs Expert Jargon (by me)
If you’re proposing a new jargon term that is highly specialized, and you don’t want people to misuse it…
…it’s important to also discuss more common concepts that people are likely to want to refer to a lot, and make sure to give those concepts their own jargon term (or refer to an existing one).
Periodically I see people introduce a new concept, only to watch its meaning get diluted, because:
People are motivated to use fancy words to sound smart.
People are motivated to use words to exaggerate, for rhetorical punch or political gain.
People just have multiple nearby concepts that they want to refer to, but don’t have a word for.
Jargon is useful because it lets you cache out complex ideas into simple words, which then become a building block for higher level conversation. It’s less useful if the words get diluted over time.
Examples
Schelling Point
The motivating example was “Schelling Point”, originally intended to mean “a place or thing people could agree on and coordinate around without communicating.”
Then I observed people starting to use “Schelling Point” to mean “any place they wanted to coordinate to meet at.” Initially this was a joke, or it referred to a location that probably would have been a real Schelling point if you hadn’t communicated (i.e. if you want to meet later at a park, saying “the central fountain is the Schelling point”; it’s true that the fountain would have been the natural place to meet if you hadn’t been able to coordinate in advance).
And then people started just using it to mean any random thing, and it got harder to tell who actually knew what “Schelling Point” meant.
Affordances and Signifiers
The Design of Everyday Things is a book, originally published in 1988, which introduced the term “affordance”, meaning basically “an action a design allows you to take.” For example, a lightweight chair can be sat in, or moved around. A heavy chair gives less affordance for lifting.
But the author found that designers were misusing “affordance”, and so in the 2013 edition of the book he introduced a second term, “signifier.”
Affordances exist even if they are not visible. For designers, their visibility is critical: visible affordances provide strong clues to the operations of things. A flat plate mounted on a door affords pushing. Knobs afford turning, pushing, and pulling. Slots are for inserting things into. Balls are for throwing or bouncing. Perceived affordances help people figure out what actions are possible without the need for labels or instructions. I call the signaling component of affordances signifiers.
Designers have practical problems. They need to know how to design things to make them understandable. They soon discovered that when working with the graphical designs for electronic displays, they needed a way to designate which parts could be touched, slid upward, downward, or sideways, or tapped upon. The actions could be done with a mouse, stylus, or fingers. Some systems responded to body motions, gestures, and spoken words, with no touching of any physical device. How could designers describe what they were doing? There was no word that fit, so they took the closest existing word—affordance. Soon designers were saying such things as, “I put an affordance there,” to describe why they displayed a circle on a screen to indicate where the person should touch, whether by mouse or by finger. “No,” I said, “that is not an affordance. That is a way of communicating where the touch should be.
You are communicating where to do the touching: the affordance of touching exists on the entire screen: you are trying to signify where the touch should take place. That’s not the same thing as saying what action is possible.” Not only did my explanation fail to satisfy the design community, but I myself was unhappy. Eventually I gave up: designers needed a word to describe what they were doing, so they chose affordance. What alternative did they have? I decided to provide a better answer: signifiers. Affordances determine what actions are possible. Signifiers communicate where the action should take place. We need both.
Norman, Donald A. The Design of Everyday Things (pp. 13–14). Basic Books. Kindle Edition.
Difficulties
Exaggeration and rhetorical punch are the hardest to fight
People will always be motivated to use the most extreme sounding version of a thing. (See “really”, “verily”, “literally”, as well as “Concussions are an Existential Threat to Football.”)
I’m not sure you can do much about this. But if you’re introducing a new concept that’s especially “powerful sounding”, maybe look for ways to distinguish it from other more generally powerful sounding words. I dunno.
Making things sound good or bad
A related failure is when people want to shift the meanings of words for political reasons, to form an association with something “good” or “bad”. Kaj Sotala said in a previous thread:
It feels like for political concepts, they are more likely to drift because people have an incentive to make them shift. For instance, once it gets established that “gaslighting” is something bad, then people have an incentive to shift the definition of “gaslighting” so that it covers things-that-they-do-not-like.
That way they can avoid the need to *actually* establish that those things are bad: it’s already been established that gaslighting is bad, and it’s easier to shift an existing concept than it is to create an entirely new concept and establish why it is a bad thing. (It’s kind of free-riding on the work of the people who paid the initial cost of establishing the badness.) I would guess that less loaded terms would be less susceptible to it.
I think this is slightly easier to address than “exaggeration.” If you’re creating a word with negative valence (such as ‘gaslighting’), you could introduce other words that also sound bad that apply in more contexts, so that at least the people who want to sneak negative connotations onto things are less tempted to also dilute the language.
You could do similar things in the opposite direction – if you’re creating a word with positive valence that you don’t want people to glom onto, maybe also create other positive-valenced words.
(Some people try to fight this sort of thing by punishing people whenever they misuse words, and… I dunno man, I just don’t think that fight is winnable. Or at least, it seems like we should aim to set things up so that we have to spend less energy on that fight in the first place.)
Most people don’t learn jargon by reading the original source for a term or phrase, they learn it from other people. Therefore one of the best ways to stop your jargon from being misused is to coin it in such a way that the jargon is a compressed representation of the concept it refers to. Authors in this milieu tend to be really bad at this. You yourself wrote about the concept of a ‘demon thread’, which I would like to (playfully) nominate for worst jargon ever coined on LessWrong. Its communicated meaning without the original thread boils down to ‘bad thread’ or ‘unholy thread’, which means that preserving the meaning you wanted it to have is a multi-front uphill battle in snow.
Another awful example from the CFAR handbook is the concept of ‘turbocharging’, which is a very specific thing but the concept handle just means ‘fast electricity’ or ‘speedy movement’. Were it not for the context, I wouldn’t know it was about learning at all. Even when I do have that context, it isn’t clear what makes it ‘turbo’. If it were more commonly used it would be almost instantly diluted without constant reference back to the original source.
For a non-LessWrong example, consider the academic social justice concept of ‘privilege’, which has (or had) a particular meaning that was useful to have a word for. However mainstream political commentary has diluted this phrase almost to the point of uselessness, making it a synonym for ‘inequality’.
It’d be interesting to do a study of, say, 20-50 jargon terms and see how strongly the level of dilution corresponds to degree of self-containment. In any case I suspect that trying to make jargon more self-contained in its meaning would reduce misuse. “Costly Signaling” is harder to misinterpret than “Signaling”, for example.
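As a rough illustration of what the analysis for such a study might look like, here is a minimal sketch in Python. Everything in it is a placeholder: the term list, the self-containment ratings, and the dilution scores are all made up, and a real study would need a principled way to measure both quantities.

```python
# Hypothetical sketch: correlate how self-contained a jargon term is
# with how much its usage has drifted ("dilution"). All numbers below
# are invented placeholders, not real measurements.
from scipy.stats import spearmanr

# term: (self_containment_rating, dilution_score), both on a 0-10 scale
terms = {
    "Schelling point":  (3, 7),
    "costly signaling": (8, 2),
    "demon thread":     (2, 8),
    "steelmanning":     (6, 4),
    "gaslighting":      (4, 9),
}

containment = [scores[0] for scores in terms.values()]
dilution = [scores[1] for scores in terms.values()]

# Rank correlation: the hypothesis predicts a negative value
# (more self-contained terms should dilute less).
rho, p_value = spearmanr(containment, dilution)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```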
Another option might be to use a word without any baggage. For example, Moloch seems to have held onto its original meaning pretty well but then maybe that’s because the source document is so well known.
EDIT: I see The sparkly pink ball thing makes a similar point.
That post is a fairly interesting counterargument, thanks for linking it. This passage would be fun to try out:
My problem with s1 and s2 is that it’s very difficult to remember which is which unless you’ve had it reinforced a bunch of times. I tend to prefer good descriptive names to nondescriptive ones, but certainly nondescriptive names are better than bad names which cause people to infer meaning that isn’t there.
On the s1/s2 thing, there are alternatives and I try to promote them when possible, especially since around these parts people tend to use s1/s2 for a slightly different but related purpose to their original formulation anyway. The alternative names for the clusters (not all the source names line up exactly, though):
s1: near, concrete, id, fast, yin, hot, elephant, unconscious, machine, outside
s2: far, abstract, superego, slow, yang, cold, rider, conscious, monkey/homunculus, inside
I think near/far is the best, but I think we’re stuck with s1/s2 at this point due to momentum.
The fact that there are subtly different purposes for the alternative naming schema could be a strength.
If I’m talking about biases I might talk about s1/s2. If I’m talking about motivation I might go for elephant/rider. If I’m talking about adaptations being executed I’d probably use blue minimising robot/side module.
I’m not sure whether others do something similar but I find the richness of the language helpful to distinguish in my own mind the subtly different dichotomies which are being alluded to.
Data point: I remember that System 1 is the fast, unconscious process by associating it with firstness—it’s more primal than slow thinking. This is probably somewhat true, but it defeats the purpose (?).
Eliezer also mentioned this in his old article on writing advice:
Unfortunately, following Eliezer’s advice seems like it might do the most to create the jargon issues being considered here, because the more readily comprehensible a piece of jargon seems on first hearing it, the more likely it is to be misremembered and misapplied later (though “Schelling point” seems a notable case of something with no false-friend interpretation that gets misused anyway).
You mean to say that deliberate anti-epistemology, which combines dehumanization with anthropomorphism, turns out to be bad?
I do quite agree on the “the best jargon is self-explanatory” thing, just noting that it’s often fairly hard. (I’m interested if you have alternate suggestions for demon thread, although fwiw I find “unholy thread” a bit more intuitive than “uphill battle in snow”, since there are a lot of reasons something might be like an uphill battle in snow, and one feature of the demon thread is “everyone is being subtly warped into more aggressive, hostile versions of themselves”. I agree that connotation is still pretty culture-dependent, though.)
“Uphill battle” is a standard English idiom; such idioms are often fairly nonsensical if you think about them hard enough (e.g., “have your cake and eat it too”), but they get a free pass because everyone knows what they mean.
See, that’s obvious in your mind, but I don’t think it’s obvious to others from the phrase ‘demon thread’. In fact, hearing it put like that, the name suddenly makes much more sense! However, it would never be apparent to me from hearing the phrase. I would go for something like “Escalation Spiral” or “Reciprocal Misperception” or perhaps “Retaliation Bias”.
One thing I like to do before I pick a phrase in this vein, is take the most likely candidates and do a survey with people I know where I ask them, before they know anything else, what they think when they hear the phrase. That’s often steered me away from things I thought conveyed the concept well but actually didn’t.
Flame war. Don’t invent new words ;-)
It’s importantly different from a flame war – a flame war implies things have already gone to hell, and people are all-out hostile at each other.
Escalation Spiral feels closest to what I was aiming for there (although it still feels a bit off to me, or at least I feel like I have a harder time using it in sentences for some reason. It felt kind of important to have the word “thread” in there, or to refer more directly to a forum discussion in some way)
The key point of a demon thread/escalation spiral/whatever is that it means things are subtly but noticeably bending towards confusion and hostility, even when everyone is well intentioned and on the same side, and you can see it happening in advance but it’s still real hard to do anything about.
If your product has subtle differences from existing products, that’s not a benefit. To buyers it’s a cost, and your product is supposed to have some benefit that compensates for that cost. For new words, that benefit is usually clarity, but the words “demon thread” are the opposite of clarity.
Is “escalation spiral” the opposite of clarity?
The whole point of jargon is to point to fine distinctions in things IMO
(not defending “demon thread” as a term, just the necessity of having a phrase for that concept. If I imagine calling a given LW demon thread a “flame war” I imagine people being like “huh? it’s not a flame war?”)
Not objecting to the concept—having more concepts is good. But I think if you want to contribute to language, concepts are less than half of the work. Most of the work is finding the right words and making them work well with other words. Here’s a programming analogy: if you come up with a cool new algorithm and want to add it to a system that already has a billion lines of code, most of your effort should be spent on integrating with the system. Otherwise the whole system becomes crap over time. That’s how I think about these things: coining an ugly new word is affixing an ugly shed to the cathedral of language.
Still curious if “escalation spiral” feels more or less clear.
Also wanted to flag that I think your most recent argument seems quite different from your initial one (i.e. “flame war. don’t invent new words.”)
“Escalation spiral” is mixing two spatial metaphors, both far removed from the thing we’re talking about. That’s too abstract for me: being in a bad online argument doesn’t feel like walking up a spiral staircase. I prefer words that say how I feel about the thing—something like “quarreling”, “petty disagreement”, or “argumentative black hole”.
And for several months before writing the demon thread post, the entire ontology of how I thought about online discussion depended heavily on the demon-thread concept (which I still think is quite important). So, whenever I’d explain why I thought a given interaction was going poorly or how to improve it, I’d first have to explain a bunch of relevant concepts about the ontology, which made it harder to have a conversation.
I don’t know how much double illusion of transparency has been going on. Maybe a lot. But my impression is that I’m now able to refer to “Demon Threads” as a small, bite-sized chunk of an argument, and
a) many people in the discussion have read the post, and even if the phrase was unintuitive to them, they know enough of what it means for me to make my point
b) the worst case scenario is that they think it means “bad thread”, which is, in fact, often good enough. (And in situations where the precise mechanics of demon threads matter, if the subject comes up and people seem to be missing subtleties, there’s a post I can link to now that explains it in more detail.)
[edit: I actually think it’s an important subgoal for the term to gracefully degrade into “bad thread”, which I’m less confident escalation spiral does, although unsure]
(I’ve updated the original demon thread article to begin with a much more succinct and hopefully clear definition at the top. I’ll try out “escalation spiral” and similar terms in conversation and see if they feel like they work, and consider updating the name)
For what it’s worth, I don’t feel like ‘escalation spiral’ is particularly optimal. The concept you’re going for is hard to compress into a few words because there are so many similar things. It was just the best I could come up with without spending a few hours thinking about it.
Seconded; this interpretation didn’t ever occur to me before reading Raemon’s comment just now.
So I’ve run into this issue myself, because sometimes you need to put a name to a concept that no one ever seems to have bothered to name before. Or maybe someone did name the concept in the past, but in a way that got confused by the processes described in the post and comments, such that in the current context a new term is needed. My general preference is to yank in a foreign word with no currency in modern English. Usually I do this by finding some handy Greek or Latin word, since those have extra gravitas among certain circles, but any language will do. I like this because the word already has a meaning that is close to what I want, but also has no meaning at first to your reader, so you get to set that up as you wish; it avoids being misunderstood out of context, since it will instead generate a “what?” response.
Since I’m usually doing this because I need to make a more precise category than any that is handy, in the service of philosophy, I avoid some of the other ways new jargon fails. Although given time all jargon may rot, it seems, so we must forever refresh our jargon on some cycle as its meaning drifts through usage.
Context is everything. You can use precise words (including topic-domain jargon) when talking with people who have some expertise in the topic, and who know the jargon in that technical meaning, EVEN WHILE laypeople are expanding and drifting its use into something much less precise, useful, or even meaningful.
Who cares? You can still use “schelling point” to discuss coordination by unstated shared background knowledge, even if it’s ALSO used to mean any piece of common knowledge.
You can, but then it’ll be unclear whether you’re using the “common” or “true jargon” meaning whenever you could legitimately mean either. (In the OP’s examples, both the common and true-jargon meanings of “Schelling point” were potentially relevant.) Even if you build a reputation for always using the original meanings of words, there will be people who don’t know the original meaning, and people who don’t know of your reputation. Some people will misinterpret you unless you explicitly state “the Schelling point, as in the original sense of an unstated but agreed-upon point” each time you use it for the first time in a given context.
In short, having two words in the same semantic space causes misunderstandings and frustration. You can get around it by essentially assigning the technical term to a longer word (“Schelling point but, you know, the actual one” instead of simply “Schelling point”), but this has its costs. (See: how shorter words feel more fundamental. Calling the rapid-takeoff intelligence explosion “FOOM” was probably wise, naming “coordination failures” Moloch was probably the single most effective way of getting people to fight them, etc.)
Language drift can introduce confusions but it also has advantages. The original definition of a concept is unlikely to be the most useful definition. It is good if words shift to the definitions that the community finds useful. Let me give an example.
Bostrom’s original definition of ‘infohazard’ includes information that is dangerous in the wrong hands: “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” However, most people use infohazard to mean something like “information that is intrinsically harmful to those who have it or will cause them to do harm to others” (this is how it is used in the SCP stories, for example). As Taymon points out, Bostrom didn’t distinguish between “things you don’t want to know” and “things you don’t want other people to know”.
I think the SCP definition is more useful. It’s probably actively good that the definition of infohazard has shifted over time. Insisting on Bostrom’s definition is usually just confusing things.
Was there supposed to be more text here?
I suspect the majority of drift in concepts comes from simple misunderstandings/playing a game of telephone and memetic selection. This leads to more politically charged and simple definitions, without any need for negative intent. I suspect providing other related concepts won’t help with this besides maybe delaying the drift by a few weeks, but it at least seems worth trying.
However, when actually trying to put your suggestions into practice I’m at a loss as to how to do that. For instance, I’m writing a post naming and explaining different types of risk. It’s not clear to me how to make a type of risk sound good. I’m also unsure how to go about introducing other words that are clearly bad in the same post that are more general, without having the post become completely off-topic and rambling.
The “deal with political muddying” problem is definitely the hardest one, and I’m least confident you can do anything about it.
Edit: I mis-remembered something, so I won’t leave false info up. Comment re-written.
I think there’s a related way jargon can get confused, which is where the central example used to convey it is selected for controversy, not accuracy. I have an example, but I’m not sure of it.
Claim: ‘Nudge’ is a fairly general idea, but the most common example used is one that has been selected for controversy rather than centrality to the concept.
I remember once seeing a talk by Cass Sunstein where he expressed irritation with the fact that everyone thinks of ‘nudge’ as the thing where you change organ donation from opt-in to opt-out. I recall him being quite irritated that it is the central example used, and wishing he’d never used it, though I don’t remember precisely what reason he gave at the time.
I looked around, and there’s a post by him here expressing that he prefers ‘mandated choice’ for organ donation, where you don’t opt in or opt out; you’re just forced to explicitly make the decision (i.e. it’s a required question when you renew your driver’s license).
Another example Sunstein uses is ‘putting fruit at eye level’ in a store, or adding the image of a housefly to men’s urinals to ‘improve aim’. I think the issue with the opt-in / opt-out example is that it’s strongly trying to route around your agency to get you to make a choice that isn’t obviously what you want. And the opt-in/opt-out example has been fairly controversial (Muslim groups opposed it in the UK), which I can imagine contributing to it being the most widespread example.
Regarding routing around agency: I know that every time I get an ‘opt-out of this newsletter’ box when signing up on a website, I feel like they’re acting adversarially, in a way I wouldn’t if they’d said “choose from the drop-down whether you’d like our newsletter”.
Can anyone who’s looked into this in depth confirm the above account of ‘nudge’, and whether opt-in / opt-out is non-central and has been selected for non-epistemic reasons?
Huh? I am sufficiently surprised/confused by this example to want a citation.
Edit: The surprise/confusion was in reference to the pre-edit version of the above comment, and does not apply to the current edition.
Sure, I’ll try to find one later today.
Edit: Added some more detail.
I agree that it’s good to avoid fights by making good behaviour more easy and natural, but man, I feel very depressed by the idea that the rationality community can’t coordinate on the norm that one should use words precisely and clearly to preserve useful meanings. Which is to say, I think that the fight could maybe be winnable (at least in certain contexts), but also that if it isn’t it’s worth spending more time thinking about how we could have failed so badly.
I do think it’s a fine, achievable goal for the rationalsphere… in places that have relatively long-term expectations and norms. I think it may be achievable on LW. I don’t think it’s especially achievable on open Facebook and places where the social boundaries are porous and people come and go and don’t have many common expectations (or common knowledge that they have common expectations).
(And, it seems like a lot of good jargon here should be making the rounds outside of the rationalsphere, and you’d want it to be able to survive once it reaches a mass audience)
I would draw a line between “fighting change by punishing defection” and “coordination to maintain a meaning”.