Terminology Thread (or “name that pattern”)
I think there’s widespread assent on LW that the sequences were pretty awesome. Not only do they elucidate a lot of useful concepts, but they provide useful shorthand terms for those concepts which help in thinking and talking about them. When I see a word or phrase in a sentence which, rather than doing any semantic work, simply evokes a positive association in the reader, I have the useful handle of “applause light” for it. I don’t have to think “oh, there’s one of those...you know...things where a word isn’t doing any semantic work but just evokes a positive association in the reader”. This is a common enough pattern that having the term “applause light” is tremendously convenient.
I would like this thread to be a location where people propose such patterns in comments, and respondents determine (a) whether this pattern actually exists and / or is useful; (b) whether there is already a term or sufficiently-related concept that adequately describes it; and (c) what a useful / pragmatic / catchy term might be for it, if none exists already.
I would like to propose a suggested format to make this go more smoothly.
(ETA: feel free to ignore this and post however you like, though)
When proposing a pattern, include a description of the general case as well as at least one motivating example. This is useful for establishing what you think the general pattern is, and why you think it matters. For instance:
General Case:
When someone uses a term without any thought to what that term means in context, but to elicit a positive association in their audience.
Motivating Example:
I was at a conference where someone said AI development should be “more democratic”. I didn’t understand what they meant in context, and upon quizzing them, it turned out that they didn’t either. It seems to me that they just used the word “democratic” as decoration to make the audience attach positive feelings to what they were saying.
When I think about it, this seems like quite a common rhetorical device.
When responding to a pattern, please specify whether your response is:
(a) wrangling with the definition, usefulness or existence of the pattern
(b) making a claim that a term or sufficiently-related concept exists that adequately describes it
(c) suggesting a completely fresh, hitherto-uncoined name for it
(d) other
(ETA: or don’t, if you don’t want to)
Obviously, upvote suggestions that you think are worthy. If this post takes off, I may do a follow-up with the most upvoted suggestions.
General case:
Small differences in the means of normal distributions cause large differences at the tails.
Motivating example:
East Africans are slightly better at distance running than the rest of the world population, so if a randomly-picked Ethiopian and a randomly-picked someone-else compete in a marathon, the Ethiopian has a better chance of winning, but not by very much. But at the extreme right tail of the distribution (i.e. at Olympic-level running competitions), the top runners are almost all Ethiopians and Kenyans.
In my head I call it “threshold amplification” but I wonder if there’s an official name for this.
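A quick numerical sketch of the effect (the 0.5 SD gap between populations and the thresholds are made-up illustrative numbers, not real running data):

```python
from scipy.stats import norm

# Two normal populations with the same SD; B's mean is 0.5 SD higher.
# (Illustrative numbers only.)
for threshold in [1, 2, 3, 4]:
    p_a = norm.sf(threshold, loc=0.0, scale=1.0)  # fraction of A above threshold
    p_b = norm.sf(threshold, loc=0.5, scale=1.0)  # fraction of B above threshold
    print(f"above {threshold} SD: B over-represented {p_b / p_a:.1f}x")
```

The over-representation is only roughly 2x just above the mean but grows to roughly 7x at 4 SD, even though a randomly picked member of B beats a randomly picked member of A only about 64% of the time.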
I would love a name for this too since the observation is important for why ‘small’ differences in means for normally distributed populations can have large consequences, and this occurs in many contexts (not just IQ or athletics).
Also good would be a quick name for the log-normal-distribution-like phenomenon.
The normal distribution can be seen as the sum of lots of independent random variables; so for example, IQ is normally distributed because the genetics is a lot of small additive variables. The log-normal is when it’s the product of lots of independent variables; so it arises in any process where each step is necessary, as has been proposed for scientific productivity with its multiple steps like ideas->research->publication.
The normal distribution has the unintuitive behavior that small changes in the mean or variance have large consequences out on the thin tails. But the log-normal distribution has the unintuitive behavior that small improvements in each of the independent variables will yield large changes in their product, and that the extreme datapoints will be far beyond the median or average datapoints. (‘Compound interest’ comes close but doesn’t seem to catch it because it refers to increase over time.)
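A small simulation makes the contrast concrete (a minimal sketch; the ten uniform factors are arbitrary stand-ins for the independent variables):

```python
import numpy as np

rng = np.random.default_rng(0)
# Ten independent positive factors per "person" (arbitrary stand-ins).
factors = rng.uniform(0.5, 1.5, size=(100_000, 10))

sums = factors.sum(axis=1)       # additive process: approximately normal
products = factors.prod(axis=1)  # multiplicative process: approximately log-normal

for name, x in [("sum", sums), ("product", products)]:
    print(f"{name}: median {np.median(x):.2f}, "
          f"99.9th percentile {np.percentile(x, 99.9):.2f}")
```

The sum’s 99.9th percentile sits only about 30% above its median, while the product’s is roughly twenty times its median: the heavy right tail is the log-normal signature.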
IQ is normally distributed because the distribution of raw test scores is standardized to a normal distribution.
And why was the normal distribution originally chosen? Most of intelligence seems explained by thousands of alleles with small additive effects—and such a binomial situation will quickly converge to a normal distribution.
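A toy version of that convergence (a sketch assuming a purely additive model; the 2,000 loci and equal effect sizes are made up for illustration):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
# Each simulated genome: 2,000 loci, each adding one "point" with probability 0.5.
scores = rng.binomial(n=2000, p=0.5, size=100_000)

# A binomial(n, p) is approximately normal for large n (central limit theorem):
print(f"mean {scores.mean():.1f} (np = 1000), sd {scores.std():.1f} "
      f"(sqrt(np(1-p)) ~ 22.4), skew {skew(scores):.3f} (~0 for a normal)")
```

The simulated scores hug the bell curve almost immediately, which is the usual central-limit argument for why a purely additive genetic architecture would yield a normal distribution.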
The phrase “additive effects” doesn’t make sense except in reference to some metric. If your metric is IQ, then that’s circular.
No, it’s not, because IQ is itself extracted from a wide variety of cognitive measures.
You seem to be claiming that there are some unspecified underlying other metrics of which IQ is simply a linear combination. If so, then IQ is not the ultimate metric. Which doesn’t contradict my claim (claiming that P is not true does not contradict the claim that P → Q). It does raise the question of what those metrics are.
To expand on what I just said: IQ is a factor extracted from a wide variety of cognitive measures, whose genetic component is largely explained by additive effects from a large number of alleles of small effect with important but relatively small nonlinear contributions. That is, intelligence is largely additive because additive models explain much of observed variance and things like the positive manifold of cognitive tests.
Please be more precise in your comments, or stop wasting my time due to your lack of reading comprehension and obtuseness like you did before in my Parable post.
And what are those measures?
As I ALREADY SAID, the word “additive” only makes sense with respect to a particular metric. Saying that intelligence is additive because it’s measured by metrics in which there are additive effects is circular, unless you can show some non-arbitrary source of the metrics. How about you actually address my posts?
Given that YOU are failing to be precise, and to articulate what specifically you find erroneous about my posts, that is a rather hypocritical thing to say. And I don’t think that personal insults are appropriate.
I have posted another response in that thread (even though you refused to respond to my previous one). In short, you are confusing your inability to write a coherent sentence with a lack of reading comprehension on my part, and you need to get the fuck over yourself. If in cases of miscommunication, you’re not willing to even consider the possibility that you are even partly responsible, then you need to find somewhere else to post, because this website is not for people like you.
You appear to be downvoting my posts due to a vendetta against me from another article, which is rather similar to behavior that got another poster banned. I am not entirely clear on what the community standards are here, but it appears to me that you are likely flouting them.
Do you really not know anything like what tests routinely load or anything about the historical development? If the latter, please go consult Wikipedia or one of many books on the topic. And if it’s Socratic bullshit, just make your point already.
No, it’s not circular. If all the cognitive tests have large fractions of variance explained by purely additive factors, then large fractions of variance are explained by purely additive factors. If they didn’t, if for example there were some sort of fixed sum of ‘cognition points’ for every person which are zero-sum spread around various domains like verbal vs spatial, or if there were complex nonlinear relationships, then additive factors wouldn’t explain much of anything in cognitive performance and certainly wouldn’t predict anything in the real world. But they do. The positive manifold exists. The correlations with all sorts of real-world results exist. And the underlying genetics is largely additive for the same reason: the additive models explain a lot of variance in IQ, and hence in real-world outcomes.
There must be many charitable and intelligent people here to read all my stuff despite my inability to write a coherent sentence.
What a peculiar claim; quite aside from my karma, I helped make this website from the start.
You flatter yourself in thinking that your comments aren’t bad enough for other people to downvote them… I don’t bother with mass downvotes of idiots.
I asked for metrics, not tests.
You made a claim. The burden of proof is on you to support it. “Go read a book” is not a valid citation.
So you think that asking questions to clarify a position is “bullshit”?
So, in other words, if a large fraction is additive, then a large fraction is additive. Do you not understand what the word “circular” means?
You’re arguing for a position by contradiction, but your contradiction is only one alternative hypothesis. That is fallacious. Your responses show you don’t even understand what my objection is, and therefore all your attempts at refutation fall flat.
When someone says “if A, then B”, it’s not very honest to quote them as saying B. And what do you mean by “I helped make this website”? Does having a lot of karma give you the right to ignore basic civility? Was this website constructed by going around being rude to people? Or is that a recent development on your part?
I didn’t say that I was dismissing all other hypotheses, only noting that of all the posters, you are the most likely candidate to have downvoted.
Tests yield metrics. More quibbling. Good job there convincing me you’re asking questions in good faith. I can really see that you’ve bothered to read anything on the topic.
Yes, it is, when you’re criticizing an entire century-old well-developed field with an abundance of materials online. At this point, the burden is not on the person talking about intelligence. Go educate yourself, stop wasting my time with your captious quibbling about whether ‘tests’ are ‘metrics’ (to point out your latest crap); if you actually cared about the topic, you wouldn’t be saying any of this, you’d be reading Jensen’s textbooks or hell, even a Wikipedia article.
Given all your previous comments, yes.
I see you didn’t understand the point of that. Think a little harder, and also think a little bit about what circular arguments are. (Hint: they don’t take the form ‘A, therefore, A’.)
Sigh.
Let me try again: when a newcomer and an oldtimer disagree on what is appropriate for a site, when the oldtimer was around before the site existed, helped make it, and is a major contributor by comments, articles, and karma, which is more likely to be correct? I’m thinking… it’s probably not the newcomer, and that arguing that is astoundingly presumptuous of them.
Nice walk back there. ‘I never said he was a communist, I was merely noting he was the most likely candidate to be a communist.’
So to reiterate my previous question—you know, since you’re totally not trolling or anything, and you’re definitely arguing in good faith, and you’re surely not going to reply with just some more rhetoric and attempts to shame or nitpick irrelevant wording, in this thread or others—what is your actual problem with these concepts? Do you have data which refutes the relevant concepts entirely? Or what?
What metric to apply to a test is a completely nontrivial issue, and the fact that you refer to such a crucial issue as “quibbling” shows how little you understand about the issue.
I’m not criticizing the field. I’m asking you to answer a simple question, and you’re refusing.
Simply declaring yourself to not have the burden of proof does nothing.
And so, instead of explaining, you’re simply telling me to “think a little harder”.
“A, therefore A” is a circular argument. Most people put more effort into disguising the circular nature of their arguments, but that doesn’t mean that yours is not circular.
I think it is astoundingly presumptuous for you to dismiss any criticism of your behavior with “I’ve been around here longer than you and have lots of karma”. Your behavior is at blatant odds with what I understand to be the goals of this website. Either you are indeed acting contrary to those goals, or I have a deep misunderstanding about the goals of this website.
I am not walking anything back. I deliberately included the word “appear” in my original post in recognition that this was merely the most likely explanation.
So, it’s “bullshit” when I ask you to clarify what you mean, but it’s okay for you to ask me to clarify what I am saying, even though you’ve made it absolutely clear that you have no intention whatsoever of listening to my point of view, have already made up your mind that I am wrong and refuse to listen to any contrary arguments, interpret everything I say through the filter of presuming bad faith, and are here simply to insult me? A discussion is a cooperative process. I can’t explain something to someone whose motive isn’t to understand, but to attack.
Exactly as predicted. I think we’re done here.
Tell you what, tell me what you meant by “Um, no, because the USSR had no reason to think and be correct in thinking it served a useful role for the USA which meant the threats were bluffs that were best ridden out lest it damage both allies’ long-term goals.” and I’ll try to explain what my issue here is.
And no, we’re not done here. You have been extremely rude, and that needs to be addressed.
No, we’re done. Conversation with you has proven on several topics to be a frustrating waste of my time. I think it’s better for both of us if I simply ignore you from now on on all topics. Maybe you’ll improve, but I doubt it.
I think that “multiplicative” or “geometric” describes such phenomena.
I’ve suspected for a long time that that was the insight Carl Sagan had while high and showering with his wife:
(It is a little interesting, & amusing, to see someone inferring the “invalidit[y] of racism” from an observation more often used as a justification for racial hereditarian attitudes!)
Here’s one that I don’t think has a name—the belief that [some desirable thing] should just happen. For example, the belief that people should just have different emotional reactions than they do, or that a government policy should just have good effects.
Eliezer called this believing in the should-universe.
Incidentally, this expression is very intuitive and has an amazingly low inferential distance. Multiple times IRL and online I have replied to someone “it’s too bad we don’t live in a should-universe” in response to a should-statement, and my reply was instantly understood, without having to explain much, except maybe saying “the should-universe is an imaginary one where what you think should happen actually does, every time”.
https://en.wikipedia.org/wiki/Wishful_thinking or perhaps https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem ?
Related notion: in the humongous SSC post Reactionary Philosophy in an Enormous Planet-Sized Nutshell, the section “Reach for the Tsars” deals with proposals to solve problems which could only be implemented by dictatorial fiat, and describes it as a “czar’s-eye view” solution.
Expecting different emotions than the ones actually observed looks to me like typical mind fallacy.
It may be a typical mind fallacy if the person actually has the emotional habits they’re demanding from other people. Now that I think about it, people sometimes demand that their own emotions should just be different.
However, a statement can include more than one fallacy, and I think fantasies of lack of process can also be in play.
The whole FAI project resulted from Eliezer realizing that a process was needed for AIs to be benevolent rather than a disaster.
As may be obvious, I now think the bias could be named the lack of process bias, though the “it should just happen!” bias might be more intuitive.
I was going to ask, “Do we ever demand emotional habits we don’t have ourselves?”, but then I noticed it was yet another typical mind fallacy on my part.
Meta-comment on this: I had a couple of examples I was going to suggest, but the process of following the above rules made it obvious that they were cases of existing concepts.
General Case:
In an otherwise well-constructed discussion on a subject, the author says something that reveals a significant misunderstanding of the subject, casting doubt on the entire piece, and the ability of the author to think about it sensibly.
Motivating Example:
A few years ago, a lot of public libraries in the UK were closed under austerity measures. Author Philip Pullman (a highly-educated, eloquent and thoughtful man) gave a speech on the subject, which was transcribed and widely circulated online. It was about the non-pecuniary value of libraries, and their value as educational and community resources. It was a very strong speech, but at one point it put forward the proposition that the value of libraries is completely incalculable and beyond measure. This took the wind out of the speech’s sails for me, and my takeaway was “you write and speak very well, but you clearly can’t be trusted to think about this subject in any useful way”.
I experience this quite a lot. I’ll be reading something online, mentally nodding along, thinking “yeah, this makes sense”, and then the author will undermine all their credibility, not by saying something radical or obnoxious or unworkable or ignorant, but by saying something that demonstrates they don’t know how to think properly about the issue.
“Red flag” isn’t exactly what you want but has served me well enough in similar conversations.
That’s similar but possibly not the same as the Gell-Mann amnesia effect.
This could also just be rhetorical. Almost any sufficiently long argument will contain some really wrong or dumb elements, but most will contain some that simply aren’t meant to be taken literally.
I think the format creates a frictional cost that will prevent valuable contributions. Can’t we just post willy-nilly and let the good stuff bubble up to the top of the comments as usual?
If you like. I’ll change “rules” to “suggested format” in the post.
We could call the collection … RationalityTropes! Or maybe something catchier: SmartTropes. Not LWTropes, though, that sounds like something RationalWiki would set up as a sucks-site.
Oh god, it’s following me.
...ahem. Strictly speaking, a trope is a storytelling device—a more-or-less stereotyped pattern that an author uses to express theme or symbolism or compactly illustrate character or plot. (There’s an even narrower definition, too, but that’s the common one.) TV Tropes’ habit of giving real-life examples is therefore improper.
This would be something more general, and I’m not sure English has a word for it.
I don’t think it’s improper. There’s a common notion that humans perceive life (history, politics, society) in terms of narratives. We construct those narratives using stereotypes; in other words, the narratives have common themes and archetypes. There is a lot of commonality to how we parse a book, and how we parse actual events. For the same reason, people reporting on real events deliberately present them as narratives.
So it’s legitimate for TV Tropes to list tropes “in real life”; what they’re really describing is the narrative through which humans perceive (and remember) that real life.
Saying people perceive life in terms of narratives is correct. Describing motifs in those narratives in a quasi-objective way isn’t. The very reason a stereotype is not a fact is that it doesn’t show up in everyone’s internal narrative in a given situation.
I don’t object to finding an example of, say, dramatic irony in William Shirer’s Rise and Fall of the Third Reich, which explicitly is a narrative covering real events. I do object to saying the same thing about World War 2, the real one.
I’m not sure I understand your point. I’m afraid I may be strawmanning or misconstruing your position in my reply. If I do, please point it out.
Certainly, different people can tell very different stories (both internal and external) about the same events. They can perceive different tropes or motifs at work. And when they talk about these events, each will describe them as he sees them.
Any one story can objectively contain a certain motif. Reality itself doesn’t contain motifs, because it’s not a story. And people can disagree about motifs because they tell different (internal) stories about the same set of facts. If that’s what you’re saying, I completely agree.
Also, sometimes it makes sense to try to be as objective as possible and describe facts without fitting any theory or story to them. That’s not the same as saying those stories don’t exist. We just ignore them some of the time.
However:
World War 2 did not exist in reality. All there was, was a huge amount of individual events. It takes human storytelling to join them into the story of a great global war. To say that the Japanese were part of the war, but it only started in 1939 even though they had been at war with China and the USSR for years before that, because Hitler’s invasion of Poland is more narratively important than his invasion of Czechoslovakia… That is pure narrative storytelling.
Facts from the territory are much smaller-scale than WW2; the war itself exists only in our maps, and it’s inherently a human narrative, which means it can legitimately exhibit irony, although of course people can disagree about the irony in a particular story. The territory doesn’t contain irony, but nobody would say it does, because nobody would say an individual event local in space and time is ironic without reference to a larger narrative.
I see no relevant difference between Shirer’s book and anything you or I might say or think about “World War 2”; one is written down and the other is not, that is all.
General case:
When someone posts links from webpage X, which can be refuted from webpage Y (or vice versa), and so on, without adding anything themselves to the discussion.
Motivating example:
I’ve often seen things posted on climate change, lifted directly from http://wattsupwiththat.com/ , that can be refuted from http://www.skepticalscience.com/ , which can often be re-refuted from the original website, and so on. Since we’re just letting the websites talk to each other, and neither poster has any relevant expertise, this seems a pointless waste.
An argument that halts in disagreement (or fails to halt in agreement) because the interlocutors are each waiting for the other to provide a skillful assessment of their own inexpertly-referenced media sounds a lot like a deadlock condition between software processes in computer science. Maybe there’s a more specific type of deadlock, livelock, resource starvation, …, in the semantic neighborhood of your identified pattern.
Dropping references, while failing to disclaim your ability to evaluate the quality and relevance of topical media, could be called a violation of pragmatic expectations of rational discourse, like Grice’s prescriptive maxims.
Maybe a telecommunications analogy would work, making reference to amplifiers / repeaters / broadcast stations that degrade a received signal if they fail to filter / shape it to the characteristics of the retransmission channel.
“Rhetorical reenactment” sounds like “historical reenactment” and hints at the unproductive, not-directly-participatory role in the debate of the people sharing links.
I’m not sure whether to start a new comment thread on this, but a related phenomenon:
Blog A has a post about some subject. Blog B has a post that is mostly just recapitulating the points of Blog A, and links to Blog A. Blog C has a post also on the subject, and rather than linking to Blog A, links to Blog B. Blog D then comes along and links to Blog C, and so on, so rather than a bunch of blog posts all linking to the original post, you have a chain of blogs citing blogs citing blogs citing blogs. (This sort of phenomenon shows up a lot when Snopes tries to research something, although often it’s print media citing each other.) I’m reminded of the phrase “it’s turtles all the way down”, and think of this as “turtle citing”, although perhaps a more descriptive phrase would be “recursive citation”.
Another related phenomenon is people using anchor text for their links that really doesn’t reflect the actual link content.
/u/Morendil calls this ‘leprechauns’; in a Wikipedia context, one might use ‘citogenesis’. I run into this occasionally—most recently: https://en.wikipedia.org/wiki/Talk:Bicycle_face#Serious_sourcing_issues
This happens when the debaters’ personal level of knowledge and expertise has been exceeded by external sources introduced to the debate. Essentially, then, each person is using an appeal to authority with different ideas of what level of authority their sources have, since it is well beyond their abilities to verify their sources’ arguments. Terms like Epistemic Closure in political contexts address the related phenomenon of conflicting but self-consistent networks of authoritative sources.
I’d call the underlying issue an “Epistemic Divide”
For a year or two I’ve occasionally thought about writing a post about the principle that one shouldn’t try to explain what isn’t true, which feels to me important and deserves a catchy name. (But (1) I couldn’t think of clear-cut, uncontroversial, specific local examples to exhibit, and (2) it’s a special case of “Your Strength as a Rationalist” and “Fake Explanations”, so I haven’t bothered.)
Some people are more rational than others, but no one is “rational” simpliciter, because no one meets the stringent criterion of applying perfect Bayesian reasoning to everything (or even most things). Consequently, calling people “rational” without qualification is an inflationary use of the term.
Nonetheless, people on LW sometimes refer to rationality as if it’s a binary quality some people have and some people don’t, which doesn’t make sense to me. Searching LW for the phrase “rational people” returns similar examples of this. (In fairness, a lot of examples of the phrase refer to hypothetical ideal reasoners, or are ironic uses, which I’m OK with.)
It’d be useful to replace “rational” in these contexts with a word for someone who meets the looser standard of “thinks systematically & impartially about something without labouring under any obvious bias or appealing to fallacies” — basically the kind of ideal a traditional rationalist or sceptic might use. I’ve been using the word “quasi-rational”, but there’s probably a catchier word out there. (Pre-rational? Proto-rational? Sub-rational?)
No. When a word is used “simpliciter”, all obviously necessary qualifications are implicit. So when somebody is said to be rational, it means that, with regard to the things that are relevant in the context you are talking about, they are more rational than the usual standard (probably most people, or most people in some group that is obvious from the context).
So the term you are looking for is “rational”.
I don’t think that can be true in general. One of my examples had someone invoking Aumann’s agreement theorem as follows:
Interpreting “rational people” in a quantitative, “more rational than the usual standard” sense there won’t work, because Aumann’s agreement theorem assumes perfect Bayesian rationality, not merely better-than-usual rationality. I reckon the sentence I quoted is just plain false unless one interprets “rational people” in an absolute sense.
Yes, that statement is just plain false. The problem behind this is people referring to game theoretic agents as “[perfectly] rational people”, and then others hearing them assuming that the ‘rational people’ in game theory are the same kind as real ‘rational people’.
Rationality means more than one thing. One of the things it means is taking the pro-science, anti-god side in the Culture Wars. That may well be what it means when used as a binary.
Yes, people sometimes use “rational” to refer to that too. But using the word in that sense on LW has a much bigger risk of muddying the meaning of the term here, since the word’s local canonical meaning is quite different.
There is a pattern that has several names already, but each is problematic:
liberal: has strong conflict between literal meaning (open, permissive), and actual meaning (corresponding to a largely arbitrary political clustering)
leftist: more abstract than “liberal”, and thus without the literal meaning baggage, but tainted by its use by people on the right as a slur against anyone who opposes rightist extremism
political correctness: has been corrupted by its use to mean whatever a person wants it to mean, anything from hypersensitivity to any sensitivity at all
feminazi: no explanation needed, I think
social justice: even more of a literal meaning conflict than “liberal”. Strongly suggests that the issue actually is social justice, rather than mind-killing in ostensible service of social justice
social justice warrior: the best term I’ve seen, and the added “warrior” helps convey the sense of irrationality, but it still has many of the problems of “social justice”.
I really wish there were some good term for the leftist flavor of anti-rationality. Calling them “social justice warrior” just invites the question “Why are you opposed to social justice?”
There is one, and it’s quite specific: Lysenkoism. If you mean something other than the historical movement based on Lysenko’s theories, why not call it neo-Lysenkoism or something like that?