Extensions and Intensions
“What is red?”
“Red is a color.”
“What’s a color?”
“A color is a property of a thing.”
But what is a thing? And what’s a property? Soon the two are lost in a maze of words defined in other words, the problem that Stevan Harnad once described as trying to learn Chinese from a Chinese/Chinese dictionary.
Alternatively, if you asked me “What is red?” I could point to a stop sign, then to someone wearing a red shirt, and a traffic light that happens to be red, and blood from where I accidentally cut myself, and a red business card, and then I could call up a color wheel on my computer and move the cursor to the red area. This would probably be sufficient, though if you know what the word “No” means, the truly strict would insist that I point to the sky and say “No.”
I think I stole this example from S. I. Hayakawa—though I’m really not sure, because I heard this way back in the indistinct blur of my childhood. (When I was 12, my father accidentally deleted all my computer files. I have no memory of anything before that.)
But that’s how I remember first learning about the difference between intensional and extensional definition. To give an “intensional definition” is to define a word or phrase in terms of other words, as a dictionary does. To give an “extensional definition” is to point to examples, as adults do when teaching children. The preceding sentence gives an intensional definition of “extensional definition”, which makes it an extensional example of “intensional definition”.
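To make the distinction concrete, here is a minimal sketch in Python (the objects, property names, and thresholds are all hypothetical, chosen purely for illustration): an extensional “definition” just enumerates instances, while an intensional one tests properties stated in other terms.

```python
# Extensional: "define" red by pointing at examples.
# (These strings are hypothetical stand-ins for real objects.)
red_examples = {"stop sign", "red shirt", "traffic light (red)", "blood", "red business card"}

def is_red_extensional(thing: str) -> bool:
    # Only covers things we have already pointed at.
    return thing in red_examples

# Intensional: define red in terms of other properties (here, hue on a color wheel).
def is_red_intensional(hue_degrees: float) -> bool:
    # Treat hues near 0/360 degrees as red; the cutoff is an arbitrary illustration.
    return hue_degrees % 360 < 20 or hue_degrees % 360 > 340

print(is_red_extensional("stop sign"))   # True
print(is_red_intensional(355.0))         # True
```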
In Hollywood Rationality and popular culture generally, “rationalists” are depicted as word-obsessed, floating in endless verbal space disconnected from reality.
But the actual Traditional Rationalists have long insisted on maintaining a tight connection to experience:
“If you look into a textbook of chemistry for a definition of lithium, you may be told that it is that element whose atomic weight is 7 very nearly. But if the author has a more logical mind he will tell you that if you search among minerals that are vitreous, translucent, grey or white, very hard, brittle, and insoluble, for one which imparts a crimson tinge to an unluminous flame, this mineral being triturated with lime or witherite rats-bane, and then fused, can be partly dissolved in muriatic acid; and if this solution be evaporated, and the residue be extracted with sulphuric acid, and duly purified, it can be converted by ordinary methods into a chloride, which being obtained in the solid state, fused, and electrolyzed with half a dozen powerful cells, will yield a globule of a pinkish silvery metal that will float on gasolene; and the material of that is a specimen of lithium.”
— Charles Sanders Peirce
That’s an example of “logical mind” as described by a genuine Traditional Rationalist, rather than a Hollywood scriptwriter.
But note: Peirce isn’t actually showing you a piece of lithium. He didn’t have pieces of lithium stapled to his book. Rather he’s giving you a treasure map—an intensionally defined procedure which, when executed, will lead you to an extensional example of lithium. This is not the same as just tossing you a hunk of lithium, but it’s not the same as saying “atomic weight 7” either. (Though if you had sufficiently sharp eyes, saying “3 protons” might let you pick out lithium at a glance...)
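In code terms, Peirce’s treasure map is an intensionally specified test procedure which, only when actually executed over candidate samples, hands you an extensional example. A toy sketch, with invented property names and none of the real chemistry:

```python
# A toy "treasure map": run an intensionally specified test over samples,
# and the survivors are your extensional examples.
# Property names and data are invented for illustration only.

def passes_lithium_tests(sample: dict) -> bool:
    return (sample["vitreous"]
            and sample["hard"]
            and sample["flame_tinge"] == "crimson")

samples = [
    {"name": "sample_a", "vitreous": True, "hard": True,  "flame_tinge": "crimson"},
    {"name": "sample_b", "vitreous": True, "hard": False, "flame_tinge": "yellow"},
]

specimens = [s["name"] for s in samples if passes_lithium_tests(s)]
print(specimens)  # ['sample_a'] -- the procedure, executed, yields the example
```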
So that is intensional and extensional definition, which is a way of telling someone else what you mean by a concept. When I talked about “definitions” above, I talked about a way of communicating concepts—telling someone else what you mean by “red”, “tiger”, “human”, or “lithium”. Now let’s talk about the actual concepts themselves.
The actual intension of my “tiger” concept would be the neural pattern (in my temporal cortex) that inspects an incoming signal from the visual cortex to determine whether or not it is a tiger.
The actual extension of my “tiger” concept is everything I call a tiger.
Intensional definitions don’t capture entire intensions; extensional definitions don’t capture entire extensions. If I point to just one tiger and say the word “tiger”, the communication may fail if they think I mean “dangerous animal” or “male tiger” or “yellow thing”. Similarly, if I say “dangerous yellow-black striped animal”, without pointing to anything, the listener may visualize giant hornets.
You can’t capture in words all the details of the cognitive concept—as it exists in your mind—that lets you recognize things as tigers or nontigers. It’s too large. And you can’t point to all the tigers you’ve ever seen, let alone everything you would call a tiger.
The strongest definitions use a crossfire of intensional and extensional communication to nail down a concept. Even so, you only communicate maps to concepts, or instructions for building concepts—you don’t communicate the actual categories as they exist in your mind or in the world.
(Yes, with enough creativity you can construct exceptions to this rule, like “Sentences Eliezer Yudkowsky has published containing the term ‘huragaloni’ as of Feb 4, 2008”. I’ve just shown you this concept’s entire extension. But except in mathematics, definitions are usually treasure maps, not treasure.)
So that’s another reason you can’t “define a word any way you like”: You can’t directly program concepts into someone else’s brain.
Even within the Aristotelian paradigm, where we pretend that the definitions are the actual concepts, you don’t have simultaneous freedom of intension and extension. Suppose I define Mars as “A huge red rocky sphere, around a tenth of Earth’s mass and 50% further away from the Sun”. It’s then a separate matter to show that this intensional definition matches some particular extensional thing in my experience, or indeed, that it matches any real thing whatsoever. If instead I say “That’s Mars” and point to a red light in the night sky, it becomes a separate matter to show that this extensional light matches any particular intensional definition I may propose—or any intensional beliefs I may have—such as “Mars is the God of War”.
But most of the brain’s work of applying intensions happens sub-deliberately. We aren’t consciously aware that our identification of a red light as “Mars” is a separate matter from our verbal definition “Mars is the God of War”. No matter what kind of intensional definition I make up to describe Mars, my mind believes that “Mars” refers to this thingy, and that it is the fourth planet in the Solar System.
When you take into account the way the human mind actually, pragmatically works, the notion “I can define a word any way I like” soon becomes “I can believe anything I want about a fixed set of objects” or “I can move any object I want in or out of a fixed membership test”. Just as you can’t usually convey a concept’s whole intension in words because it’s a big complicated neural membership test, you can’t control the concept’s entire intension because it’s applied sub-deliberately. This is why arguing that XYZ is true “by definition” is so popular. If definition changes behaved like the empirical nullops they’re supposed to be, no one would bother arguing them. But abuse definitions just a little, and they turn into magic wands—in arguments, of course; not in reality.
It’s ‘Peirce’, not ‘Pierce’.
Arg. I know that, but my fingers don’t obey.
Alternatively, if you asked me “What is red?” I could point to a stop sign, then to someone wearing a red shirt, and a traffic light that happens to be red, and blood from where I accidentally cut myself, and a red business card, and then I could call up a color wheel on my computer and move the cursor to the red area. This would probably be sufficient,
Ah, so that’s what “red” is! Man, that has stumped me for SO long. It all makes sense now! Red is the set: {some stop sign, some guy, some traffic light, some blood on Eliezer_Yudkowsky’s body, a business card, and a cursor on a portion of Eliezer_Yudkowsky’s screen}
But when would I ever need to use that?
The part about the “truly strict”... well, that doesn’t actually clarify it either.
Silas, that’s actually a pretty good way to capture some of the major theories about color—ostensive definition for a given color solves a lot of problems.
But I wish Eliezer had pointed out that intensional definitions allow us to use kinds of reasoning that extensional definitions don’t … how do you do deduction on an extensional definition?
Also, extensional definitions are harder to communicate with interpersonally. I can wear two shirts, both of which I would call “purple,” and someone else would call one “mauve” and the other “taupe” (or something like that—I’m not even sure what those last two colors are). Whereas if we’d defined the colors on wavelengths of light, well, we know what we’re talking about. It’s harder to get overlap between people with extensional definitions than with intensional ones.
Mauve is a light grayish purple, reasonably likely to appear in the sky soon after sunset. Taupe is some sort of brown. I was bewildered by the top example at the wikipedia article—it’s much darker than what I think of as taupe. It turns out (page down a ways) that what I had in mind was sandy taupe—the Crayola version.
Silas: red is not the set, but what all of those things have in common. The set would be most effective if you presented a sequence of examples that was different in every way except in color. To be extra sure of getting the point across, you could present examples that are exactly the same, except in color, and then say one was “red” and the other was “not red”—a whole educational philosophy has been built up out of this (look up Siegfried Engelmann and Direct Instruction). Of course this method of communication assumes that the audience is sighted, not colorblind, understands the concept of “same” and “different”, etc.
I think Silas brings up a fair point. Ostensive definition in isolation can be pretty darn hard. It has to compete with every other likely usage-interpretation.
These last couple of posts on definitions have been very good.
Another definitional strategy prone to abuse is coinage or creation of neologisms, sometimes used to sneak assumptions into a debate that would require significant support otherwise.
For one example, I have noticed the use of the term ‘technoscience’ or ‘technoscientific’ in rhetoric concerning science and technology. The use of this term is striking given the pretty obvious differences between science and technology as domains and activities in the real world. One must be making a very imprecise point for it to apply equally well to both science and technology in one breath. Use of this term might be nothing more than a symptom of this imprecision, but can also be thought of as stipulating an unsupported conclusion in itself. That is, anyone meeting the argument on its terms implicitly agrees that technology and science are identical for purposes of reasoning about them.
There are many other examples, I’m sure.
Wow, this is the most response I’ve ever gotten to an Overcoming_Bias comment O.o
My point was just, as Benquo noted, that definition that way (extensive) competes with every other conceivable interpretation. The success of such definitions in conveying the meaning suggests sufficient common understanding between the people to rule out the infinity of (“obviously” ridiculous) solutions, and therefore that the describer hasn’t actually excluded all the wrong answers. But, that was close enough to Eliezer_Yudkowsky’s point in the rest of the post, so, go fig.
I just mentioned it because his post reminded me of a passage I recently read in Steven Pinker’s The Stuff of Thought, where he mentions an exchange between a child and his father, showing that parental corrections do not suffice to define (rule out all wrong-sounding syntax) all of the rules we naturally use when speaking languages.
child: I turned the raining off.
father: You mean, you turned the sprinkler off?
child: I turned the raining off of the sprinkler.
that should read Stevan Harnad
For some reason, I’m reminded of the passage from the opening of Augustine’s Confessions—in the true spirit of autobiography, he describes how he learned words and ideas as an infant by being shown extensional definitions:
The temptation to create a Wikipedia article for “Sentences Eliezer Yudkowsky has published containing the term ‘huragaloni’ as of Feb 4, 2008” is very strong, but I will resist.
“Trying to learn Chinese from a Chinese/Chinese dictionary”. I first tried to learn Chinese from a children’s book. I learned “thatched cottage” before “house”… funny when speaking with my Chinese friends.
By the way, two nice Chinese dictionaries:
http://www.chinese-tools.com/tools/dictionary.html (with audio + examples + calligraphy)
http://www.chinese-dictionary.org (multilingual, chinese vs english, french, spanish...)
It’s easy to teach a dog what words mean, provided the dog has some interest you can quickly show in the thing meant.
I wrote out on a napkin, one day when she was two, all the words and phrases that my Doberman Susie definitely knew in context, and came up with 200.
All of them were for things that involved her somehow. The most direct naming of things was for toys; but commands and so forth, and the ever-versatile “fetch the …” (where … is something fetchable), provided a link to lots of items you could name. Her interest was then in fetching, and only indirectly in the name of the thing.
People are no different. To teach what red is, you need some interest in red.
I once saw a person from Korea discover, much to her surprise, that pennies are not red. She had been able to speak English for a while and could correctly identify a stop sign or blood as red, and she had seen plenty of pennies before discovering this.
In Korea they put the color of pennies and the color of blood in the same category and give that category a Korean name.
And in Hungarian they put the colour of stop signs and the colour of blood in different categories.
Pretty late on this, but just in case, a few points:
Someone’s already sort of mentioned this, but your first example (defining “red”) is by ostension, not by extension. Defining something by extension, especially something like “red”, would require pointing out an infinite number of things.
You were probably just careless in your choice of words, but “the neural pattern (in my temporal cortex) that inspects an incoming signal from the visual cortex to determine whether or not it is a tiger” is a good example of Betty Crocker’s Theory of Microwave Cooking (cf. http://books.google.ca/books?id=9JGOmd66jGsC&pg=PA121&lpg=PA121&dq=churchland+betty+crocker+microwave&source=bl&ots=5JDuIkAIe6&sig=6KK3pE7xUE12U5O5C5DKaP2gz_c&hl=en&ei=HRsbS46YDsWKlQf7_rm6BA&sa=X&oi=book_result&ct=result&resnum=1&ved=0CA0Q6AEwAA#v=onepage&q=&f=false).
Your final point about redefining words “any way I like” being the same as changing/reassigning beliefs is exactly right... our words can only “mean” our intensions of them, and since we can’t/don’t communicate intensions, the chain ends there.
An important algorithm for attempting to translate extensional definition into intensional definition is Mitchell’s version spaces:
http://en.wikipedia.org/wiki/Version_space
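Not Mitchell’s efficient candidate-elimination algorithm, but a brute-force sketch of the same idea (toy attributes and labels invented for illustration): enumerate a small space of conjunctive hypotheses and keep the ones consistent with the labeled examples. The survivors are the version space—the candidate intensional definitions still compatible with the extensional data seen so far.

```python
from itertools import product

# Toy attribute domains; "?" in a hypothesis means "don't care".
domains = {
    "color": ["yellow", "white"],
    "stripes": ["black", "none"],
    "size": ["large", "small"],
}

def all_hypotheses():
    # Every conjunctive hypothesis: each attribute is a fixed value or "?".
    options = [values + ["?"] for values in domains.values()]
    for combo in product(*options):
        yield dict(zip(domains.keys(), combo))

def matches(hypothesis, example):
    return all(v == "?" or example[k] == v for k, v in hypothesis.items())

# Extensional data: labeled examples of some target concept.
labeled_examples = [
    ({"color": "yellow", "stripes": "black", "size": "large"}, True),
    ({"color": "yellow", "stripes": "black", "size": "small"}, True),
    ({"color": "white",  "stripes": "none",  "size": "large"}, False),
]

# The version space: every hypothesis consistent with all labels seen so far.
version_space = [h for h in all_hypotheses()
                 if all(matches(h, ex) == label for ex, label in labeled_examples)]

for h in version_space:
    print(h)
```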
S.I. Hayakawa is mentioned in this article instead of Alfred Korzybski, but the Intensional vs. Extensional distinction was one of the fundamental distinctions of AK, along with The Map is Not the Territory, The Word is not the Thing, etc.
What is the benefit of knowing this?
It’s a trivial distinction.
Why do you feel the need to do this?
Correction for future note: The extensional definition is the complete set of objects obeying a definition. To define a thing by pointing out some examples (without pointing out all possible examples) has the name “ostensive definition”. H/t @clonusmini on Twitter. Original discussion in “Language in Thought and Action” here.
“you only communicate maps to concepts, or instructions for building concepts—you don’t communicate the actual categories as they exist in your mind or in the world.”
Really? I’ve always defined definition as either the sorting algorithm between, or description of the boundary between, two categories, yes-thing and not-thing.
Of course you’re not giving me your neuronal arrangement, but you ought to be giving me one of two things that any agent, on any substrate, can use to sort every possible thing into yes-thing and not-thing, the same way you do.
If, after receiving a “definition”, I (and any/every rational agent) am not able to apply the algorithm or description of the boundary between, and sort everything into yes-thing and not-thing the same way as you do, then what you’ve given me isn’t really a definition (by my definition).
Using this definition of definition cuts down lots of useless (or anti-useful) definitions people try to give. I find that bad definitions are at the root of most stupid disagreements, both on the internet and IRL.
I find myself always struggling with these concepts, coming back to this post, kinda-sorta understanding it, but still rather confused. Some questions and comments:
I’ve never heard this before. Where does it come from? Like, the idea of extensions and intensions, does it come from linguistics? Philosophy? English? Is it widely agreed on/applied?
Why use the word “extension”? How does it relate to the typical use of the word? Like, you can extend a 10 chapter book by writing an 11th chapter. More generally, when you extend something, you add more of a thing to that thing. But here, with extensional definitions, you’re not making a thing bigger. You’re just… giving examples of it. So I have a sense that something like “example-oriented definition” would be more appropriate than “extensional definition”.
Same with “intensional definition”. I think “intensional” relates to “intense” rather than “intent”. But I don’t see what it has to do with intensity or intentionality.
The idea of distinguishing between symbols and referents here makes sense to me. Like, yes, definitions are symbols not referents. So in some sense they are maps not territory. But I feel like an actual definition is supposed to be complete. Yes, especially for extensional definitions, the set of examples that fit is usually going to be too large to fit on a piece of paper, but I feel like that just means that the definition is incomplete. Not that definitions themselves are supposed to be incomplete.
“you can’t control the concept’s entire intension because it’s applied sub-deliberately”. I really like this. I’m thinking about it as opposed to a computer program. In a computer program, you could have one statement saying `let name = 'alice'` and then, later on, have another statement saying `name = 'bob'`. And boom: you just changed the “definition” of `name`. But with humans, it’s not so simple. It’s more wishy-washy. Using this analogy, if computers behaved like humans, it’d be something like “`name`? That’s gotta be `'alice'`. I’ve accessed `name` so many times and it’s always been `'alice'`.” Or other times, “`name`? Hmmm. I know it used to be `'alice'`, but I remember it being reassigned to something else. What was it reassigned to? Oh yeah, `'bob'`.”
In other words, the computer would have a hard time finding the correct value. Sometimes it’d mistakenly use an old, incorrect value. Other times it’d find the correct value, but only after some time and effort. Sorta like a busted cache. So yeah, when you redefine an English word, I guess you gotta keep in mind that people already use caches, you’d need to invalidate all of these caches, but in practice that won’t happen, so you’re gonna get people who use the old and now incorrect value a lot, and even when you avoid this, it’s going to mean that people can’t read from the cache anymore, so reads are going to take longer, and it’s especially going to take longer to populate the cache with the new value.
The concept of “extensional” and “intensional” definitions is a traditional distinction in philosophy and logic.
This is really elegant. Worth taking a beat to digest
Of course, it turned out that LLMs do this just fine, thank you.
I don’t think LLMs do the equivalent of that. It’s more like, learning Chinese from a Chinese/Chinese dictionary stapled to a Chinese encyclopedia.
It is not obvious to me that using a Chinese/Chinese dictionary, purged of example sentences, would let you learn, even in theory, things that even a simple n-gram or word2vec model trained on a non-dictionary corpus learns and encodes into its embeddings. For example, would a Chinese/Chinese dictionary let you plot cities by longitude & latitude? (Most dictionaries do not try to list all names, leaving that to things like atlases or gazetteers, because they are about the language, and not a specific place like China, after all.)
Note that the various examples from machine translation you might think of, such as learning translation while having zero parallel sentences/translations, are usually using corpuses much richer than just an intra-language dictionary.
I don’t doubt that LLMs could do this, but has this exact thing actually been done somewhere?
I’ve not read the paper but something like https://arxiv.org/html/2402.19167v1 seems like the appropriate experiment.