Feel the Meaning
When I hear someone say, “Oh, look, a butterfly,” the spoken phonemes “butterfly” enter my ear and vibrate on my eardrum, being transmitted to the cochlea, tickling auditory nerves that transmit activation spikes to the auditory cortex, where phoneme processing begins, along with recognition of words, and reconstruction of syntax (a by no means serial process), and all manner of other complications.
But at the end of the day, or rather, at the end of the second, I am primed to look where my friend is pointing and see a visual pattern that I will recognize as a butterfly; and I would be quite surprised to see a wolf instead.
My friend looks at a butterfly, his throat vibrates and lips move, the pressure waves travel invisibly through the air, my ear hears and my nerves transduce and my brain reconstructs, and lo and behold, I know what my friend is looking at. Isn’t that marvelous? If we didn’t know about the pressure waves in the air, it would be a tremendous discovery in all the newspapers: Humans are telepathic! Human brains can transfer thoughts to each other!
Well, we are telepathic, in fact; but magic isn’t exciting when it’s merely real, and all your friends can do it too.
Think telepathy is simple? Try building a computer that will be telepathic with you. Telepathy, or “language”, or whatever you want to call our partial thought transfer ability, is more complicated than it looks.
But it would be quite inconvenient to go around thinking, “Now I shall partially transduce some features of my thoughts into a linear sequence of phonemes which will invoke similar thoughts in my conversational partner...”
So the brain hides the complexity—or rather, never represents it in the first place—which leads people to think some peculiar thoughts about words.
As I remarked earlier, when a large yellow striped object leaps at me, I think “Yikes! A tiger!” not “Hm… objects with the properties of largeness, yellowness, and stripedness have previously often possessed the properties ‘hungry’ and ‘dangerous’, and therefore, although it is not logically necessary, auughhhh CRUNCH CRUNCH GULP.”
Similarly, when someone shouts “Yikes! A tiger!”, natural selection would not favor an organism that thought, “Hm… I have just heard the syllables ‘Tie’ and ‘Grr’ which my fellow tribe members associate with their internal analogues of my own tiger concept, and which they are more likely to utter if they see an object they categorize as aiiieeee CRUNCH CRUNCH help it’s got my arm CRUNCH GULP”.
Considering this as a design constraint on the human cognitive architecture, you wouldn’t want any extra steps between when your auditory cortex recognizes the syllables “tiger”, and when the tiger concept gets activated.
Going back to the parable of bleggs and rubes, and the centralized network that categorizes quickly and cheaply, you might visualize a direct connection running from the unit that recognizes the syllable “blegg”, to the unit at the center of the blegg network. The central unit, the blegg concept, gets activated almost as soon as you hear Susan the Senior Sorter say “Blegg!”
Or, for purposes of talking—which also shouldn’t take eons—as soon as you see a blue egg-shaped thing and the central blegg unit fires, you holler “Blegg!” to Susan.
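Here is a minimal sketch of that wiring in Python. The unit names, weights, and threshold are invented for illustration, not a claim about real neural architecture; the point is only that the label unit feeds the central unit directly, with no extra steps.

# Toy centralized network: every peripheral unit, including the
# auditory label unit, connects straight to one central concept unit.
FEATURES = ["blue", "egg-shaped", "furred", "flexible", "glows-in-dark"]

class BleggNetwork:
    def __init__(self, threshold=2.0):
        self.threshold = threshold
        self.activation = {f: 0.0 for f in FEATURES}
        self.activation["label:blegg"] = 0.0  # the syllable-recognizer unit

    def hear(self, syllables):
        # Hearing "blegg" drives the label unit hard enough to fire the
        # central unit by itself -- no intermediate reasoning about what
        # the speaker associates with the syllables.
        if syllables == "blegg":
            self.activation["label:blegg"] = 2.0

    def see(self, observed_features):
        for f in observed_features:
            if f in self.activation:
                self.activation[f] = 1.0

    def concept_active(self):
        # The central unit just sums its inputs and compares to threshold.
        return sum(self.activation.values()) >= self.threshold

    def speak(self):
        # Talking runs the same wire in reverse: the central unit fires,
        # the label unit fires, and you holler "Blegg!"
        return "Blegg!" if self.concept_active() else None

net = BleggNetwork()
net.see(["blue", "egg-shaped"])
print(net.speak())           # -> Blegg!

net = BleggNetwork()
net.hear("blegg")
print(net.concept_active())  # -> True: the label alone activates the concept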
And what that algorithm feels like from inside is that the label, and the concept, are very nearly identified; the meaning feels like an intrinsic property of the word itself.
The cognoscenti will recognize this as yet another case of E. T. Jaynes’s “Mind Projection Fallacy”. It feels like a word has a meaning, as a property of the word itself; just as redness is a property of a red apple, or mysteriousness a property of a mysterious phenomenon.
Indeed, on most occasions, the brain will not distinguish at all between the word and the meaning—only bothering to separate the two while learning a new language, perhaps. And even then, you’ll see Susan pointing to a blue egg-shaped thing and saying “Blegg!”, and you’ll think, I wonder what “blegg” means, and not, I wonder what mental category Susan associates to the auditory label “blegg”.
Consider, in this light, the part of the Standard Dispute of Definitions where the two parties argue about what the word “sound” really means—the same way they might argue whether a particular apple is really red or green:
Albert: “My computer’s microphone can record a sound without anyone being around to hear it, store it as a file, and it’s called a ‘sound file’. And what’s stored in the file is the pattern of vibrations in air, not the pattern of neural firings in anyone’s brain. ‘Sound’ means a pattern of vibrations.”
Barry: “Oh, yeah? Let’s just see if the dictionary agrees with you.”
Albert feels intuitively that the word “sound” has a meaning and that the meaning is acoustic vibrations. Just as Albert feels that a tree falling in the forest makes a sound (rather than causing an event that matches the sound category).
Barry likewise feels that:
sound.meaning == auditory experiences
forest.sound == false
Rather than:
myBrain.FindConcept("sound") == concept_AuditoryExperience
concept_AuditoryExperience.match(forest) == false
Which is closer to what’s really going on; but humans have not evolved to know this, any more than humans instinctively know the brain is made of neurons.
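To spell the contrast out, here is a runnable sketch in Python; the Brain and Concept classes, and the forest event, are invented for illustration.

# Naive model: meaning as an intrinsic property of the word itself
# (the Mind Projection Fallacy version).
class Word:
    def __init__(self, text, meaning):
        self.text = text
        self.meaning = meaning  # the meaning lives *in* the word

# Closer-to-true model: a brain maps labels to concepts, and a concept
# is a pattern that can (among other things) run membership tests.
class Concept:
    def __init__(self, name, test):
        self.name = name
        self.test = test  # a predicate over events

    def match(self, event):
        return self.test(event)

class Brain:
    def __init__(self):
        self.lexicon = {}  # label -> concept, private to this brain

    def associate(self, label, concept):
        self.lexicon[label] = concept

    def find_concept(self, label):
        return self.lexicon[label]

# A tree falls in a deserted forest:
forest = {"pressure_waves": True, "auditory_experience": False}

alberts_brain = Brain()
alberts_brain.associate("sound",
    Concept("acoustic vibrations", lambda e: e["pressure_waves"]))

barrys_brain = Brain()
barrys_brain.associate("sound",
    Concept("auditory experience", lambda e: e["auditory_experience"]))

# Same label, different concepts, different answers -- and no fact
# about the forest itself is in dispute:
print(alberts_brain.find_concept("sound").match(forest))  # True
print(barrys_brain.find_concept("sound").match(forest))   # False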
Albert and Barry’s conflicting intuitions provide the fuel for continuing the argument in the phase of arguing over what the word “sound” means—which feels like arguing over a fact like any other fact, like arguing over whether the sky is blue or green.
You may not even notice that anything has gone astray, until you try to perform the rationalist ritual of stating a testable experiment whose result depends on the facts you’re so heatedly disputing...
Albert and Barry’s different usages of the word ‘sound’ are both perfectly testable. Once they’ve taken the reasonable and sufficient step of looking ‘sound’ up in a dictionary, and identified the two (out of many) possible meanings they were using, one can go off and test for the presence of pressure waves in the air, while the other tests for auditory perceptions in the humans (and/or other animals endowed with hearing) nearest to the event. They can later compare their results: Albert will say ‘there was a sound, according to the definition I was using (Webster: sound(1) 1a),’ while Barry can happily agree while saying there wasn’t, according to the definition he was using (Webster: sound(1) 1b). Having got that over with, they will go off for a beer at the nearest bar and have a good laugh over that time-travelling guy’s not even knowing how to use a dictionary...
It would certainly facilitate communication, though, if people could agree on what words mean rather than having personal definitions. No doubt it’s unrealistic to expect everyone to agree on precisely where the boundary between yellow and orange lies, but tigers aren’t even a yellowish orange.
The words stand for abstractions, and abstractions suffer from an abstraction uncertainty principle: an abstraction cannot be simultaneously very useful (widely applicable) and very precise. The more useful a word is, the less precise it will be, and vice versa. Dictionary definitions are a compromise: they never use the most precise definitions even when such are available (e.g., for scientific terms), because such definitions are not useful for communication between most users of the dictionary. For example, if we defined red to be light with a frequency of exactly 430 THz, the definition would be precise but useless; but if we were to define it as a range, it would be widely useful but would almost certainly overlap with the ranges for other colours, thus leading to ambiguity.
(I think EY may even have a wiki entry on this somewhere)
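To make that tradeoff concrete, a small sketch in Python; the frequencies and range boundaries are invented for illustration:

# A maximally precise definition of "red" is nearly useless: almost no
# real light qualifies.
def red_precise(freq_thz):
    return freq_thz == 430.0

# Range definitions are useful, but the ranges must overlap to cover
# borderline hues, so some frequencies get more than one label.
COLOUR_RANGES = {"red": (400, 480), "orange": (470, 520), "yellow": (510, 540)}

def classify_by_range(freq_thz):
    return [name for name, (lo, hi) in COLOUR_RANGES.items()
            if lo <= freq_thz <= hi]

print(red_precise(430.0001))   # False: precise, therefore useless
print(classify_by_range(475))  # ['red', 'orange']: useful, but ambiguous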
It seems like you would claim that there is “meaningness” to a word. I would claim that you are essentializing a lack of process; namely, just because people do not process a difference between word and content does not mean that such a process is not possible, or that the lack of a process itself deserves a title.
This is a subtle point. I would like to clarify. My keyboard has “whiteness” in the sense that when I am looking at it I experience “white.” The claim that a word has “meaningness” would state that while using a word we “feel meaning.” But perhaps this “feeling of meaning” is just equivalent to the feeling of “using a word.”
My main point of (personal) evidence is that I am currently learning Japanese and have had significant experience (and failure) in attempting to directly absorb words. I find that to actually understand the language I must respond in the latter manner of the hypothetical language learner responding to hearing “Blegg” for the first time. There are elements of Japanese that are impossible to understand as having meaning, e.g. “particles” such as “ga,” “ha,” “wo,” etc. What is the definition of the word “the”? As a slightly less simplistic example, certain words like “omiyage,” which have no English synonym, can only be understood by a cultural outsider through precise comprehension of the relation of the word to the greater cultural context. If this is not done self-consciously (by asking “what are the mental/cultural processes which give meaning to this word?”) then it takes too long. So I do it consciously. Thus, Japanese words (and, increasingly, English words) do not have “meaningness.”
Once you start performing the processing that you have not been, the illusory “feeling” of word-as-meaning disappears.
Yelsgib, for “feels that” you may also read “falsely believes that” or “mistakenly intuits that”. I am claiming that words do not have meanings, but, rather, labels associate to concepts (cognitive patterns that can (among other things) perform membership tests).
If labels associate to concepts, what does the label “word” associate to?
You should be very careful when using terms like “falsely believes that” when referring to the way people are thinking. “False” as a label only has an association in the context of “verifiable fact.” This places the onus on you to show that the claim “words have meanings” lies in the context of “verifiable fact.” You must show that an entity is claiming implicitly or explicitly that the assertion “words have meanings” is “true” (a.k.a. consistent with the axioms of the context in which it is expressed). My claim would be that the statement “words have meanings” is actually the basis of a context—that the claim is “hollow” in the sense that the axioms of math are “hollow” (neither true nor false) but that it is useful in the very same sense—we can generate a set of deductively consistent (and more “powerful”) claims from the claim.
I hope you’ll forgive my constant use of quotes—I use them when I fear that my definition of a word might significantly vary from yours. I also hope that you’ll forgive my somewhat idiosyncratic use of language—I expect that we are coming at the question of human intelligence from at least slightly different intellectual backgrounds.
Is it sad that I mentally replaced ‘forest.sound == false’ with ‘!forest.sound’?
I’m loving these semantics/logic posts. Well done.
The easy solution is just to realize that words are labels and nothing more—end of story. It’s just that that’s quite a hard lesson to internalize.
I am new to this wiki (first post even) so I might be missing something, but is it really that hard a lesson to process? If I called a monkey a garp it’d still be exactly the same creature, therefore words are labels and have no meaning of themselves. Quite a simple train of thought. And I can’t think of a single emotional reason why anyone wouldn’t want to adopt this belief, since most people don’t care about words. Right?
I am going to assume that by now you’ve read enough of the Sequences to recognize your possible hindsight bias, in your post.
In any case, merely saying that “words are labels” is akin to guessing the teacher’s password; people have said it for ages (e.g., “a rose by any other name” from Romeo and Juliet), yet most people (in my opinion) do not truly understand it.
Korzybski is particularly good on The Word is Not the Thing and Consciousness of Abstracting, which resolves these kinds of issues immediately.
Well, you describe language somewhat as if it were designed for communication. If, as Chomsky et al. argue, it was not, if it is a thought machine with communication hastily and inconveniently added later, then:
1) it is a bad (no, really bad) idea to try to teach computers to speak language the way humans do; they should do better, and probably start with a different (functional) architecture;
2) sound 2b and sound 2c may have a different underlying structure which is simply compressed by the hasty externalization (a.k.a. communication) module.