Words as Hidden Inferences
Suppose I find a barrel, sealed at the top, but with a hole large enough for a hand. I reach in, and feel a small, curved object. I pull the object out, and it’s blue—a bluish egg. Next I reach in and feel something hard and flat, with edges—which, when I extract it, proves to be a red cube. I pull out 11 eggs and 8 cubes, and every egg is blue, and every cube is red.
Now I reach in and I feel another egg-shaped object. Before I pull it out and look, I have to guess: What will it look like?
The evidence doesn’t prove that every egg in the barrel is blue, and every cube is red. The evidence doesn’t even argue this all that strongly: 19 is not a large sample size. Nonetheless, I’ll guess that this egg-shaped object is blue—or as a runner-up guess, red. If I guess anything else, there’s as many possibilities as distinguishable colors—and for that matter, who says the egg has to be a single shade? Maybe it has a picture of a horse painted on.
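The strength of that guess can be made concrete. A minimal sketch, not from the post: if we assume a uniform prior over the fraction of blue egg-shaped objects in the barrel, Laplace's rule of succession gives the probability that the next one is blue after seeing 11 blue ones in 11 draws.

```python
# Sketch under an assumed uniform prior (my assumption, not the post's):
# Laplace's rule of succession says that after n successes in n trials,
# P(next trial succeeds) = (n + 1) / (n + 2).
def rule_of_succession(successes: int, trials: int) -> float:
    """Posterior predictive probability of success on the next trial."""
    return (successes + 1) / (trials + 2)

# 11 egg-shaped objects drawn so far, all of them blue:
p_blue = rule_of_succession(11, 11)
print(f"{p_blue:.2f}")  # 0.92: a confident guess, not a certainty
```

So "blue" is a strong bet even from a small sample, while remaining visibly a bet.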
So I say “blue”, with a dutiful patina of humility. For I am a sophisticated rationalist-type person, and I keep track of my assumptions and dependencies—I guess, but I’m aware that I’m guessing… right?
But when a large yellow striped feline-shaped object leaps out at me from the shadows, I think, “Yikes! A tiger!” Not, “Hm… objects with the properties of largeness, yellowness, stripedness, and feline shape, have previously often possessed the properties ‘hungry’ and ‘dangerous’, and thus, although it is not logically necessary, it may be an empirically good guess that aaauuughhhh CRUNCH CRUNCH GULP.”
The human brain, for some odd reason, seems to have been adapted to make this inference quickly, automatically, and without keeping explicit track of its assumptions.
And if I name the egg-shaped objects “bleggs” (for blue eggs) and the red cubes “rubes”, then, when I reach in and feel another egg-shaped object, I may think: Oh, it’s a blegg, rather than considering all that problem-of-induction stuff.
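The work the word "blegg" is doing can be sketched in a few lines (the table and function names here are mine, purely illustrative): the label bundles an observed feature with properties not yet observed, so applying the word licenses a prediction.

```python
# Illustrative sketch: a category label maps an observed feature to
# inferred, not-yet-observed properties. Naming the object IS the inference.
CATEGORIES = {
    "egg-shaped": {"name": "blegg", "predicted_color": "blue"},
    "cube-shaped": {"name": "rube", "predicted_color": "red"},
}

def categorize(felt_shape: str) -> dict:
    """Map a felt shape to a category, carrying its unobserved predictions."""
    return CATEGORIES.get(felt_shape, {"name": "unknown", "predicted_color": None})

print(categorize("egg-shaped"))  # predicts "blue" before the object is seen
```

Saying "it's a blegg" and saying "I predict it will be blue" are, on this picture, the same mental act.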
It is a common misconception that you can define a word any way you like.
This would be true if the brain treated words as purely logical constructs, Aristotelian classes, and you never took out any more information than you put in.
Yet the brain goes on about its work of categorization, whether or not we consciously approve. “All humans are mortal, Socrates is a human, therefore Socrates is mortal”—thus spake the ancient Greek philosophers. Well, if mortality is part of your logical definition of “human”, you can’t logically classify Socrates as human until you observe him to be mortal. But—this is the problem—Aristotle knew perfectly well that Socrates was a human. Aristotle’s brain placed Socrates in the “human” category as efficiently as your own brain categorizes tigers, apples, and everything else in its environment: Swiftly, silently, and without conscious approval.
Aristotle laid down rules under which no one could conclude Socrates was “human” until after he died. Nonetheless, Aristotle and his students went on concluding that living people were humans and therefore mortal; they saw distinguishing properties such as human faces and human bodies, and their brains made the leap to inferred properties such as mortality.
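The gap between the two modes of classification can be made explicit. A toy contrast (my framing, not the author's): a strictly definitional classifier cannot place anyone in the category "human" until every defining property, mortality included, has been observed; a perceptual classifier leaps from visible features to the category, and then infers mortality from membership.

```python
# Toy contrast between definitional and perceptual classification.
def definitional_human(observed: set) -> bool:
    # Requires the full definition, mortality included, before classifying.
    return {"has_human_form", "is_mortal"} <= observed

def perceptual_human(observed: set) -> bool:
    # Classifies on visible features alone...
    return "has_human_form" in observed

socrates = {"has_human_form"}        # mortality not yet observed
print(definitional_human(socrates))  # False: cannot classify him yet
print(perceptual_human(socrates))    # True: ...and mortality is then inferred
```

Aristotle's stated rules follow the first function; his brain, like everyone's, ran the second.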
Misunderstanding the working of your own mind does not, thankfully, prevent the mind from doing its work. Otherwise Aristotelians would have starved, unable to conclude that an object was edible merely because it looked and felt like a banana.
So the Aristotelians went on classifying environmental objects on the basis of partial information, the way people had always done. Students of Aristotelian logic went on thinking exactly the same way, but they had acquired an erroneous picture of what they were doing.
If you asked an Aristotelian philosopher whether Carol the grocer was mortal, they would say “Yes.” If you asked them how they knew, they would say “All humans are mortal, Carol is human, therefore Carol is mortal.” Ask them whether it was a guess or a certainty, and they would say it was a certainty (if you asked before the sixteenth century, at least). Ask them how they knew that humans were mortal, and they would say it was established by definition.
The Aristotelians were still the same people, they retained their original natures, but they had acquired incorrect beliefs about their own functioning. They looked into the mirror of self-awareness, and saw something unlike their true selves: they reflected incorrectly.
Your brain doesn’t treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity. Or block inferences of similarity; if I create two labels I can get your mind to allocate two categories. Notice how I said “you” and “your brain” as if they were different things?
Making errors about the inside of your head doesn’t change what’s there; otherwise Aristotle would have died when he concluded that the brain was an organ for cooling the blood. Philosophical mistakes usually don’t interfere with blink-of-an-eye perceptual inferences.
But philosophical mistakes can severely mess up the deliberate thinking processes that we use to try to correct our first impressions. If you believe that you can “define a word any way you like”, without realizing that your brain goes on categorizing without your conscious oversight, then you won’t take the effort to choose your definitions wisely.
Incorrect. It is not a misconception. There are consequences of choosing to define a word that can lead to error if they are ignored, but that does not constrain the definition.
Also incorrect. Mortality can be a trait possessed by all humans, yet not be needed to identify something as human. If Socrates meets all the necessary criteria for identification as human, we do not need to observe his mortality to conclude that he is mortal.
It is a trivial objection to say that the definition of human might not reflect the nature of the world. That is the case with all definitions: we can label concepts as we please, but it requires justification to assert that the concepts are present in reality.
You’re absolutely right. You can define a word any way you like. Almost all definitions are useless or even anti-useful.
I think this is in the context of somebody insisting that Socrates is human so he must be mortal.
If you are trying to prove mortality by claiming he’s human, then all humans must be mortal for you to assume this.
I agree, though, that perhaps the statement was a little vague.
Replying loooong after the fact (as you did, for that matter) but I think that’s exactly the problem that the post is talking about. In logical terms, one can define a category “human” such that it carries an implication “mortal”, but if one does that, one can’t add things to this category until determining that they conform to the implication.
The problem is, the vast majority of people don’t think that way. They automatically recognize “natural” categories (including, sometimes, of unnatural things that appear similar), and they assign properties to the members of those categories, and then they assume things about objects purely on the basis of appearing to belong to that category.
Suppose you encountered a divine manifestation, or an android with a fully-redundant remote copy of its “brain”, or a really excellent hologram, or some other entity that presented as human but was by no conventional definition of the word “mortal”. You would expect that, if shot in the head with a high-caliber rifle, it would die; that’s what happens to humans. You would even, after seeing it get shot, fall over, stop breathing, cease to have a visible pulse, and so forth, conclude that it is dead. You probably wouldn’t ask this seeming corpse “are you dead?”, nor would you attempt to scan its head for brain activity (medically defining “dead” today is a little tricky, but “no brain activity at all” seems like a reasonable bar).
All of this is reasonable; you have no reason to expect immortal beings walking among us, or non-breathing headshot victims to be capable of speech, or anything else of that nature. These assumptions go so deep that it is hard to even say where they come from, other than “I’ve never heard of that outside of fiction” (which is an imperfect heuristic; I learn of things I’d never heard about every day, and I even encountered some of the concepts in fiction before learning they really exist). Nobody acknowledges that it’s a heuristic, though, and that can lead to making incorrect assumptions that should be consciously avoided when there’s time to consider the situation.
@Caledonian2 said “If Socrates meets all the necessary criteria for identification as human, we do not need to observe his mortality to conclude that he is mortal.”, but this statement is self-contradictory unless the implication “human” → “mortal” is logically false. Otherwise, mortality itself is part of “the necessary criteria for identification as human”.
Eliezer said: “Your brain doesn’t treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity.”
What alternative model would you propose? I’m not quite ready yet to stop using words that imperfectly place objects into categories. I’ll keep the fact that categories are imperfect in mind.
I really don’t mean this in a condescending way. I’m just not sure what new belief this line of reasoning is supposed to convey.
I think I would agree with Charlie Munger that more mistakes have been made from inferential (“run from the tiger”) shortcuts than from the use of logic. Such shortcuts as proximity bias, following perceived leaders, doing things because people around us are doing them, loving similar-looking people and hating different-looking people, and similar errors are most likely caused by evolutionary hard-wiring, not by philosophical ponderings. I have dedicated a section of my blog to Munger here: http://www.blogger.com/posts.g?blogID=36218793&searchType=ALL&txtKeywords=&label
Now I reach in and I feel another egg-shaped object. … So I say “blue”
Ah, an understandable mistake. Those of us paying attention know though that after all of those blue eggs the next egg almost certainly must be red.
Mathematics and probability theory are completely worthless. You never get out anything except what you put in!
On the other hand, some of us find it extremely useful to get out what we put in, even by mere logical reasoning.
I am distinct from my brain. My brain does a lot of stuff without consulting me at all.
JESUS CHRIST IT’S A LION GET IN THE CAR!
The brain uses holistic processing to identify, say, a face, bypassing the logical process of identification, which is not nearly as effective.
Aristotle needed Socrates. Maybe Plato was listening.
Anna
Reactions to 500lb stripy feline things jumping unexpectedly come from pre-verbal categorisations (the ‘low road’, in Daniel Goleman’s terms), so have nothing to do with word definitions. The same is true for many highly emotionally charged categorisations (e.g., for a previous generation, a person with skin colour different from mine). Words themselves do get their meanings from networks of associations. The content of these networks can drift over time, for an individual as for a culture. Words change their meanings. A deliberate attempt to change the meaning of a word by introducing new associations (e.g. via the media) can be successful. Changes in the meanings of political labels, or the associations with a person’s name, are good examples. Whether the direct amygdala circuit can be reprogrammed is a different matter. Certainly not as easily as the neocortex. If you lived in the world of Calvin and Hobbes for six months, would you start to instinctively see large stripy feline things jumping out at you unexpectedly as an invitation to play?
I suppose I should add, for those who are really stuck in maths or formal logic, that changing the definition of a symbol in a formal system is not the same thing as changing the meaning of a word in a language. In fact you can’t, individually and as a decision of will, change the meaning of a word in a language. It either changes, as per my previous comment, or it doesn’t.
Unless you’re Dan Savage, of course.
New phrases are coined constantly, and people change the meanings of existing words also: ‘gay’ being a good example as it’s changed twice in recent history. Presumably there was some person that started that particular definition-shift, does that not count as “individually as a decision of will”?
The tiger, on the other hand, is a committed Platonist.
Our tendency to unconsciously draw inferences through inductive thought is a real problem.
The issue of word definitions is just a red herring.
We are very imprecise in this way because it is very rare that we split the sign into signified and signifier. If you know that a ‘Tiger’ thing can kill, it is perhaps best not to worry about the signification of the form and the entropy of its relations—it’s best to run.
I have created an exercise that goes with this post. Use it to solidify your knowledge of the material.
I was reading Nietzsche and found something striking. Compare this, from Eliezer:
and this, from Nietzsche:
Nietzsche doesn’t have a modern grasp of how evolution works, but his intuitions on cognition were far sharper than any of his contemporaries. That’s partially why I think he still has something to offer.
I kind-of doubt that Aristotelians saw many banana-like objects, edible or otherwise, anyway. ;-)
I think this is exciting. I’m going to start making my own words for groups of things. I’m a java/.net programmer, so I’m used to object-oriented programming, and it’s natural for me to group things that may be used again!