Actors and scribes, words and deeds
[Epistemic status: exploratory exercise in naming and concept-formation.]
Among the kinds of people are the Actors and the Scribes. Actors mainly relate to speech as action that has effects. Scribes mainly relate to speech as a structured arrangement of pointers that have meanings.
I previously described this as a distinction between promise-keeping “Quakers” and impulsive “Actors,” but I think this missed a key distinction. There’s “telling the truth,” and then there’s a more specific thing that’s more obviously distinct from even Actors who are trying to make honest reports: keeping precisely accurate formal accounts. This leaves out some other types – I’m not exactly sure how it relates to engineers and diplomats, for instance – but I think I have the right names for these two things now.
Summary
Everyone agrees that words have meaning; they convey information from the speaker to the listener or reader. That’s all they do. So when I used the phrase “words have meanings” to describe one side of a divide between people who use language to report facts, and people who use language to enact roles, was I strawmanning the other side?
I say no. Many common uses of language, including some perfectly legitimate ones, are not well-described by “words have meanings.” For instance, people who try to use promises like magic spells to bind their future behavior don’t seem to consider the possibility that others might treat their promises as a factual representation of what the future will be like.
Some uses of language do not simply describe objects or events in the world, but are enactive, designed to evoke particular feelings or cause particular actions. Even when speech can only be understood as a description of part of a model of the world, the context in which a sentence is uttered often implies an active intent, so if we only consider the direct meaning of the text, we will miss the most important thing about the sentence.
Some apparent uses of language’s denotative features may in fact be purely enactive. This is possible because humans initially learn language mimetically, and try to copy usage before understanding what it’s for. Primarily denotative language users are likely to assume that structural inconsistencies in speech are errors, when they’re often simply signs that the speech is primarily intended to be enactive.
Enactive language
Some uses of words are enactive: ways to build or reveal momentum. Others denote the position of things on your world-map.
In the denotative framing, words largely denote concepts that refer to specific classes of objects, events, or attributes in the world, and should be parsed as such. The meaning of a sentence is mainly decomposable into the meanings of its parts and their relations to each other. Words have distinct meanings that can be composed together in structures to communicate complex and nonobvious messages, not just uses and connotations.
In the enactive mode, the function of speech is to produce some action or disposition in your listener, who may be yourself. Ideas are primarily associative, reminding you of the perceptions with which the speech-act is associated. Other uses of language are structural: when you speak in this mode, it’s to describe models—relationships between concepts, which in turn refer to classes of objects in the world.
When I wrote about admonitions as performance-enhancing speech, I gave the example of someone being encouraged by their workout buddies:
Recently, at the gym, I overheard a group of exercise buddies admonishing their friend on some machine to keep going with each rep. My first thought was, “why are they tormenting their friend? Why can’t they just leave him alone? Exercise is hard enough without trying to parse social interactions at the same time.”
And then I realized—they’re doing it because, for them, it works. It’s easier for them to do the workout if someone is telling them, “Keep going! Push it! One more!”
In the same post, I quoted Wittgenstein’s thought experiment of a language where words are only ever used as commands, with a corresponding action, never to refer to an object. Wittgenstein gives the example of a language used for nothing but military orders, and then elaborates on a hypothetical language used strictly for work orders. For instance, a foreman might use the utterance “Slab!” to direct a worker to fetch a slab of rock. I summarized the situation thus:
When I hear “slab”, my mind interprets this by imagining the object. A native speaker of Wittgenstein’s command language, when hearing the utterance “Slab!”, might—merely as the act of interpreting the word—feel a sense of readiness to go fetch a stone slab.
Wittgenstein’s listener might think of the slab itself, but only as a secondary operation in the process of executing the command. Likewise, I might, after thinking of the object, then infer that someone wants me to do something with the slab. But that requires an additional operation: modeling the speaker as an agent and using Gricean implicature to infer their intentions. The word has different cognitive content or implications for me than for the speaker of Wittgenstein’s command language.
Military drills are also often about disintermediating between command and action. Soldiers learn that when you receive an order, you just do the thing. This can lead to much more decisive and coordinated action in otherwise confusing situations – a familiar stimulus produces a regular response.
When someone gives you driving directions by telling you what you’ll observe, and what to do once you make that observation, they’re trying to encode a series of observation-action linkages in you.
This sort of linkage can happen to nonverbal animals too. Operant conditioning of animals gets around most animals’ difficulty understanding spoken instructions by associating a standardized reward indicator with the desired action. Often, if you want to train a comparatively complex behavior like pigeons playing ping-pong, you’ll need to train them one step at a time, gradually chaining the steps together, initially rewarding much simpler behaviors that will eventually compose into the desired complex behavior.
Crucially, the communication is never about the composition itself, just the components to be composed. Indeed, it’s not about anything, from the perspective of the animal being trained. This is similar to an old-fashioned army reliant on drill, in which, during battle, soldiers are told the next action they are to take, not told about the overall structure of their strategy. They are told to, not told about.
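If it helps to make that concrete, here is a toy sketch of shaping and chaining as a procedure. The target sequence, candidate behaviors, and reward rule are all invented for illustration; this is not drawn from any real training protocol.

```python
# Toy model of shaping/chaining in operant conditioning.
# The trainer has a target sequence in mind; the animal never sees it.
# All the animal ever receives is a reward signal after each attempt.

import random

TARGET = ["turn", "peck_left", "peck_right", "peck_ball"]  # hypothetical steps

def attempt(known_steps):
    """The animal emits the behaviors it has already learned, then improvises one more."""
    return known_steps + [random.choice(["turn", "peck_left", "peck_right",
                                         "peck_ball", "flap", "walk"])]

def reward(behavior, criterion):
    """The trainer rewards any attempt that matches the current prefix of TARGET."""
    return behavior[:criterion] == TARGET[:criterion]

def train():
    learned = []
    for criterion in range(1, len(TARGET) + 1):   # raise the bar one step at a time
        while True:
            b = attempt(learned)
            if reward(b, criterion):              # click/treat: the only "message" sent
                learned = TARGET[:criterion]      # the new step becomes part of the routine
                break
    return learned

print(train())  # the full chain, assembled without ever being described
```

The only thing that ever crosses the channel is the reward signal for the most recent attempt; the structure of the finished routine exists only in the trainer’s head.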
Indeterminacy of translation
It’s conceivable that having what appears to be a language in common does not protect against such differences in interpretation. Quine also points to indeterminacy of translation and thus of explicable meaning with his “gavagai” example. As Wikipedia summarizes it:
Indeterminacy of reference refers to the interpretation of words or phrases in isolation, and Quine’s thesis is that no unique interpretation is possible, because a ‘radical interpreter’ has no way of telling which of many possible meanings the speaker has in mind. Quine uses the example of the word “gavagai” uttered by a native speaker of the unknown language Arunta upon seeing a rabbit. A speaker of English could do what seems natural and translate this as “Lo, a rabbit.” But other translations would be compatible with all the evidence he has: “Lo, food”; “Let’s go hunting”; “There will be a storm tonight” (these natives may be superstitious); “Lo, a momentary rabbit-stage”; “Lo, an undetached rabbit-part.” Some of these might become less likely – that is, become more unwieldy hypotheses – in the light of subsequent observation. Other translations can be ruled out only by querying the natives: An affirmative answer to “Is this the same gavagai as that earlier one?” rules out some possible translations. But these questions can only be asked once the linguist has mastered much of the natives’ grammar and abstract vocabulary; that in turn can only be done on the basis of hypotheses derived from simpler, observation-connected bits of language; and those sentences, on their own, admit of multiple interpretations.
Everyone begins life as a tiny immigrant who does not know the local language, and has to make such inferences, or something like them. Thus, many of the difficulties in nailing down exactly what a word is doing in a foreign language have analogues in nailing down exactly what a word is doing for another speaker of one’s own language.
Mimesis, association, and structure
Not only do we all begin life as immigrants, but as immigrants with no native language to which we can analogize our adopted tongue. We learn language through mimesis. For small children, language is perhaps more like Wittgenstein’s command language than my reference-language. It’s a commonplace observation that children learn the utterance “No!” as an expression of will. In The Ways of Naysaying: No, Not, Nothing, and Nonbeing, Eva Brann provides a charming example:
Children acquire some words, some two-word phrases, and then no. […] They say excited no to everything and guilelessly contradict their naysaying in the action: “Do you want some of my jelly sandwich?” “No.” Gets on my lap and takes it away from me. […] It is a documented observation that the particle no occurs very early in children’s speech, sometimes in the second year, quite a while before sentences are negated by not.
First we learn language as an assertion of will, a way to command. Then, later, we learn how to use it to describe structural features of world-models. I strongly suspect that this involves some new, not entirely mimetic cognitive machinery kicking in, something qualitatively different: we start to think in terms of pointer-referent and concept-referent relations, and in terms of logical structures, where “no” is not simply an assertion of negative affect but inverts the meaning of whatever follows. Only after this do recursive clauses, conditionals, and negation of negation make any sense at all.
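Here is a minimal, made-up sketch of the difference, contrasting a purely associative reading of “no” with a compositional one in which “no” is an operator. The vocabulary and affect scores are invented for illustration; nothing here is a serious model of child language.

```python
# Two toy interpreters for utterances like "no no want cookie".
# They contrast associative processing with compositional (structural) processing.

AFFECT = {"no": -1, "want": +1, "cookie": +1}   # invented affect scores

def associative_read(utterance):
    """Treat 'no' as just another word with a (negative) feeling attached."""
    return sum(AFFECT.get(word, 0) for word in utterance.split())

def structural_read(utterance):
    """Treat 'no' as an operator: it inverts the meaning of whatever follows."""
    words = utterance.split()
    if words and words[0] == "no":
        return -structural_read(" ".join(words[1:]))   # negation applies recursively
    return sum(AFFECT.get(word, 0) for word in words)

print(associative_read("no no want cookie"))   # 0: two "no"s just pile up negative affect
print(structural_read("no no want cookie"))    # 2: the second "no" cancels the first
```

On the associative reading, a second “no” just adds more negative affect; on the structural reading, it flips the sign back, which is what makes negation of negation intelligible at all.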
As long as we agree on something like rules of assembly for sentences, mimesis might mask a huge difference in how we think about things. It’s instructive to look at how the current President of the United States uses language. He’s talking to people who aren’t bothering to track the structure of sentences. This makes him sound more “conversational” and, crucially, allows him to emphasize whichever words or phrases he wants, without burying them in a potentially hard-to-parse structure. As Katy Waldman of Slate says:
For some of us, Trump’s language is incendiary garbage. It’s not just that the ideas he wants to communicate are awful but that they come out as Saturnine gibberish or lewd smearing or racist gobbledygook. The man has never met a clause he couldn’t embellish forever and then promptly forget about. He uses adjectives as cudgels. You and I view his word casserole as not just incoherent but representative of the evil at his heart.
But it works. […]
Why? What’s the secret to Trump’s accidental brilliance? A few theories: simple component parts, weaponized unintelligibility, dark innuendo, and power signifiers.
[…] Trump tends to place the most viscerally resonant words at the end of his statements, allowing them to vibrate in our ears. For instance, unfurling his national security vision like a nativist pennant, Trump said:
But, Jimmy, the problem– I mean, look, I’m for it. But look, we have people coming into the country that are looking to do tremendous harm…. Look what happened in Paris. Look what happened in California, with, you know, 14 people dead. Other people are going to die, they’re badly injured, we have a real problem.
Ironically, because Trump relies so heavily on footnotes, false starts, and flights of association, and because his digressions rarely hook back up with the main thought, the emotional terms take on added power. They become rays of clarity in an incoherent verbal miasma. Think about that: If Trump were a more traditionally talented orator, if he just made more sense, the surface meaning of his phrases would likely overshadow the buried connotations of each individual word. As is, to listen to Trump fit language together is to swim in an eddy of confusion punctuated by sharp stabs of dread. Which happens to be exactly the sensation he wants to evoke in order to make us nervous enough to vote for him.
Of course, Waldman is being condescending and wrong here. This is not word salad; it’s high-context communication. But high-context communication isn’t what you use when you think you might persuade someone who doesn’t already agree with you; it’s a more efficient exercise in flag-waving. We don’t see complex structure here because Trump is not trying to communicate the sort of novel content that structural language is required for. He’s just saying “what everyone was already thinking.”
But while Waldman picked a poor example, she’s not wholly wrong. In some cases, the President of the United States seems to be impressionistically alluding to arguments or events his audience has already heard of – but his effective rhetorical use of insulting epithets like “Little Marco,” “Lying Ted Cruz,” and “Crooked Hillary” fits very clearly into this schema. Instead of asking us to absorb facts about his opponents, incorporate them into coherent world-models, and then follow his argument for how we should judge them for their conduct, he used the simple expedient of putting a name next to a descriptor, repeatedly, to cause us to associate the connotations of those words. We weren’t asked to think about anything. These were simply command words, designed to act directly on our feelings about the people he insulted.
We weren’t asked to take his statements as factually accurate. It’s enough that they’re authentic.
This was persuasive to enough voters to make him President of the United States. This is not a straw man. This is real life. This is the world we live in.
You might object that the President of the United States is an unfair example, and that most people of any importance should be expected to be better and clearer thinkers than the leader of the free world. So, let’s consider the case of some middling undergraduates taking an economics course.
Robin Hanson reports that he can get students to mimic an economic way of talking, but not to think like an economist:
After eighteen years of being a professor, I’ve graded many student essays. And while I usually try to teach a deep structure of concepts, what the median student actually learns seems to mostly be a set of low order correlations. They know what words to use, which words tend to go together, which combinations tend to have positive associations, and so on. But if you ask an exam question where the deep structure answer differs from answer you’d guess looking at low order correlations, most students usually give the wrong answer. [...] Let me call styles of talking (or music, etc.) that rely mostly on low order correlations “babbling”. Babbling isn’t meaningless, but to ignorant audiences it often appears to be based on a deeper understanding than is actually the case. When done well, babbling can be entertaining, comforting, titillating, or exciting. It just isn’t usually a good place to learn deep insight.
This is a straightforward description of thinking that is formal but nonconceptual. Hanson’s students have learnt some words, and rules for moving the words around and putting them together, but at no point did they connect the rules for moving around words with regular properties of things that the words point to. The words are the things. When Hanson stops feeding them the right keywords, and asks them questions that require them to understand the underlying structural features of reality that economics is supposed to describe, they come up empty.
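One way to see what “relying mostly on low order correlations” looks like is to build the crudest possible babbler: a bigram model that knows only which word tends to follow which. The tiny corpus below is invented for illustration; this is a sketch of the general idea, not Hanson’s method or his students’ actual output.

```python
# A minimal bigram "babbler": it knows which words tend to follow which,
# and nothing about what any of them refer to. The corpus is invented.

import random
from collections import defaultdict

corpus = ("demand curves slope down . supply curves slope up . "
          "prices adjust until supply equals demand . "
          "incentives matter because prices adjust .").split()

# Learn low-order correlations: word -> words observed to follow it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start="prices", length=12):
    words = [start]
    for _ in range(length):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(random.choice(nxt))
    return " ".join(words)

print(babble())  # fluent-sounding, but no model of supply or demand behind it
```

The output can sound like economics talk because every word-to-word transition is locally plausible; but there is no underlying model to consult when a question departs from the patterns in the training text, which is exactly when Hanson’s students come up empty.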
Of course, it seems unlikely that many people can’t think structurally at all. It seems to me like nearly everyone can think structurally about physical objects in their immediate environment. But it seems like when talking about abstractions, or the future, some people shift to a mental mode where words don’t carry the same weight of reference.
Even for those of us who habitually think structurally, it would be surprising if the mimetic component to language ever totally went away. Plenty of times, I’ve started saying something, only to stop midway through realizing that I’m just repeating something I heard, not reporting on a feature of my model of the world.
Tendencies towards mimesis are hard to resist, and they are part of why I think it’s so important to push back against falsehoods in any space that’s meant to be accreting truth; why even casual, accidental errors should be promptly corrected; why I need an epistemic environment that’s not constantly being polluted by adversarial processes.
And we can’t begin to figure out how to do this until it becomes common knowledge that not everyone is doing the same thing with words, that modeling the world is a legitimate and useful thing to do with them, and that not all communication is designed to be friendly to the people who assume it’s composed of words with meanings.
(Cross-posted on my personal blog.)