Signalling & Simulacra

“We lie all the time, but if everyone knows that we’re lying, is a lie really a lie?”
-- Moral Mazes
A common Bayesian account of communication analyzes signalling games: games in which there is hidden information, and some actions can serve to communicate that information between players. The meaning of a signal is precisely the probabilistic information one can infer from it.
I’ll call this the signalling analysis of meaning. (Apparently, it’s also been called Gricean communication.)
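To make this concrete, here is a minimal sketch of the signalling analysis in a toy two-state Lewis signalling game (the states, signals, and sender policy below are illustrative assumptions of mine, not drawn from any particular source): the “meaning” of a signal is just the posterior over hidden states that it licenses.

```python
# Toy Lewis signalling game. Nature picks a hidden state; the sender emits a
# signal according to some policy; the "meaning" of a signal, on the
# signalling analysis, is just the posterior over states it licenses.

states = ["square", "triangle"]
prior = {"square": 0.5, "triangle": 0.5}

# Hypothetical sender policy: P(signal | state).
sender = {
    "square":   {"A": 0.9, "B": 0.1},
    "triangle": {"A": 0.1, "B": 0.9},
}

def meaning(signal):
    """P(state | signal): the probabilistic information the signal carries."""
    joint = {s: prior[s] * sender[s][signal] for s in states}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}

print(meaning("A"))  # {'square': 0.9, 'triangle': 0.1} -- "A" mostly means square
print(meaning("B"))  # {'square': 0.1, 'triangle': 0.9} -- "B" mostly means triangle
```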
In Maybe Lying Can’t Exist, Zack Davis points out that the signalling analysis has some counterintuitive features. In particular, it’s not clear how to define “lying”!
Either the agents’ interests are sufficiently aligned, in which case they find a signalling system (an equilibrium of the game in which symbols bear a useful relationship with hidden states, so that information is communicated), or their interests are misaligned, in which case no such equilibrium can develop.
We can have partially aligned interests, in which case a partial signalling system develops (symbols carry some information, but not as much as you might want). Zack gives the example of predatory fireflies who imitate a mating signal. The mating signal still carries some information, but it now signals danger as well as a mating opportunity, making the world more difficult to navigate.
But the signalling analysis can’t call the predator a liar, because the “meaning” of the signal includes the possibility of danger.
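A rough sketch of the firefly situation under this analysis (all numbers invented for illustration): once predators imitate the flash, the posterior the flash licenses already includes danger, so by the probabilistic standard the mimic never says anything false.

```python
# Firefly mimicry, on the signalling analysis. Both prospective mates and
# predators emit the same flash; the "meaning" of the flash is the posterior
# it licenses, which already includes the possibility of danger.

prior = {"mate": 0.30, "predator": 0.05, "null": 0.65}    # hypothetical priors
flash_prob = {"mate": 0.9, "predator": 0.9, "null": 0.0}  # P(flash | type)

joint = {t: prior[t] * flash_prob[t] for t in prior}
total = sum(joint.values())
posterior = {t: p / total for t, p in joint.items()}

print(posterior)
# {'mate': ~0.86, 'predator': ~0.14, 'null': 0.0}
# The flash "means" mostly-mate-but-maybe-predator. The predator's flash is
# perfectly consistent with that meaning, so it never counts as a lie.
```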
Zack concludes: Deception is an ontologically parasitic concept. It requires a pre-existing notion of truthfulness. One possibility is given by Skyrms and Barrett: we consider only the subgame where sender and receiver have common goals. This gives us our standard of truth by which to judge lies.
I conclude: The suggested solution seems OK to me, but maybe we want to throw out the signalling analysis of meaning altogether. Maybe words don’t just mean what they probabilistically imply. Intuitively, there is a distinction between connotation and denotation. Prefacing something with “literally” is more than just an intensifier.
The signalling analysis of meaning seems to match up rather nicely with simulacrum level 3, where the idea that words have meaning has been lost, and everyone is vibing.
Level 3 and Signalling
Out of the several Simulacra definitions, my understanding mainly comes from Simulacra Levels and their Interactions. Despite the risk of writing yet-another-attempt-to-explain-simulacra-levels, here’s a quick summary of my understanding:
Level 1: Truth-telling. An honest attempt to communicate object-level facts.
Level 2: Lying. For this to be meaningful and useful, there must be an equilibrium where truth-telling is common. Liars exploit that equilibrium to their advantage.
Level 3: You say X because you want to sound like the cool kids. Words have lost their inherent meanings, perhaps due to the prevalence of level 2 strategies. However, words still convey political information. Like level 1, level 3 has a sort of honesty to it: a level 3 strategy is conveying true things, but without regard for the literal content of words. We could call this level Humbug (more or less). It’s also been called “signalling”, but that begs the question of the present essay.
Level 4: Bullshit. Even the indirect meanings of words are corrupted, as dishonest actors say whatever is most advantageous in the moment. This level is parasitic on level 3 in the same way that level 2 is parasitic on level 1.
Here are some features of the signalling analysis of meaning:
There is no distinction between denotation and connotation.
An assertion’s meaning is just the probabilistic conclusions you can reach from it.
Map/territory fit is just how well those probabilistic conclusions match reality.
If a statement like “guns don’t kill people, people kill people” lets you reliably infer the political affiliation of the speaker, then it has high map/territory fit in that sense.
If a particular lie is common, this fact just gets rolled into the “meaning” of the utterance. If “We should get together more often” is often used as part of a polite goodbye, then it means “I want to indicate that I like you as a person and leave things open without making any commitments” (or something like that).
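For instance, here is a sketch of how the polite-goodbye reading gets rolled in (with made-up numbers): once most utterances of the phrase are mere politeness, the posterior, and hence the “meaning,” simply reflects that mixture.

```python
# If a "lie" is common enough, it just becomes part of the meaning.
# Hypothetical numbers for "We should get together more often".

prior_intent = {"genuinely wants to meet": 0.2, "polite goodbye": 0.8}
says_phrase = {"genuinely wants to meet": 0.9, "polite goodbye": 0.7}  # P(utterance | intent)

joint = {i: prior_intent[i] * says_phrase[i] for i in prior_intent}
total = sum(joint.values())
posterior = {i: round(p / total, 2) for i, p in joint.items()}

print(posterior)
# {'genuinely wants to meet': 0.24, 'polite goodbye': 0.76}
# On the signalling analysis, this mixture IS what the phrase means; there is
# no further fact of the matter about whether the speaker "really" lied.
```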
This sounds an awful lot like level-3 thinking to me.
I’m not saying that signalling theory can only analyze level-3 phenomena! On the contrary, I still think signalling theory includes honest communication as a special case. I still think it’s a theory of what information can be conveyed through communication, when incentives are not necessarily aligned. After all, signalling theory can examine cases of perfectly aligned incentives, where there’s no reason to lie or manipulate.
What I don’t think is that signalling theory captures everything that’s going on with truthfulness and deceit.
Signalling theory now strikes me as a level 3 understanding of language. It can watch levels 1 and 2 and come to some understanding of what’s going on. It can even participate. It just doesn’t understand the difference between levels 1 and 2. It doesn’t see that words have meanings beyond their associations.
This is the type of thinking that can’t tell the difference between “a implies b” and “a, and also b”—because people almost always endorse both “a” and “b” when they say “a implies b”.
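To spell that out with a toy comparison (the 95% figure below is invented for illustration): logically the two claims come apart, but as probabilistic evidence about a speaker’s beliefs they can be nearly indistinguishable.

```python
from itertools import product

# Logically, "a implies b" and "a, and also b" are different claims: they are
# true on different sets of truth assignments.
assignments = list(product([True, False], repeat=2))
implies_true = [(a, b) for a, b in assignments if (not a) or b]
conj_true    = [(a, b) for a, b in assignments if a and b]

print(implies_true)  # [(True, True), (False, True), (False, False)]
print(conj_true)     # [(True, True)]

# But suppose (hypothetically) that 95% of people who assert "a implies b"
# also believe a and b. Then, as a signal about the speaker's beliefs, the two
# assertions carry nearly the same information, and a purely associative
# account of meaning has nothing left with which to tell them apart.
```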
This is the type of thinking where disagreement tends to be regarded as a social attack, because disagreement is associated with social attack.
This is the type of thinking where we can’t ever have a phrase meaning “honestly” or “literally” or “no really, I’m not bullshitting you on this one”, because if such a phrase existed then it would immediately be co-opted by everyone else as a mere intensifier.
The Skyrms & Barrett Proposal
What about the proposal that Zack Davis mentioned:
Brian Skyrms and Jeffrey A. Barrett have an explanation in light of the observation that our sender–receiver framework is a sequential game: first, the sender makes an observation (or equivalently, Nature chooses the type of sender—mate, predator, or null in the story about fireflies), then the sender chooses a signal, then the receiver chooses an action. We can separate out the propositional content of signals from their informational content by taking the propositional meaning to be defined in the subgame where the sender and receiver have a common interest—the branches of the game tree where the players are trying to communicate.
This is the sort of proposal I’m looking for. It’s promising. But I don’t think it’s quite right.
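To see how this would cash out, here is a sketch using the toy firefly numbers from earlier (still made-up figures): the propositional content is the posterior computed in the common-interest subgame, with the predator type excluded, while the informational content is the posterior in the full game.

```python
# Skyrms & Barrett, sketched on the toy firefly numbers used above.
prior = {"mate": 0.30, "predator": 0.05, "null": 0.65}
flash_prob = {"mate": 0.9, "predator": 0.9, "null": 0.0}

def posterior(types):
    """P(type | flash), restricted to the given sender types."""
    joint = {t: prior[t] * flash_prob[t] for t in types}
    total = sum(joint.values())
    return {t: p / total for t, p in joint.items()}

informational = posterior(["mate", "predator", "null"])  # full game
propositional = posterior(["mate", "null"])              # common-interest subgame

print(informational)  # the flash carries info: mostly mate, maybe predator
print(propositional)  # the flash *asserts*: mate -- the standard of truth
# Relative to the propositional content, the predator's flash is now false,
# which is what lets us call it a lie.
```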
First of all, it might be difficult to define the hypothetical scenario in which all interests are aligned, so that communication is honest. Taking an extreme example, how would we then assign meaning to statements such as “our interests are not aligned”?
More importantly, though, it still doesn’t make sense of the denotation/connotation distinction. Even in cases where interests align, we can still see all sorts of probabilistic implications of language, such as the implicatures captured by Grice’s maxims. If someone says “frogs can’t fly” in the middle of a conversation, we assume the remark is relevant to the conversation, and form all kinds of tacit conclusions based on this. To be more concrete, here’s an example conversation:
Alice: “I just don’t understand why I don’t see Cedrick any more.”
Bob: “He’s married now.”
We infer from this that the marriage creates some kind of obstacle. Perhaps Cedrick is too busy to come over. Or Bob is implying that it would be inappropriate for Cedrick to frequently visit Alice, a single woman. None of this is literally said, but a cloud of conversational implicature surrounds the literal text. The signalling analysis can’t distinguish this cloud from the literal meaning.
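Here is a toy model of that inference (numbers invented for illustration): if we add the Gricean assumption that Bob’s remark is relevant, i.e. that he would only bring up the marriage if it explained Cedrick’s absence, then conditioning on the utterance shifts most of the probability onto marriage-related obstacles, none of which the sentence literally asserts.

```python
# Hypotheses about why Cedrick no longer visits (hypothetical priors).
prior = {
    "too busy because married": 0.20,
    "visits now inappropriate": 0.10,
    "moved away": 0.30,
    "no particular reason": 0.40,
}

# P(Bob answers "He's married now" | hypothesis), assuming Bob obeys the
# maxim of relevance: he offers the marriage only if it is the explanation.
says_married = {
    "too busy because married": 0.8,
    "visits now inappropriate": 0.8,
    "moved away": 0.05,
    "no particular reason": 0.05,
}

joint = {h: prior[h] * says_married[h] for h in prior}
total = sum(joint.values())
posterior = {h: round(p / total, 2) for h, p in joint.items()}

print(posterior)
# The two marriage-as-obstacle readings now hold ~87% of the probability,
# even though "He's married now" literally asserts nothing about obstacles.
# The signalling analysis treats this whole cloud as part of the meaning.
```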
The Challenge for Bayesians
Zack’s post (Maybe Lying Can’t Exist, which I opened with) feels to me like one of the biggest challenges to classical Bayesian thinking that’s appeared on LessWrong in recent months. Something like the signalling theory of meaning has underpinned discussions about language among rationalists since before the Sequences.
As with logical uncertainty, I see this as a challenge in the integration of logic and probability. In some sense, the signalling theory only allows for reasoning by association rather than structured logical reasoning, because the meaning of any particular thing is just its probabilistic associations.
Worked examples in the signalling theory of meaning (such as Alice and Bob communicating about colored shapes) tend to assume that the agents have a pre-existing meaningful ontology for thinking about the world (“square”, “triangle” etc). Where do these crisp ontologies come from, if (under the signalling theory of meaning) symbols only have probabilistic meanings?
How can we avoid begging the question like that? Where does meaning come from? What theory of meaning can account for terms with definite definitions, strict logical relationships, and such, all alongside probabilistic implications?
To hint at my opinion, I think it relates to learning normativity.