That is, actually, what I assumed you meant, so consider my comments to stand unchanged.
I’m confused because it does seem like you think everyone should participate, i.e. you endorse people lying about their state if asked about it. (I didn’t mean “everyone should go out of their way to participate” but rather “everyone should participate at all” e.g. when pressed to by someone else)
Well, all I can tell you is that if we’re casual acquaintances, or coworkers, or similar, and I greet you with “How are you?” and you respond by talking about things that have been going on in your life lately, that will make me quite a bit less likely to want to interact with you at all henceforth. My sense is that this is a very common reaction.
I’m actually fine with this as a filter. Anyway, if someone’s code of honesty is unable to actually resist the pressure of “if I were honest then it would be slightly awkward and some people would talk to me less” then it is doing very little work. I don’t see why anyone would want such a code of “honesty” except to lie about how honest they are.
You wouldn’t base your behavior on a sociological just-so-story which you haven’t actually verified… would you?
The just-so story seems true based on my social models, and I would bet on it if it were possible. That’s enough to base my behavior on.
My main argument here is that (a) saying you’re doing well when you’re not doing well is literally false and (b) in context it’s part of a pattern that optimizes against clarity, rather than something like acting that is clearly tagged as acting. I don’t actually see a counterargument to (b) in the text of your comment.
I’m confused because it does seem like you think everyone should participate, i.e. you endorse people lying about their state if asked about it. (I didn’t mean “everyone should go out of their way to participate” but rather “everyone should participate at all” e.g. when pressed to by someone else)
Ok, back up. What exactly are you talking about, here? “Participate” in what? What am I “participating” in when I respond to “How are you?” with something other than a true accounting of my current state?
Originally, you asked about “participation” in
… a ritual in which one person acts like they care about the other’s emotional state by asking about it, the other one lies about it in order to maintain the narrative that Things Are Fine (even in cases where things aren’t fine and that would be important information to someone who actually cared about them), then sometimes they switch roles
But—as I alluded to in my response—there is a clear difference between the two halves of this “ritual”! Are you treating them as a single unit? If so—why? If not, then I am hard-pressed to parse your remarks. Please clarify.
… if someone’s code of honesty is unable to actually resist the pressure of “if I were honest then it would be slightly awkward and some people would talk to me less” then it is doing very little work. I don’t see why anyone would want such a code of “honesty” except to lie about how honest they are.
You seem to assume, here, that any “code of honesty” must use the same concept of “honesty” as you do. You may reasonably disagree with my understanding of what constitutes honesty, but it is tendentious to suggest that, in fact, I am using the same concept of honesty as you are, except that I am being dishonest about it. (In other words, you speak as if I share your values but am failing to live up to them, while hypocritically claiming otherwise. The obvious alternative account—one which I, in fact, suggested upthread—is that my values simply differ from yours.)
The just-so story seems true based on my social models, and I would bet on it if it were possible.
This is nothing more than a circular restatement that you believe said just-so-story to be true. We already know that you believe this.
(a) saying you’re doing well when you’re not doing well is literally false
Perhaps, but all this means is that the concept of “literal falsehood” which you are using is inadequate to the task of modeling human communication. (Once again, one person’s modus ponens…)
The problem with such naive accounts of “truth” in communication is that they do not work—in a very real and precise sense—to predict the epistemic state of actual human beings after certain communicative acts have been undertaken. That should signal to you that something is wrong with your model.
(b) in context it’s part of a pattern that optimizes against clarity, rather than something like acting that is clearly tagged as acting
You’re going to have to unpack “clarity” a good bit—as well as explaining, at least in brief, why you consider it to be a desirable thing—before I can comment on this (beyond what I’ve already said, which I do not see that you’ve acknowledged).
But—as I alluded to in my response—there is a clear difference between the two halves of this “ritual”! Are you treating them as a single unit?
If you should participate in the ritual as the B role, then you should participate in the ritual at all. This seems like a straightforward logical consequence? Like, “if you should play soccer as defense, then you should play soccer.”
You seem to assume, here, that any “code of honesty” must use the same concept of “honesty” as you do.
What does “honesty” mean to you, and what is it for? Does honesty ever require doing things that are slightly awkward and cause fewer people to want to talk to you?
This is nothing more than a circular restatement that you believe said just-so-story to be true. We already know that you believe this.
It seems like you were implying that there was something illegitimate about me acting based on my social models? If you don’t think this then there is no conflict here.
The problem with such naive accounts of “truth” in communication is that they do not work—in a very real and precise sense—to predict the epistemic state of actual human beings after certain communicative acts have been undertaken.
I agree that you should not update on someone saying “X” by proceeding to condition your beliefs on “X.” You at least have to take pragmatics and deception into account. I think this is a case of deception in addition to pragmatics, rather than pragmatics alone. You would not expect people saying “fine” if they were not fine from a model like the ones on this page where agents are trying to inform each other while taking into account the inferences others make; you would get it if there were pressure to deceive.
You’re going to have to unpack “clarity” a good bit—as well as explaining, at least in brief, why you consider it to be a desirable thing—before I can comment on this (beyond what I’ve already said, which I do not see that you’ve acknowledged).
“Clarity” means something like “information is being processed in a way that is obvious to all parties.” For example, people are able to ask one another what state they are in, and either receive a true answer or a refusal to provide this information. When things aren’t fine, this quickly becomes obvious to everyone. And so on.
This is often desirable for a bunch of reasons. For example, if I can track what state my friends are in, I can think about what would improve their situations. If I know things aren’t fine more generally, then I can investigate the crisis and think about what to do about it.
This is not always desirable. For example, if someone were doing drugs and the police were questioning them about it, then it would probably be correct for them to optimize for unclarity by lying or misdirecting (assuming they can get away with it). But optimizing against clarity is pretty much the same thing as being deceptive, and pretending that this is compatible with acting unusually honestly is meta-dishonest. (Of course, someone can act unusually honestly most of the time while being deceptive at other times)
In general, clarity is good for enabling positive-sum interactions; its value is highly situational in situations with a substantial adversarial/zero-sum element.
For saying “fine” when greeted with “how are you?” or “how’s it going?” to be a case of deception in addition to pragmatics, it would need to be the case that the person saying “fine” expects to be understood as saying that their life is in fact going well.
I don’t think people generally expect that.
(Though there’s something kinda a bit like that that people maybe do expect. If my life is really terrible at the moment then maybe my desire for sympathy might outweigh my respect for standard conventions and make me answer “pretty bad, actually” instead of “fine”; so when I don’t do that, I am giving some indication that my life isn’t going toooo badly; so if it actually is but I still say “fine”, maybe I’m being deceptive. But that’s only the case in so far as, in fact, if my life were going badly enough then I would be likely not to do that.)
Having this convention isn’t (so it seems to me) “optimizing against clarity” in any strong sense. That is: sure, there are other possible conventions that would yield greater clarity, but it’s not so clear that they’re better that it makes sense to say that choosing this convention instead is “optimizing against clarity”. (For comparison: imagine that someone proposes a different convention: whenever two people meet, they exchange bank balances and recent medical histories. This would indeed bring greater clarity; I don’t think most of us would want that clarity; but it seems unfair to say that we’re “optimizing against clarity” if we choose not to have it.)
For saying “fine” when greeted with “how are you?” or “how’s it going?” to be a case of deception in addition to pragmatics, it would need to be the case that the person saying “fine” expects to be understood as saying that their life is in fact going well.
Almost no one expects marketers to actually tell the truth about their products, and yet it seems pretty clear that marketing is deceptive. I think this has to do with common knowledge: even though nearly everyone knows marketing is deceptive, this isn’t common knowledge to the point where an ad could contain the phrase “I am lying to you right now” without it being jarring.
Having this convention isn’t (so it seems to me) “optimizing against clarity” in any strong sense. That is: sure, there are other possible conventions that would yield greater clarity, but it’s not so clear that they’re better that it makes sense to say that choosing this convention instead is “optimizing against clarity”
The convention is optimized for preventing people from giving information about their state that would break the narrative that Things Are Fine. People’s mental processes during the conversation will actually be optimizing against breaking this narrative even in cases where it is false. See Ben’s comment here.
You may be right about marketing and common knowledge; if so, then I suggest that the standard “how are you? fine” script is common knowledge; everyone knows that a “fine” answer can be, and likely will be, given even if the person in question is not doing well at all.
I agree that when executing the how-are-you-fine script people are ipso facto discouraged from giving information about their state that contradicts Things Are Fine. That’s because when executing that script, no one is actually giving any information about their state at all. If you actually want to find out how someone’s life is going, that isn’t how you do it; you ask them some less stereotyped question and, if they trust you sufficiently, they will answer it.
Again, if the how-are-you-fine script were taken as a serious attempt to extract (on one side) and provide (on the other) information about how someone’s life is going, then for sure it would be deceptive. But that’s not how anyone generally uses it, and I don’t see a particular reason why it should be.
The convention is optimized for preventing people from giving information about their state that would break the narrative that Things Are Fine.
You have made this sort of assertion several times now; I’d like to see some elaboration on it. What sorts of social contexts do you have in mind, when you say such things? On what basis do you make this sort of claim?
Person A and B are acquaintances. A asks B “how are you?” B is having serious problems at work, will probably be fired, and face serious economic consequences. B says “fine.” Why did B say “fine” when B was in fact not fine?
Suppose B said “I’m going to lose my job and be really poor for the near future.” Prediction: this will be awkward. Why would this be awkward?
Hypothesis: it is awkward because it contradicts the idea that things are fine. While this contradiction exists in the conversation, A and B will feel tension. Tension can be resolved in a few ways. A could say “oh don’t worry, you can get another job,” contradicting the idea that there is a problem in an unhelpful way that nevertheless restores the narrative that things are fine. A could also say “wow that really sucks, let me know if you need help” agreeing that things aren’t fine and resolving the tension by offering assistance. But A might not want to actually offer assistance in some cases. A could also just say “wow that sucks;” this does not resolve the tension as much as in the previous case, but it does at least mean that A and B are currently agreeing that things aren’t fine, and A has sympathy with B, which ameliorates the tension.
Compare: rising action in a story, which produces tension that must be resolved somehow.
The account you present is rather abstract, and seems to be based on a sort of “narrative” view of social interactions. I am not sure I understand this view well enough to criticize it coherently; I also am not sure what motivates it. (It is also not obvious to me what could falsify the hypothesis in question, nor what it predicts, etc. Certainly I would appreciate a link or two to a more in-depth discussion of this sort of view.)
In any case, there are some quite obvious alternate hypotheses, some of which have been mentioned elsethread, viz.:
All of these alternate hypotheses (and similar ones) make use only of simple, straightforward interests and desires of individuals, and have no need to bring in abstract “narrative” concepts.
“Clarity” means something like “information is being processed in a way that is obvious to all parties.” For example, people are able to ask one another what state they are in, and either receive a true answer or a refusal to provide this information. When things aren’t fine, this quickly becomes obvious to everyone. And so on.
…
In general, clarity is good for enabling positive-sum interactions; its value is highly situational in situations with a substantial adversarial/zero-sum element.
I see, thanks.
This is not how I would normally use the word “clarity” (which is why I said “it does no such thing” in response to your claim that the norm in question “optimizes against clarity”). That having been said, your usage is not terribly unreasonable, so I will not quibble with it. So, taking “clarity” to mean what you described…
… I consider this sort of “clarity” to not be clearly desirable, even totally ignoring the sorts of “adversarial/zero-sum” situations you allude to. (In fact, it seems to me that a naive, unreflective dedication to “clarity” of this sort is particularly harmful in many categories of potentially-positive-sum interactions!)
This is a topic which has been much-discussed in the rationalist meme-sphere (and beyond, of course!) over the last decade; I confess to being surprised that you appear to be unaware of what’s been said on the subject. (Or are you aware of it, but merely disagree with it all? But then it seems to me that you would, at least, not have been at all surprised by any of my comments…) I do not, at the moment, have the time to hunt for links to relevant writings, but I will try to make some time in the near future.
Given the clarification in this subthread, let me now go ahead and respond to this bit:
You would not expect people saying “fine” if they were not fine from a model like the ones on this page [i.e., an implicature / Gricean-maxim model. —SA] where agents are trying to inform each other while taking into account the inferences others make
Indeed, you certainly would not; the problem, however, lies in the assumption that someone responding “Fine” to “How are you?” is trying to inform the asker, or that the asker expects to be informed when asking that question.
In any case, this is a point we’ve covered elsethread.
If you should participate in the ritual as the B role, then you should participate in the ritual at all. This seems like a straightforward logical consequence? Like, “if you should play soccer as defense, then you should play soccer.”
This is very bizarre logic, to be frank. The entire conception of such social interactions as coherent “rituals” that both the asker and the asked are willing “participants” in, qua ritual, is quite strange, and does not accord with anything I said, or any of my understanding of the world.
What does “honesty” mean to you, and what is it for?
That question is the genesis of quite a long discussion. I hardly think this is the time and place for it.
Does honesty ever require doing things that are slightly awkward and cause fewer people to want to talk to you?
That is certainly not out of the question.
It seems like you were implying that there was something illegitimate about me acting based on my social models?
I don’t know about “illegitimate”, but basing your social models on unverified just-so-stories is epistemically unwise.
I think this is a case of deception in addition to pragmatics, rather than pragmatics alone.
What do you mean by “deception”, here? If a casual acquaintance greets me with “How are you?” and I respond with “Fine, you?”—in a case when, in fact, a monster truck has just run over my favorite elephant—do you consider this an instance of “deception”? If so, do you view “deception” as undesirable (in some general sense) or harmful (to the said casual acquaintance)?
You would not expect people saying “fine” if they were not fine from a model like the ones on this page where agents are trying to inform each other while taking into account the inferences others make; you would get it if there were pressure to deceive.
That page seems to be some sort of highly technical discussion, involving code in a language I’ve never heard of. Would you care to summarize its core ideas in plain language, or link to such a summary elsewhere? Failing that, I have no comment in response to this.
(rest of your comment addressed in a separate response)
Re: “ritual,” it seems like “social script” might have closer to the right connotations here.
My main point here is that, if you are trying to build a reputation as being unusually honest, yet you lie because otherwise it would be slightly awkward and some people would talk to you less, then your reputation doesn’t actually count for anything. If someone won’t push against slight awkwardness to tell the truth about something only a little important, why would I expect them to push against a lot of awkwardness to tell the truth about something that is very important?
By definitions of “honesty” commonly used in American culture, being unusually honest usually requires doing things that are awkward and might cause people to talk to you less. For example, in the myth about George Washington chopping down a cherry tree, it is in fact awkward for him to admit that he chopped down a cherry tree, and he could face social consequences as a result. But he admits it anyway, because he is honest. (Ironically this didn’t actually happen, but this isn’t that important if we are trying to figure out what concepts of honesty are in common usage)
I would count saying “fine” when you are not fine to be a form of deception, one which is usually slightly harmful to both participants, but only slightly. For someone who is not attempting to be unusually honest as a matter of policy, this is not actually a big deal. It might be worth saying “fine” to minimize tension.
But the situation is very different for someone attempting to be unusually honest as a matter of policy. This type of person is trying to tell the truth almost all the time, even when it is hard and goes against their local incentives. There may be some times when they should lie, but it should have to be a really good reason, not “it would be slightly awkward if I didn’t lie.” If someone is going to lie whenever the cost-benefit analysis looks at least as favorable to lying as it does in the “saying you are fine when you are not fine” case, then they’re going to lie quite a lot about pretty important things, whenever telling the truth about these things would be comparably awkward.
Would you care to summarize its core ideas in plain language, or link to such a summary elsewhere?
Sure. Suppose I have seen a bunch of apples, which may be red or green. I say “some of the apples are red.” Is it correct for you to infer that not all the apples are red? Yes, probably; if I had seen that all the apples were red, I would have instead said “all of the apples are red.” Even if “some of the apples are red” is technically correct if all the apples are red, I would know that you would make less-correct inferences about the proportion of apples that are red if I said “some of the apples are red” instead of “all of the apples are red.” Basically, the idea is that the listener models the speaker as trying to inform the listener, and the speaker to model the listener as making these inferences.
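This kind of recursive speaker/listener reasoning can be sketched as a tiny Rational Speech Acts model. (This is a hypothetical toy in Python, not the code from the linked page; the state space of 0–3 red apples, the three-utterance vocabulary, and the uniform prior are all illustrative assumptions.)

```python
# Toy Rational Speech Acts (RSA) model of the "some/all apples" example.
# States: how many of 3 apples are red. Utterances: "none", "some", "all".

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

STATES = [0, 1, 2, 3]  # number of red apples out of 3 (illustrative)
UTTERANCES = {
    "none": lambda n: n == 0,
    "some": lambda n: n >= 1,
    "all":  lambda n: n == 3,
}

def literal_listener(utt):
    # Conditions a uniform prior on the utterance being literally true.
    return normalize({n: 1.0 if UTTERANCES[utt](n) else 0.0 for n in STATES})

def speaker(state):
    # Among the literally true utterances, prefers those that would make
    # a literal listener assign more probability to the true state.
    scores = {}
    for utt in UTTERANCES:
        prob = literal_listener(utt).get(state, 0.0)
        if prob > 0:
            scores[utt] = prob
    return normalize(scores)

def pragmatic_listener(utt):
    # Infers the state by reasoning about which states would make the
    # speaker likely to choose this utterance.
    return normalize({n: speaker(n).get(utt, 0.0) for n in STATES})

posterior = pragmatic_listener("some")
# Hearing "some" shifts weight toward 1 or 2 red apples; "all 3 red"
# becomes unlikely, even though "some" is technically true in that case.
```

The point of the sketch: the implicature ("not all") falls out of agents modeling each other as informative, with no deception anywhere in the model, which is why deviations from it (like "fine" when things aren’t fine) call for a different explanation.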
Suppose I have seen a bunch of apples, which may be red or green. I say “some of the apples are red.” Is it correct for you to infer that all the apples are red? Yes, probably …
I assume you mean, infer that not all the apples are red?
In any case, thanks for the summary. It sounds like it’s simply the Gricean maxims / the concept of implicature, which is certainly something I’m familiar with.
Re: “ritual,” it seems like “social script” might have closer to the right connotations here.
I don’t really know that this makes your comments about it any more reasonable-sounding, but in any case this sub-point seems like a tangent, so we can let it go, if you like.
My main point here is that, if you are trying to build a reputation as being unusually honest, yet you lie because otherwise it would be slightly awkward and some people would talk to you less, then your reputation doesn’t actually count for anything. If someone won’t push against slight awkwardness to tell the truth about something only a little important, why would I expect them to push against a lot of awkwardness to tell the truth about something that is very important?
I just don’t think that this identification of “honesty” with “parsing spoken sentences in the most naively-literal possible way and then responding as if the intended meaning of your interlocutor’s utterance coincided with this literal reading” is very sensible. If someone did this, I wouldn’t think “boy, that guy/gal sure is unusually honest!”. I’d think “there goes a person who has, sadly, acquired a most inaccurate understanding, not to mention a most unproductive view, of social interactions”.
Suppose you are asked a question, where all of the following are true:
Your interlocutor neither expects nor desires for you to take the question literally and answer it truthfully.
You know that you are not expected to, and you have no desire to, take the question literally and answer it truthfully.
Your interlocutor would be harmed by you taking the question literally and answering it truthfully.
You would be harmed by you taking the question literally and answering it truthfully.
Do you maintain that, in such a case, “honesty” nevertheless demands that you do take the question literally and answer it truthfully?
If so, then this “honesty” of yours seems to be a supremely undesirable trait to have, and for one’s friends and acquaintances to have. (I maintain the scare quotes, because I would certainly not assent to any definition of “honesty” which had the aforesaid property—and, importantly, I do not think that “honesty” of this type is more predictive of certain actually desirable and prosocial behaviors, of the type that most people would expect from a person who had the as-generally-understood virtue of honesty.)
I would count saying “fine” when you are not fine to be a form of deception, one which is usually slightly harmful to both participants, but only slightly.
I would be interested to hear why you think this. It seems incorrect to me.
But the situation is very different for someone attempting to be unusually honest as a matter of policy. This type of person is trying to tell the truth almost all the time, even when it is hard and goes against their local incentives. There may be some times when they should lie, but it should have to be a really good reason, not “it would be slightly awkward if I didn’t lie.” If someone is going to lie whenever the cost-benefit analysis looks at least as favorable to lying as it does in the “saying you are fine when you are not fine” case, then they’re going to lie quite a lot about pretty important things, whenever telling the truth about these things would be comparably awkward.
Once again, you are relying on a very unrealistic characterization of what is taking place when one person says “How are you?” and another answers “Fine”. However, we can let that slide for now (in any case, I already addressed it, earlier in this comment), and instead deal with the substantive claim that someone who does not respond to “How are you?” with a report of their actual state, is “going to lie quite a lot about pretty important things …”.
I firmly dispute this claim. And given how strong of a claim it is, I should like to see it justified quite convincingly.
If someone did this, I wouldn’t think “boy, that guy/gal sure is unusually honest!”. I’d think “there goes a person who has, sadly, acquired a most inaccurate understanding, not to mention a most unproductive view, of social interactions”.
These are not necessarily mutually exclusive explanations. Sometimes the point of a social transaction is to maintain some particular social fiction.
I just don’t think that this identification of “honesty” with “parsing spoken sentences in the most naively-literal possible way and then responding as if the intended meaning of your interlocutor’s utterance coincided with this literal reading” is very sensible.
I don’t make this identification, given that I think honesty is compatible with pragmatics and metaphor, both of which are attempts to communicate that go beyond this. I would identify honesty more with “trying to communicate in a way that causes the other person to have accurate beliefs, with a significant preference for saying literally true things by default.”
Do you maintain that, in such a case, “honesty” nevertheless demands that you do take the question literally and answer it truthfully?
Depends on the situation. If it’s actually common knowledge that the things I’m saying are not intended to be true statements (e.g. I’m participating in a skit) then of course not. Otherwise it seems at least a little dishonest. Being dishonest is not always bad, but someone trying to be unusually honest should avoid being dishonest for frivolous reasons. (Obviously, not everyone should try to be unusually honest in all contexts)
If you’re pretty often in situations where lying is advantageous, then maybe lying a lot is the right move. But if you are doing this then it would be meta-dishonest to say that you are trying to be unusually honest.
I would be interested to hear why you think this. It seems incorrect to me.
I think saying false things routinely to some extent trains people to stop telling truth from falsity as a matter of habit. I don’t have a strong case for this but it seems true according to my experience.
the substantive claim that someone who does not respond to “How are you?” with a report of their actual state, is “going to lie quite a lot about pretty important things …”.
This is a pretty severe misquote. Read what I wrote.
Most of your comment seems to indicate that we’ve more or less reached the end of how much we can productively untangle our disagreement (at least, without full-length, top-level posts from one or both of us), but I would like to resolve this bit:
the substantive claim that someone who does not respond to “How are you?” with a report of their actual state, is “going to lie quite a lot about pretty important things …”.
This is a pretty severe misquote. Read what I wrote.
Well, first of all, to the extent that it’s a quote (which only part of it is), it’s not a misquote, per se, because you really did write those words, in that order. I assume what you meant is that it is a misrepresentation/mischaracterization of what you said and meant—which I am entirely willing to accept! (It would simply mean that I misunderstood what you were getting at; that is not hard at all to believe.)
So, could you explain in what way my attempted paraphrase/summary mischaracterized your point? I confess it does not seem to me to be a misrepresentation, except insofar as it brackets assumptions which, to me, seem both (a) flawed and unwarranted, and (b) not critical to the claim, per se (for all that they may be necessary to justify or support the claim).
Agreed that further engagement here on the disagreement is not that productive. Here’s what I said:
If someone is going to lie whenever the cost-benefit analysis looks at least as favorable to lying as it does in the “saying you are fine when you are not fine” case, then they’re going to lie quite a lot about pretty important things, whenever telling the truth about these things would be comparably awkward.
I am not saying that, if someone says they are fine when they are not fine, then necessarily they will lie about important things. They could be making an unprincipled exception. I am instead saying that, if they lied whenever the cost-benefit analysis looks at least as favorable to lying as in the “saying you are fine when you are not fine” case, then they’re likely going to end up lying about some pretty important things that are really awkward to talk about.
Yes, this is correct. The exception is entirely principled (really, I’d say it’s not even an exception, in the sense that the situation is not within the category of those to which the rule applies in the first place).
I see. It seems those assumptions I mentioned are ones which you consider much more important to your point than I consider them to be, which, I suppose, is not terribly surprising. (I do still think they are unwarranted.)
I will have to consider turning what I’ve been trying to say here into a top-level post (which may be no more than a list of links and blurbs; as I said, there has been a good deal of discussion about this stuff already).
What does “honesty” mean to you, and what is it for? Does honesty ever require doing things that are slightly awkward and cause fewer people to want to talk to you?
It seems like you were implying that there was something illegitimate about me acting based on my social models? If you don’t think this then there is no conflict here.
I agree that you should not update on someone saying “X” by proceeding to condition your beliefs on “X.” You at least have to take pragmatics and deception into account. I think this is a case of deception in addition to pragmatics, rather than pragmatics alone. You would not expect people to say “fine” when they are not fine under a model like the ones on this page, where agents try to inform each other while taking into account the inferences others make; you would get that behavior if there were pressure to deceive.
“Clarity” means something like “information is being processed in a way that is obvious to all parties.” For example, people are able to ask one another what state they are in, and either receive a true answer or a refusal to provide this information. When things aren’t fine, this quickly becomes obvious to everyone. And so on.
This is often desirable for a bunch of reasons. For example, if I can track what state my friends are in, I can think about what would improve their situations. If I know things aren’t fine more generally, then I can investigate the crisis and think about what to do about it.
This is not always desirable. For example, if someone were doing drugs and the police were questioning them about it, then it would probably be correct for them to optimize for unclarity by lying or misdirecting (assuming they can get away with it). But optimizing against clarity is pretty much the same thing as being deceptive, and pretending that this is compatible with acting unusually honestly is meta-dishonest. (Of course, someone can act unusually honestly most of the time while being deceptive other times)
In general, clarity is good for enabling positive-sum interactions, and its value is highly situational when the interaction has a substantial adversarial/zero-sum element.
See The Engineer and the Diplomat for more on this model.
You said “it does no such thing” re: optimizing against clarity and I’m contesting that.
For saying “fine” when greeted with “how are you?” or “how’s it going?” to be a case of deception in addition to pragmatics, it would need to be the case that the person saying “fine” expects to be understood as saying that their life is in fact going well.
I don’t think people generally expect that.
(Though there’s something kinda a bit like that that people maybe do expect. If my life is really terrible at the moment then maybe my desire for sympathy might outweigh my respect for standard conventions and make me answer “pretty bad, actually” instead of “fine”; so when I don’t do that, I am giving some indication that my life isn’t going toooo badly; so if it actually is but I still say “fine”, maybe I’m being deceptive. But that’s only the case in so far as, in fact, if my life were going badly enough then I would be likely not to do that.)
Having this convention isn’t (so it seems to me) “optimizing against clarity” in any strong sense. That is: sure, there are other possible conventions that would yield greater clarity, but it’s not so clear that they’re better that it makes sense to say that choosing this convention instead is “optimizing against clarity”. (For comparison: imagine that someone proposes a different convention: whenever two people meet, they exchange bank balances and recent medical histories. This would indeed bring greater clarity; I don’t think most of us would want that clarity; but it seems unfair to say that we’re “optimizing against clarity” if we choose not to have it.)
Almost no one expects marketers to actually tell the truth about their products, and yet it seems pretty clear that marketing is deceptive. I think this has to do with common knowledge: even though nearly everyone knows marketing is deceptive, this isn’t common knowledge to the point where an ad could contain the phrase “I am lying to you right now” without it being jarring.
The convention is optimized for preventing people from giving information about their state that would break the narrative that Things Are Fine. People’s mental processes during the conversation will actually be optimizing against breaking this narrative even in cases where it is false. See Ben’s comment here.
You may be right about marketing and common knowledge; if so, then I suggest that the standard “how are you? fine” script is common knowledge; everyone knows that a “fine” answer can be, and likely will be, given even if the person in question is not doing well at all.
I agree that when executing the how-are-you-fine script people are ipso facto discouraged from giving information about their state that contradicts Things Are Fine. That’s because when executing that script, no one is actually giving any information about their state at all. If you actually want to find out how someone’s life is going, that isn’t how you do it; you ask them some less stereotyped question and, if they trust you sufficiently, they will answer it.
Again, if the how-are-you-fine script were taken as a serious attempt to extract (on one side) and provide (on the other) information about how someone’s life is going, then for sure it would be deceptive. But that’s not how anyone generally uses it, and I don’t see a particular reason why it should be.
I was going to write a longer response but this thread covers what I wanted to say pretty well.
You have made this sort of assertion several times now; I’d like to see some elaboration on it. What sorts of social contexts do you have in mind, when you say such things? On what basis do you make this sort of claim?
Person A and B are acquaintances. A asks B “how are you?” B is having serious problems at work, will probably be fired, and face serious economic consequences. B says “fine.” Why did B say “fine” when B was in fact not fine?
Suppose B said “I’m going to lose my job and be really poor for the near future.” Prediction: this will be awkward. Why would this be awkward?
Hypothesis: it is awkward because it contradicts the idea that things are fine. While this contradiction exists in the conversation, A and B will feel tension. Tension can be resolved in a few ways. A could say “oh don’t worry, you can get another job,” contradicting the idea that there is a problem in an unhelpful way that nevertheless restores the narrative that things are fine. A could also say “wow that really sucks, let me know if you need help” agreeing that things aren’t fine and resolving the tension by offering assistance. But A might not want to actually offer assistance in some cases. A could also just say “wow that sucks;” this does not resolve the tension as much as in the previous case, but it does at least mean that A and B are currently agreeing that things aren’t fine, and A has sympathy with B, which ameliorates the tension.
Compare: rising action in a story, which produces tension that must be resolved somehow.
I see.
The account you present is rather abstract, and seems to be based on a sort of “narrative” view of social interactions. I am not sure I understand this view well enough to criticize it coherently; I also am not sure what motivates it. (It is also not obvious to me what could falsify the hypothesis in question, nor what it predicts, etc. Certainly I would appreciate a link or two to a more in-depth discussion of this sort of view.)
In any case, there are some quite obvious alternate hypotheses, some of which have been mentioned elsethread, viz.:
The “Copenhagen interpretation of ethics”
Guarding against a disadvantageous change in power relations
A simple desire for privacy
All of these alternate hypotheses (and similar ones) make use only of simple, straightforward interests and desires of individuals, and have no need to bring in abstract “narrative” concepts.
I see, thanks.
This is not how I would normally use the word “clarity” (which is why I said “it does no such thing” in response to your claim that the norm in question “optimizes against clarity”). That having been said, your usage is not terribly unreasonable, so I will not quibble with it. So, taking “clarity” to mean what you described…
… I consider this sort of “clarity” to not be clearly desirable, even totally ignoring the sorts of “adversarial/zero-sum” situations you allude to. (In fact, it seems to me that a naive, unreflective dedication to “clarity” of this sort is particularly harmful in many categories of potentially-positive-sum interactions!)
This is a topic which has been much-discussed in the rationalist meme-sphere (and beyond, of course!) over the last decade; I confess to being surprised that you appear to be unaware of what’s been said on the subject. (Or are you aware of it, but merely disagree with it all? But then it seems to me that you would, at least, not have been at all surprised by any of my comments…) I do not, at the moment, have the time to hunt for links to relevant writings, but I will try to make some time in the near future.
Given the clarification in this subthread, let me now go ahead and respond to this bit:
Indeed, you certainly would not; the problem, however, lies in the assumption that someone responding “Fine” to “How are you?” is trying to inform the asker, or that the asker expects to be informed when asking that question.
In any case, this is a point we’ve covered elsethread.
This is very bizarre logic, to be frank. The entire conception of such social interactions as coherent “rituals” that both the asker and the asked are willing “participants” in, qua ritual, is quite strange, and does not accord with anything I said, or any of my understanding of the world.
That question is the genesis of quite a long discussion. I hardly think this is the time and place for it.
That is certainly not out of the question.
I don’t know about “illegitimate”, but basing your social models on unverified just-so-stories is epistemically unwise.
What do you mean by “deception”, here? If a casual acquaintance greets me with “How are you?” and I respond with “Fine, you?”—in a case when, in fact, a monster truck has just run over my favorite elephant—do you consider this an instance of “deception”? If so, do you view “deception” as undesirable (in some general sense) or harmful (to the said casual acquaintance)?
That page seems to be some sort of highly technical discussion, involving code in a language I’ve never heard of. Would you care to summarize its core ideas in plain language, or link to such a summary elsewhere? Failing that, I have no comment in response to this.
(rest of your comment addressed in a separate response)
Re: “ritual,” it seems like “social script” might have closer to the right connotations here.
My main point here is that, if you are trying to build a reputation as being unusually honest, yet you lie because otherwise it would be slightly awkward and some people would talk to you less, then your reputation doesn’t actually count for anything. If someone won’t push against slight awkwardness to tell the truth about something only a little important, why would I expect them to push against a lot of awkwardness to tell the truth about something that is very important?
By definitions of “honesty” commonly used in American culture, being unusually honest usually requires doing things that are awkward and might cause people to talk to you less. For example, in the myth about George Washington chopping down a cherry tree, it is in fact awkward for him to admit that he chopped down a cherry tree, and he could face social consequences as a result. But he admits it anyway, because he is honest. (Ironically this didn’t actually happen, but this isn’t that important if we are trying to figure out what concepts of honesty are in common usage)
I would count saying “fine” when you are not fine to be a form of deception, one which is usually slightly harmful to both participants, but only slightly. For someone who is not attempting to be unusually honest as a matter of policy, this is not actually a big deal. It might be worth saying “fine” to minimize tension.
But the situation is very different for someone attempting to be unusually honest as a matter of policy. This type of person is trying to tell the truth almost all the time, even when it is hard and goes against their local incentives. There may be some times when they should lie, but it should have to be a really good reason, not “it would be slightly awkward if I didn’t lie.” If someone is going to lie whenever the cost-benefit analysis looks at least as favorable to lying as it does in the “saying you are fine when you are not fine” case, then they’re going to lie quite a lot about pretty important things, whenever telling the truth about these things would be comparably awkward.
Sure. Suppose I have seen a bunch of apples, which may be red or green. I say “some of the apples are red.” Is it correct for you to infer that not all the apples are red? Yes, probably; if I had seen that all the apples were red, I would have instead said “all of the apples are red.” Even if “some of the apples are red” is technically correct when all the apples are red, I would know that you would make less-correct inferences about the proportion of apples that are red if I said “some of the apples are red” instead of “all of the apples are red.” Basically, the idea is that the listener models the speaker as trying to inform the listener, and the speaker models the listener as making these inferences.
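(The linked page apparently implements this in a probabilistic programming language; the inference pattern just described can be sketched in plain Python. This is my own illustration, not the page’s code, and the world/utterance names and uniform priors are assumptions made for the example.)

```python
# A minimal "some vs. all" implicature sketch: a listener who models the
# speaker as informative ends up inferring "not all" from "some".

worlds = ["some_red", "all_red"]      # possible states of the apples
utterances = ["some", "all"]

def literal(u, w):
    # Literal semantics: "some" is true in both worlds;
    # "all" is true only if all the apples are red.
    return u == "some" or w == "all_red"

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

def L0(u):
    # Literal listener: uniform over worlds consistent with the utterance.
    return normalize({w: 1.0 if literal(u, w) else 0.0 for w in worlds})

def S1(w):
    # Speaker: prefers utterances under which the literal listener
    # assigns high probability to the true world.
    return normalize({u: L0(u).get(w, 0.0) for u in utterances})

def L1(u):
    # Pragmatic listener: infers the world from the speaker's choice.
    return normalize({w: S1(w)[u] for w in worlds})

print(L1("some"))  # {"some_red": 0.75, "all_red": 0.25}
```

Hearing “some,” the pragmatic listener shifts probability toward the not-all world, because a speaker in the all-red world would more likely have said “all”; no such inference falls out of the literal semantics alone, which is the sense in which the deviation at issue requires pressure to deceive rather than mere pragmatics.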
I assume you mean, infer that not all the apples are red?
In any case, thanks for the summary. It sounds like it’s simply the Gricean maxims / the concept of implicature, which is certainly something I’m familiar with.
Whoops, thanks for the correction (edited comment).
I don’t really know that this makes your comments about it any more reasonable-sounding, but in any case this sub-point seems like a tangent, so we can let it go, if you like.
I just don’t think that this identification of “honesty” with “parsing spoken sentences in the most naively-literal possible way and then responding as if the intended meaning of your interlocutor’s utterance coincided with this literal reading” is very sensible. If someone did this, I wouldn’t think “boy, that guy/gal sure is unusually honest!”. I’d think “there goes a person who has, sadly, acquired a most inaccurate understanding, not to mention a most unproductive view, of social interactions”.
Suppose you are asked a question, where all of the following are true:
Your interlocutor neither expects nor desires for you to take the question literally and answer it truthfully.
You know that you are not expected to, and you have no desire to, take the question literally and answer it truthfully.
Your interlocutor would be harmed by you taking the question literally and answering it truthfully.
You would be harmed by you taking the question literally and answering it truthfully.
Do you maintain that, in such a case, “honesty” nevertheless demands that you do take the question literally and answer it truthfully?
If so, then this “honesty” of yours seems to be a supremely undesirable trait to have, and for one’s friends and acquaintances to have. (I maintain the scare quotes, because I would certainly not assent to any definition of “honesty” which had the aforesaid property—and, importantly, I do not think that “honesty” of this type is more predictive of certain actually desirable and prosocial behaviors, of the type that most people would expect from a person who had the as-generally-understood virtue of honesty.)
I would be interested to hear why you think this. It seems incorrect to me.
Once again, you are relying on a very unrealistic characterization of what is taking place when one person says “How are you?” and another answers “Fine”. However, we can let that slide for now (in any case, I already addressed it, earlier in this comment), and instead deal with the substantive claim that someone who does not respond to “How are you?” with a report of their actual state, is “going to lie quite a lot about pretty important things …”.
I firmly dispute this claim. And given how strong of a claim it is, I should like to see it justified quite convincingly.
These are not necessarily mutually exclusive explanations. Sometimes the point of a social transaction is to maintain some particular social fiction.
I don’t make this identification, given that I think honesty is compatible with pragmatics and metaphor, both of which are attempts to communicate that go beyond this. I would identify honesty more with “trying to communicate in a way that causes the other person to have accurate beliefs, with a significant preference for saying literally true things by default.”
Depends on the situation. If it’s actually common knowledge that the things I’m saying are not intended to be true statements (e.g. I’m participating in a skit) then of course not. Otherwise it seems at least a little dishonest. Being dishonest is not always bad, but someone trying to be unusually honest should avoid being dishonest for frivolous reasons. (Obviously, not everyone should try to be unusually honest in all contexts)
If you’re pretty often in situations where lying is advantageous, then maybe lying a lot is the right move. But if you are doing this then it would be meta-dishonest to say that you are trying to be unusually honest.
I think routinely saying false things trains people, as a matter of habit, to stop distinguishing truth from falsity. I don’t have a strong case for this but it seems true according to my experience.
This is a pretty severe misquote. Read what I wrote.
Most of your comment seems to indicate that we’ve more or less reached the end of how much we can productively untangle our disagreement (at least, without full-length, top-level posts from one or both of us), but I would like to resolve this bit:
Well, first of all, to the extent that it’s a quote (which only part of it is), it’s not a misquote, per se, because you really did write those words, in that order. I assume what you meant is that it is a misrepresentation/mischaracterization of what you said and meant—which I am entirely willing to accept! (It would simply mean that I misunderstood what you were getting at; that is not hard at all to believe.)
So, could you explain in what way my attempted paraphrase/summary mischaracterized your point? I confess it does not seem to me to be a misrepresentation, except insofar as it brackets assumptions which, to me, seem both (a) flawed and unwarranted, and (b) not critical to the claim, per se (for all that they may be necessary to justify or support the claim).
Agreed that further engagement here on the disagreement is not that productive. Here’s what I said:
I am not saying that, if someone says they are fine when they are not fine, then necessarily they will lie about important things. They could be making an unprincipled exception. I am instead saying that, if they lied whenever the cost-benefit analysis looks at least as favorable to lying as in the “saying you are fine when you are not fine” case, then they’re likely going to end up lying about some pretty important things that are really awkward to talk about.
I think that Said is arguing that they’re making a *principled* exception. Vaniver’s comment makes a decent case for this.
Yes, this is correct. The exception is entirely principled (really, I’d say it’s not even an exception, in the sense that the situation is not within the category of those to which the rule applies in the first place).
I see. It seems those assumptions I mentioned are ones which you consider much more important to your point than I consider them to be, which, I suppose, is not terribly surprising. (I do still think they are unwarranted.)
I will have to consider turning what I’ve been trying to say here into a top-level post (which may be no more than a list of links and blurbs; as I said, there has been a good deal of discussion about this stuff already).