Lack of Social Grace Is an Epistemic Virtue
Someone once told me that they thought I acted like refusing to employ the bare minimum of social grace was a virtue, and that this was bad. (I’m paraphrasing; they actually used a different word that starts with b.)
I definitely don’t want to say that lack of social grace is unambiguously a virtue. Humans are social animals, so the set of human virtues is almost certainly going to involve doing social things gracefully!
Nevertheless, I will bite the bullet on a weaker claim. Politeness is, to a large extent, about concealing or obfuscating information that someone would prefer not to be revealed—that’s why we recognize the difference between one’s honest opinion, and what one says when one is “just being polite.” Idealized honest Bayesian reasoners would not have social graces—and therefore, humans trying to imitate idealized honest Bayesian reasoners will tend to bump up against (or smash right through) the bare minimum of social grace. In this sense, we might say that the lack of social grace is an “epistemic” virtue—even if it’s probably not great for normal humans trying to live normal human lives.
Let me illustrate what I mean with one fictional and one real-life example.
The beginning of the film The Invention of Lying (before the eponymous invention of lying) depicts an alternate world in which everyone is radically honest—not just in the narrow sense of not lying, but more broadly saying exactly what’s on their mind, without thought of concealment.
In one scene, our everyman protagonist is on a date at a restaurant with an attractive woman.
“I’m very embarrassed I work here,” says the waiter. “And you’re very pretty,” he tells the woman. “That only makes this worse.”
“Your sister?” the waiter then asks our protagonist.
“No,” says our everyman.
“Daughter?”
“No.”
“She’s way out of your league.”
“… thank you.”
The woman’s cell phone rings. She explains that it’s her mother, probably calling to check on the date.
“Hello?” she answers the phone—still at the table, with our protagonist hearing every word. “Yes, I’m with him right now. … No, not very attractive. … No, doesn’t make much money. It’s alright, though, seems nice, kind of funny. … A bit fat. … Has a funny little—snub nose, kind of like a frog in the—facial … No, I won’t be sleeping with him tonight. … No, probably not even a kiss. … Okay, you too, ’bye.”
The scene is funny because of how it violates the expected social conventions of our own world. In our world, politeness demands that you not say negative-valence things about someone in front of them, because people don’t like hearing negative-valence things about themselves. Someone in our world who behaved like the woman in this scene—calling someone ugly and poor and fat right in front of them—could only be acting out of deliberate cruelty.
But the people in the movie aren’t like us. Having taken the call, why should she speak any differently just because the man she was talking about could hear? Why would he object? To a decision-theoretic agent, the value of information is always nonnegative. Given that his date thought he was unattractive, how could it be worse for him to know rather than not-know?
For humans from our world, these questions do have answers—complicated answers having to do with things like map–territory confusions that make receiving bad news seem like a bad event (rather than the good event of learning information about how things were already bad, whether or not you knew it), and how it’s advantageous for others to have positive-valence false beliefs about oneself.
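The decision-theoretic claim—that for an expected-utility maximizer the value of information is never negative—can be made concrete with a toy calculation. This is a minimal sketch with made-up states, actions, and utilities (loosely themed on the restaurant scene), not anything from the film:

```python
# Toy value-of-information calculation for an expected-utility maximizer.
# States: the date is interested (probability p) or not.
# Actions: "pursue" a second date, or "move_on". All numbers are made up.

p_interested = 0.3
utility = {
    ("pursue", "interested"): 10,
    ("pursue", "uninterested"): -5,
    ("move_on", "interested"): 0,
    ("move_on", "uninterested"): 0,
}

def expected_utility(action, p):
    return p * utility[(action, "interested")] + (1 - p) * utility[(action, "uninterested")]

# Without information: commit to one action in advance.
eu_uninformed = max(expected_utility(a, p_interested) for a in ("pursue", "move_on"))

# With information (overhearing the phone call): learn the state,
# then pick the best action in each state.
eu_informed = (
    p_interested * max(utility[(a, "interested")] for a in ("pursue", "move_on"))
    + (1 - p_interested) * max(utility[(a, "uninterested")] for a in ("pursue", "move_on"))
)

value_of_information = eu_informed - eu_uninformed
print(eu_uninformed, eu_informed, value_of_information)  # 0.0 3.0 3.0
```

Because the informed agent can always ignore what it learned and do whatever the uninformed agent would have done, the informed expected utility can never be lower—which is why, for such an agent, hearing the bad news can't make things worse.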
The world of The Invention of Lying is simpler, clearer, easier to navigate than our world. There, you don’t have to worry whether people don’t like you and are planning to harm your interests. They’ll tell you.
In “Los Alamos From Below”, physicist Richard Feynman’s account of his work on the Manhattan Project to build the first atomic bomb, Feynman recalls being sought out by a much more senior physicist specifically for his lack of social graces:
I also met Niels Bohr. His name was Nicholas Baker in those days, and he came to Los Alamos with Jim Baker, his son, whose name is really Aage Bohr. They came from Denmark, and they were very famous physicists, as you know. Even to the big shot guys, Bohr was a great god.
We were at a meeting once, the first time he came, and everybody wanted to see the great Bohr. So there were a lot of people there, and we were discussing the problems of the bomb. I was back in a corner somewhere. He came and went, and all I could see of him was from between people’s heads.
In the morning of the day he’s due to come next time, I get a telephone call.
“Hello—Feynman?”
“Yes.”
“This is Jim Baker.” It’s his son. “My father and I would like to speak to you.”
“Me? I’m Feynman, I’m just a—”
“That’s right. Is eight o’clock OK?”
So, at eight o’clock in the morning, before anybody’s awake, I go down to the place. We go into an office in the technical area and he says, “We have been thinking how we could make the bomb more efficient and we think of the following idea.”
I say, “No, it’s not going to work. It’s not efficient … Blah, blah, blah.”
So he says, “How about so and so?”
I said, “That sounds a little bit better, but it’s got this damn fool idea in it.”
This went on for about two hours, going back and forth over lots of ideas, back and forth, arguing. [...]
“Well,” [Niels Bohr] said finally, lighting his pipe, “I guess we can call in the big shots now.” So then they called all the other guys and had a discussion with them.
Then the son told me what happened. The last time he was there, Bohr said to his son, “Remember the name of that little fellow in the back over there? He’s the only guy who’s not afraid of me, and will say when I’ve got a crazy idea. So the next time when we want to discuss ideas, we’re not going to be able to do it with these guys who say everything is yes, yes, Dr. Bohr. Get that guy and we’ll talk with him first.”
I was always dumb in that way. I never knew who I was talking to. I was always worried about the physics. If the idea looked lousy, I said it looked lousy. If it looked good, I said it looked good. Simple proposition.
Someone who felt uncomfortable with Feynman’s bluntness and wanted to believe that there’s no conflict between rationality and social graces might argue that Feynman’s “simple proposition” is actually wrong insofar as it fails to appreciate the map–territory distinction: in saying, “No, it’s not going to work”, was not Feynman implicitly asserting that just because he couldn’t see a way to make it work, it simply couldn’t? And in general, shouldn’t you know who you’re talking to? Wasn’t Bohr, the Nobel prize winner, more likely to be right than Feynman, the fresh young Ph.D. (at the time)?
While not entirely without merit (it’s true that the map is not the territory; it’s true that authority is not without evidential weight), attending overmuch to such nuances distracts from worrying about the physics, which is what Bohr wanted out of Feynman—and, incidentally, what I want out of my readers. I would not expect readers to confirm interpretations with me before publishing a critique. If the post looks lousy, say it looks lousy. If it looks good, say it looks good. Simple proposition.
By all means, strategically violate social customs. But if you irritate people in the process, you may be advancing your own epistemics by making them talk to you while hurting theirs: people who are irritated with you are less receptive to whatever belief you’re trying to pitch. Lack of social grace is very much not an epistemic virtue.
This post captures a fairly common belief in the rationalist community. It’s important to understand why it’s wrong.
Emotions play a strong role in human reasoning. I finally wrote up at least a little sketch of why that happens. The technical term is motivated reasoning.
Motivated reasoning/confirmation bias as the most important cognitive bias
I kinda like this post, and I think it’s pointing at something worth keeping in mind. But I don’t think the thesis is very clear or very well argued, and I currently have it at −1 in the 2023 review.
Some concrete things.
There are lots of forms of social grace, and it’s not clear which ones are included. Surely “getting on the train without waiting for others to disembark first” isn’t an epistemic virtue. I’d normally think of “distinguishing between map and territory” as an epistemic virtue but not particularly a social grace, but the last two paragraphs make me think that’s intended to be covered. Is “when I grew up, weaboo wasn’t particularly offensive, and I know it’s now considered a slur, but eh, I don’t feel like trying to change my vocabulary” an epistemic virtue?
Perhaps the claim is only meant to be that lack of “concealing or obfuscating information that someone would prefer not to be revealed” is an epistemic virtue? Then the map/territory stuff seems out of place, but the core claim seems much more defensible.
“Idealized honest Bayesian reasoners would not have social graces—and therefore, humans trying to imitate idealized honest Bayesian reasoners will tend to bump up against (or smash right through) the bare minimum of social grace.” Let’s limit this to the social graces that are epistemically harmful. Still, I don’t see how this follows.
Idealized honest Bayesian reasoners wouldn’t need to stop and pause to think, but a human trying to imitate one will need to do that. A human getting closer in some respects to an idealized honest Bayesian reasoner might need to spend more time thinking.
And, where does “bare minimum” come from? Why will these humans do approximately-none-at-all of the thing, rather than merely less-than-maximum of it?
I do think there’s something awkward about humans-imitating-X, in pursuit of goal Y that X is very good at, doing something that X doesn’t do because it would be harmful to Y. But it’s much weaker than claimed.
There’s a claim that “distinguishing between the map and the territory” is distracting, but as I note here it’s not backed up.
I note that near the end we have: “If the post looks lousy, say it looks lousy. If it looks good, say it looks good.” But of course “looks” is in the map. The Feynman in the anecdote seems to have been following a different algorithm: “if the post looks [in Feynman’s map, which it’s unclear if he realizes is different from the territory] lousy, say it’s lousy. If it looks [...] good, say it’s good.”
Vaniver and Raemon point out something along the lines of “social grace helps institutions persevere”. Zack says he’s focusing on individual practice rather than institution-building. But both his anecdotes involve conversations. It seems that Feynman’s lack of social grace was good for Bohr’s epistemics—but that’s no help for Feynman’s individual practice. Bohr appreciating Feynman’s lack of social grace seems to have been good for Feynman’s ability-to-get-close-to-Bohr, which itself seems good for Feynman’s epistemics, but that’s quite different.
Oh, elsewhere Zack says “The thesis of the post is that people who are trying to maximize the accuracy of shared maps are going to end up being socially ungraceful sometimes”, which doesn’t sound like it’s focusing on individual practice?
Hypothesis: when Zack wrote this post, it wasn’t very clear to himself what he was trying to focus on.
Man, this review kinda feels like… I can imagine myself looking back at it two years later and being like “oh geez that wasn’t a serious attempt to actually engage with the post, it was just point scoring”. I don’t think that’s what’s happening—the feeling is probably just pattern matching on the structure or something? But I also think that if it was point scoring, it wouldn’t necessarily feel like it to me now.
It also feels like I could improve it if I spent a few more hours on it and re-read the comments in more detail, and I do expect that’s true.
In any case, I’m pretty sure both [the LW review process] and [Zack specifically] prefer me to publish it.