I kinda like this post, and I think it’s pointing at something worth keeping in mind. But I don’t think the thesis is very clear or very well argued, and I currently have it at −1 in the 2023 review.
Some concrete things.
There are lots of forms of social grace, and it’s not clear which ones are included. Surely “getting on the train without waiting for others to disembark first” isn’t an epistemic virtue. I’d normally think of “distinguishing between map and territory” as an epistemic virtue but not particularly a social grace, but the last two paragraphs make me think that’s intended to be covered. Is “when I grew up, weaboo wasn’t particularly offensive, and I know it’s now considered a slur, but eh, I don’t feel like trying to change my vocabulary” an epistemic virtue?
Perhaps the claim is only meant to be that lack of “concealing or obfuscating information that someone would prefer not to be revealed” is an epistemic virtue? Then the map/territory stuff seems out of place, but the core claim seems much more defensible.
“Idealized honest Bayesian reasoners would not have social graces—and therefore, humans trying to imitate idealized honest Bayesian reasoners will tend to bump up against (or smash right through) the bare minimum of social grace.” Let’s limit this to the social graces that are epistemically harmful. Still, I don’t see how this follows.
Idealized honest Bayesian reasoners wouldn’t need to pause to think, but a human trying to imitate one will need to do that. A human getting closer in some respects to an idealized honest Bayesian reasoner might need to spend more time thinking.
And, where does “bare minimum” come from? Why will these humans do approximately-none-at-all of the thing, rather than merely less-than-maximum of it?
I do think there’s something awkward about humans-imitating-X, in pursuit of goal Y that X is very good at, doing something that X doesn’t do because it would be harmful to Y. But it’s much weaker than claimed.
There’s a claim that “distinguishing between the map and the territory” is distracting, but, as I note here, it’s not backed up.
I note that near the end we have: “If the post looks lousy, say it looks lousy. If it looks good, say it looks good.” But of course “looks” is in the map. The Feynman in the anecdote seems to have been following a different algorithm: “if the post looks [in Feynman’s map, which it’s unclear whether he realizes is different from the territory] lousy, say it’s lousy. If it looks [...] good, say it’s good.”
Vaniver and Raemon point out something along the lines of “social grace helps institutions persevere”. Zack says he’s focusing on individual practice rather than institution-building. But both his anecdotes involve conversations. It seems that Feynman’s lack of social grace was good for Bohr’s epistemics… but that’s no help for Feynman’s individual practice. Bohr appreciating Feynman’s lack of social grace seems to have been good for Feynman’s ability-to-get-close-to-Bohr, which itself seems good for Feynman’s epistemics, but that’s quite different.
Oh, elsewhere Zack says “The thesis of the post is that people who are trying to maximize the accuracy of shared maps are going to end up being socially ungraceful sometimes”, which doesn’t sound like it’s focusing on individual practice?
Hypothesis: when Zack wrote this post, it wasn’t very clear to him what he was trying to focus on.
Man, this review kinda feels like… I can imagine myself looking back at it two years later and being like “oh geez, that wasn’t a serious attempt to actually engage with the post, it was just point scoring”. I don’t think that’s what’s happening, and that the feeling is just pattern matching on the structure or something? But I also think that if it was, it wouldn’t necessarily feel like it to me now?
It also feels like I could improve it if I spent a few more hours on it and re-read the comments in more detail, and I do expect that’s true.
In any case, I’m pretty sure both [the LW review process] and [Zack specifically] would prefer me to publish it.