But this seems to contradict the element of Non-Deception. If you’re not actually on the same side as the people who disagree with you, why would you (as a very strong but defeasible default) role-play otherwise?
This is a good question!! Note that in the original footnote in my post, “on the same side” is a hyperlink going to a comment by Val:
“Some version of civility and/or friendliness and/or a spirit of camaraderie and goodwill seems like a useful ingredient in many discussions. I’m not sure how best to achieve this in ways that are emotionally honest (‘pretending to be cheerful and warm when you don’t feel that way’ sounds like the wrong move to me), or how to achieve this without steering away from candor, openness, ‘realness’, etc.”
I think the core thing here is same-sidedness.
That has nothing to do directly with being friendly/civil/etc., although it’ll probably naturally result in friendliness/etc.
(Like you seem to, I think aiming for cheerfulness/warmth/etc. is rather a bad idea.)
If you & I are arguing but there’s a common-knowledge undercurrent of same-sidedness, then even impassioned and cutting remarks are pretty easy to take in stride. “No, you’re being stupid here, this is what we’ve got to attend to” doesn’t get taken as an actual personal attack because the underlying feeling is of cooperation. Not totally unlike when affectionate friends say things like “You’re such a jerk.”
This is totally different from creating comfort. I think lots of folk get this one confused. Your comfort is none of my business, and vice versa. If I can keep that straight while coming from a same-sided POV, and if you do something similar, then it’s easy to argue and listen both in good faith.
I think this is one piece of the puzzle. I think another piece is some version of “being on the same side in this sense doesn’t entail agreeing about the relevant facts; the goal isn’t to trick people into thinking your disagreements are small, it’s to make typical disagreements feel less like battles between warring armies”.
I don’t think this grounds out in simple mathematics that transcends brain architecture, but I wouldn’t be surprised if it grounds out in pretty simple and general facts about how human brains happen to work. (I do think the principle being proposed here hasn’t been stated super clearly, and hasn’t been argued for super clearly either, and until that changes it should be contested and argued about rather than taken fully for granted.)
Note that in the original footnote in my post, “on the same side” is a hyperlink going to a comment by Val
Thanks for pointing this out. (I read Val’s comment while writing my post, but unfortunately neglected to add the hyperlink when pasting the text of the footnote into my draft.) I have now edited the link into my post.
the goal isn’t to trick people into thinking your disagreements are small, it’s to make typical disagreements feel less like battles between warring armies
I think the reason disagreements often feel like battles between warring armies is that a lot of apparent “disagreements” are usefully modeled as disguised conflicts. That is, my theory of why predictable disagreements are so ubiquitous in human life (despite the fact that Bayesian reasoners can’t foresee to disagree) is mostly conflict-theoretic rather than mistake-theoretic.
A simple example: I stole a loaf of bread. A policeman thinks I stole the bread. I claim that I didn’t steal the bread. Superficially, this looks like a “disagreement” to an outside observer noticing the two of us reporting different beliefs, but what’s actually going on is that I’m lying. Importantly, if I care more about not going to jail than I do about being honest, lying is rational. Agents have an incentive to build maps that reflect the territory because those are the maps that are most useful for computing effective plans … but they also sometimes have an incentive to sabotage the maps of other agents with different utility functions.
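To make the incentive structure of this example concrete, here is a minimal sketch of the expected-utility comparison the thief faces. The probabilities and utility values are made-up assumptions chosen purely to illustrate the shape of the argument, not figures from the original example:

```python
# Illustrative expected-utility comparison for the bread-thief example.
# All numbers below are hypothetical assumptions, chosen only to show the structure.

P_CAUGHT_IF_LIE = 0.3   # assumed chance the lie is detected and the thief is jailed anyway
U_JAIL = -100.0         # assumed (large) disutility of going to jail
U_FREE = 0.0            # baseline utility of staying free
U_HONESTY = 5.0         # assumed value the thief places on having reported honestly

# Option 1: report the truth ("I stole the bread") -- certain jail, plus the value of honesty.
eu_confess = U_JAIL + U_HONESTY

# Option 2: lie ("I didn't steal the bread") -- jail only if the lie is detected.
eu_lie = P_CAUGHT_IF_LIE * U_JAIL + (1 - P_CAUGHT_IF_LIE) * U_FREE

print(f"EU(confess) = {eu_confess}")  # -95.0
print(f"EU(lie)     = {eu_lie}")      # -30.0
```

With these (assumed) weights, lying maximizes expected utility: the thief’s incentive is to corrupt the policeman’s map rather than share an accurate one. Flip the weights (make honesty valuable enough, or detection near-certain) and the honest report wins; the point is only that nothing about accurate map-building forces the reports of agents with conflicting goals to agree.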
Most interesting real-world disagreements aren’t as simple as the “one party is lying” case. But I think the moral should generalize: predictable disagreements are mostly due to at least some parts of some parties’ brains trying to optimize for conflicting goals, rather than to anyone just being “innocently” mistaken.
I’m incredibly worried that approaches to “cooperative” or “collaborative truth-seeking” that try to cultivate the spirit that everyone is on the same side and we all just want to get to the truth will, in practice, quickly collapse into “I’ll accept your self-aggrandizing lies, if you accept my self-aggrandizing lies”—not because anyone thinks of themselves as telling self-aggrandizing lies, but because that’s what the elephant in the brain does by default. I’m more optimistic about approaches that are open to the possibility that conflicts exist, in the hope that exposing hidden conflicts (rather than pretending they’re “disagreements”) makes it easier to find Pareto improvements.
I’m incredibly worried that approaches to “cooperative” or “collaborative truth-seeking” that try to cultivate the spirit that everyone is on the same side and we all just want to get to the truth will, in practice, quickly collapse into “I’ll accept your self-aggrandizing lies, if you accept my self-aggrandizing lies”—not because anyone thinks of themselves as telling self-aggrandizing lies, but because that’s what the elephant in the brain does by default.
Very strongly seconding this. (I have noticed this pattern on Less Wrong in the past, in fact, and more than once. It is no idle worry, but a very real thing that already happens.)