(note: this is more antagonistic than I feel—I agree with much of the direction of this, and appreciate the discussion. But I worry that you’re ignoring a motivated blind spot in order to avoid biting some bullets).
So, there’s something precious that dissolves when defined, and only seems to occur in low-stakes conversations with a small number of people. It’s related to trust, ability to be wrong (and to point out wrongness). It feels like the ability to have rational discourse, but that feeling is not subject to rational discourse itself.
Is it possible that it’s not truth-seeking (or more importantly, truth itself) you’re worried about, but an unstated friendly agreement to ignore some of the hard questions? In smaller, less important conversations, you let people get away with all sorts of simplifications, theoretical constructs, and superficial agreements, which results in a much more pleasant and confident feeling of epistemic harmony.
When it comes time to actually commit real resources, or take significant risks, however, you generally want more concrete and detailed agreement on what happens if you turn out to be incorrect in your stated, shared beliefs. Which indicates that you’re less confident than you appear to be. This feels bad, and it’s tempting for all participants to now accuse the others of bad faith. This happens very routinely with friends forming business partnerships, people getting married, etc.
Maybe it’s not a loss in truth-seeking ability; it’s a loss of the ILLUSION of truth-seeking ability. Humans vary widely in their levels of rationality, in their capacity to hold large amounts of data and make predictions, and in their willingness to follow/override their illegible beliefs in favor of justifiable explicit ones. It’s not the case that the rationalist community is no better than average: we’re quite a bit better than average (and conversations like this may well improve it further). But average is TRULY abysmal.
I’ve long called it the “libertarian dilemma”: agency and self-rule and rational decision-making are great for me, and for those I know well enough to respect, but the median human is pretty bad at it, and half of them are worse than that. When you’re talking about influencing other people’s spending decisions, it’s a really tough call whether to nudge/manipulate them into making better decisions than they would make if you neutrally presented information in the way you (think you) would prefer. Fundamentally, it may be a question of agency: do you respect people’s right to make bad decisions with their money/lives?
> Is it possible that it’s not truth-seeking (or more importantly, truth itself) you’re worried about, but an unstated friendly agreement to ignore some of the hard questions?
I think this is importantly not what’s going on here.
If anything, Ben’s position is something like: the above sentence represents what I’ve been pushing towards (whether accidentally or on purpose), as opposed to “actually being able to have honest, truth-seeking conversations about hard questions.”
And Ben’s whole point is that this is bad. (And the point of my original “precious thing” paragraph was to try to communicate that I understood Ben’s concern, but was coming at it from a different angle, and that I also care about having honest, truth-seeking conversations about hard things.)
[I’m not sure Ben would quite endorse this description though, and would be interested in him clarifying if it seemed off]
A major reason that private conversations are important, IMO, is that they enable people to talk through fuzzy things that are hard to articulate, but where you can ask probing questions that make sense to you-and-only-you in order to check whether you’re actually talking about the same hard-to-articulate thing. You can’t jump to making them explicit because you’re running off a collection of intuitions, with lots of experiences baked into your intuition. But in private conversation it’s easier (for me at least) to get a sense of whether you’re talking about the same pre-explicit thing.
(The problem with having the conversation in public is precisely that other people will be asking “wait, what precious thing, exactly?”, which derails the high-context conversation. There’s a sort of two-way street that I think needs building, where people-who-have-high-context-conversations make more effort to write them up, but everyone else kinda accepts that it might not always be achievable for them to follow along that easily.)
> The problem with having the conversation in public is precisely that other people will be asking “wait, what precious thing, exactly?”, which derails the high-context conversation.
I get that, but if the high-context, extensive private conversation doesn’t (or can’t) identify the precious thing, it seems somewhat likely that you’re both politely accepting that the other may be thinking about something else entirely, and/or that it may not actually be a thing.
I very much like your idea that you should have the conversation with the default expectation of publishing at a later time. If you haven’t been able to agree on what the thing is by then, I think the other people asking “wait, what precious thing, exactly?” are probably genuinely confused.
Note that I realize, and have not resolved, the tension between my worry that indescribable things aren’t things, and my belief that much (and perhaps most) of human decision-making is based on illegible-but-valid beliefs. I wonder if at least some of this conversation is pointing to a tendency to leak illegible beliefs into intellectual discussions in ways that could be called “bias” or “deception” if you think the measurable world is the entirety of truth, but which could also reasonably be framed as “correcting” or “debiasing” a limited, partial view toward the holistic/invisible reality. I’m not sure I can make that argument, but I would respect it and take it seriously if someone did.