Ah. I think this didn’t occur to me because I have a different set of habits for “mostly don’t end up in conversations with flat earthers in the first place.” This advice was generated in the process of interacting with coworkers, roommates, and friends who are pre-filtered for being people I respect.
There certainly are people I sometimes bump into who aren’t filtered that way, and with whom I sometimes have important disagreements. In those cases, the correct approach depends a bit on the situation.
I think I’d stick by this advice for discussions where the outcome actually matters (i.e. where you’re not just talking with some internet rando for fun, but having a conversation with actual stakes, like when you’re building a product).
That said, I think I mostly agree with the particular scenarios you outline here, especially this bit:
> The important point is conspicuously removing any option you have for weaseling out of noticing when you’re wrong, so that even when you are confident that it’s the other guy in the wrong, should your beliefs make false predictions, it will come up and be absolutely unmissable.
Though this bit here...
> The more general approach is to refuse to engage in false humility/false respect and make yourself choose between being genuinely provocative and inviting (potentially accurate) accusations of arrogance.
...feels like it’s approaching a somewhat different problem than the one I was thinking of when I wrote this post. (To be fair, I did write the post to be pretty general.)
Arrogance / modesty wasn’t what I was worried about here. The axis that was most salient to me was more like guardedness/defensiveness. If they seem defensive, or are digging in their heels, my first impulse is usually to push harder to get them to admit their wrongness. But that usually makes things worse, not better.
My experience is that people will mirror whatever cognitive algorithms I’m visibly running – if I’m listening, they’re more likely to listen; if I’m confidently asserting a view, they tend to confidently assert theirs. Whether I’m being too modest / arrogant doesn’t really matter much for this problem.
I used “flat earthers” as an exaggerated example to highlight the dynamics, the way a caricature might highlight the shape of a chin, but the dynamics remain, and they can be important even (and especially) in relationships you’d like to be close, simply because there’s more reason to get things closer to “right”.
The reason I brought up “arrogance”/“humility” is that the failure modes you mention – “not listening” and “having obvious bias without reflecting on it and getting rid of it” – are failures of arrogance. A bit more humility makes you more likely to listen and to question whether your reasoning is sound. As you mention, though, there is another dimension to worry about, an axis you might label “emotional safety” or “security” (i.e. the thing that drives guarded/defensive behavior when it isn’t present in sufficient amounts).
When you get defensive behavior (perhaps in the form of “not listening” or whatever), cooperative and productive conversation requires that you back up and get the “emotional safety” requirements fulfilled before continuing. Your proposed response assumes that the “safety” alarm is caused by an overreach on what I’d call the “respect” dimension. If you simply back down and consider that you might be the one in the wrong, this will often satisfy the “safety” requirement, because expecting more relative respect can be threatening. It is also epistemically beneficial for you if and only if it was a genuine overreach.
My point isn’t “who cares about emotional safety, let them filter themselves out if they can’t handle the truth [as I see it]”, but rather that these are two separate dimensions, and while they are coupled they really do need to be regulated independently for best results. Any time you try to control two dimensions with one lever, you end up with a 1D curve that you can’t regulate at all, and that is therefore free to wander without correction.
While people do tend to mirror your cognitive algorithm so long as it is visible to them, it’s not always immediately visible, so you can get into situations where you *have been* very careful to make sure you’re not the one making a mistake, and yet, because that care hasn’t been perceived, you still get “not listening” and the like anyway. In these kinds of situations it’s important to back up and make your algorithm visible, but that doesn’t necessarily mean questioning yourself again. Often this means listening to them explain their view, which ends up looking almost the same as backing down, but I think the distinctions are important because of the other possibilities they help to highlight.
The shared cognitive algorithm I’d rather end up in is one where I put my objections aside and listen when people have something they feel confident in, and where, when I have something I’m confident in, they’ll do the same. Things run a lot more smoothly and efficiently when mutual confidence is allowed, rather than treated as something that has to be avoided at all costs, so it’s nice to have a shared algorithm that can gracefully handle these cases.
Thanks, this was a neat point that gives me a conceptual handle for thinking about the overall problem.