As someone relatively new to LessWrong, I agree.
Conversations I’ve read that end with either party noticeably updating one way or the other have been relatively rare. The one point I’m not sure I agree with: is being able to predict a particular disagreement really a problem?
I suppose being able to predict the exact way in which your interlocutors will disagree is the problem? If you foresee someone disagreeing in a particular way, account for it in your argument, and they disagree anyway in exactly the way you tried to address, that’s generally just bad faith.
(though sometimes I do skim posts, by god)
Introducing “arguments” and “bad faith” can complicate and confuse things, and neither is necessary.
As a simple model, say we’re predicting whether the next ball drawn from an urn is black, and we’ve each seen our own set of draws. When I learn that your initial prediction is a higher probability than mine, I can infer that you’ve seen a higher ratio of black than I have, so in order to take that into account I should increase my own probability of black. But how much? Maybe I don’t know how many draws you’ve witnessed.
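Here’s a minimal sketch of that pooling logic in Python, assuming we both start from a uniform Beta(1, 1) prior over the urn’s black fraction. All the specific numbers (my 2-of-10 draws, your 75% report, the guessed sample size) are made up for illustration:

```python
def posterior_mean(black, total):
    """P(next draw is black) under a Beta(1, 1) prior, after `black` of `total` draws."""
    return (black + 1) / (total + 2)

# My own evidence: 2 black out of 10 draws.
my_estimate = posterior_mean(2, 10)  # 0.25

# You report 0.75. To use that, I have to guess how many draws sit behind it.
# If I assume you saw 10 draws, a 0.75 posterior mean implies about 8 black,
# since (8 + 1) / (10 + 2) = 0.75.
assumed_your_draws = 10
implied_black = 0.75 * (assumed_your_draws + 2) - 1  # invert the posterior mean

# Pool the evidence I think you have with my own.
pooled = posterior_mean(2 + implied_black, 10 + assumed_your_draws)
print(my_estimate, pooled)  # 0.25 -> 0.5
```

Set `assumed_your_draws` to 100 and the pooled estimate lands near 0.70 instead, much closer to your report; that sensitivity is exactly the “but how much?” problem.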
On the next iteration, maybe they say “Oh shoot, you said 30%? In that case I’m going to drop my guess from 95% to 35%”. They’re telling you that they expect you’ve seen many more draws than they have. Alternatively, they could say “I guess I’ll update from 95% to 94%”, telling you the opposite. If you knew in advance which side of your new estimate they were likely to end up on, then you could have taken that into account last time, updating further or less far until you could no longer predict what you’d learn next time.
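To put rough numbers on how much that reaction reveals, here’s a toy calculation using simple linear opinion pooling, which is my own simplification rather than anything stated above: treat their new estimate as a weighted average of their 95% and your reported 30%, and solve for the weight they implicitly gave you.

```python
# Toy model (an assumption for illustration, not from the thread):
# their_posterior = (1 - w) * their_prior + w * my_report, solved for w.

def implied_weight_on_me(their_prior, my_report, their_posterior):
    """How much weight their update implies they put on my reported estimate."""
    return (their_prior - their_posterior) / (their_prior - my_report)

# Dropping from 95% to 35% after hearing my 30%: they weighted my report heavily.
print(implied_weight_on_me(0.95, 0.30, 0.35))  # ~0.92

# Only dropping to 94%: they gave my report almost no weight.
print(implied_weight_on_me(0.95, 0.30, 0.94))  # ~0.015
```

A weight near 1 says “I think you’ve seen far more evidence than I have”; a weight near 0 says the opposite.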
If you *know* that they’re going to stick to 95% and not update based on your guess, then you know they don’t view your beliefs as saying much. If *that* doesn’t change your mind and make you think “Wow, they must really know the answer then!” and update to 95%, then you don’t view their beliefs as saying much either. When you can predict that beliefs won’t update towards convergence, you’re predicting a mutual lack of respect and a mutual lack of effort to figure out whose lack of respect is misplaced.
Are you saying that the interlocutors should instead try to resolve their lack of mutual respect?
Whether it’s worth working to resolve a disagreement over appropriate levels of respect depends on the context, but below a certain threshold object-level discourse becomes predictably futile. And high levels of respect are *really nice*: they allow for much more efficient communication, because people are actually taking each other seriously and engaging with each other’s perspective.
There are definitely important caveats, but I generally agree that mutual respect, and the ability to sort out disagreements about the appropriate level of respect, are worth deliberately cultivating. Certainly, if I’m in a disagreement that I’d like to actually resolve and I’m not being taken as seriously as I think I ought to be, I’m going to seek to understand why, and see if I can’t pass their “ideological test” on the matter.