Whereas I would say that if I disagree with someone, then I’d like to understand their reasons for believing as they do, and I’d like them to understand mine.
If the result of both of us updating our confidence levels about various assertions, based on the new evidence the interaction provides, isn’t readily apparent (say, I go from believing something is ~10% likely to believing it’s ~11% likely, but continue to classify it as “unlikely”), that’s perfectly fine.
Of course, if one or both of us update over a threshold—that is, if one or both of us is convinced of something—that’s perfectly fine as well.
But it’s far from being the only possible valuable end result of an interaction.