My rough guess as to where you’re going with this is something like “scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way).”
Basically, yes.
The problem, really, is—what? Not misunderstanding per se; that is solvable. The problem is the double illusion of transparency: I think I’ve understood you (that is, I think that my interpretation of your words, call it X, matches your intent, which I assume is also X), and you think I’ve understood you (that is, you think that my interpretation of your words is Y, which matches what you know to be your intent, i.e. also Y); but actually your intent was Y and my interpretation is X, and neither of us is aware of this composite fact.
How to avoid this? Well, actually this might be one of two questions: first, how to guarantee that you avoid it? second, how to mostly guarantee that you avoid it? (It is easy to see that relaxing the requirement potentially yields gains in efficiency, which is why we are interested in the latter question also.)
Scenario 1—essentially, verifying your interpretation explicitly, every time any new ideas are exchanged—is one way of guaranteeing (to within some epsilon) the avoidance of double illusion of transparency. Unfortunately, it’s extremely inefficient. It gets tedious very quickly; frustration ensues. This approach cannot be maintained. It is not a solution, inasmuch as part of what makes a solution workable is that it must be actually practical to apply it.
By the way—just why is scenario 1 so very, very inefficient? Is it only because of the overhead of verification messages (à la the SYN-ACK handshake of TCP)? That is a big part of the problem, but not the only problem. Consider this extended version:
Scenario 1a:
Alice: [makes some statement]
Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?
Alice: Wait, what? Why would that be obviously wrong?
Bob: Well, because [reasons], of course.
So now we’ve devolved into scenario 2, but having wasted two messages. And gained… what? Nothing.
Scenario 2—essentially, never explicitly verifying anything, responding to your interpretation of your interlocutor’s comments, and trusting that any misinterpretation will be inferred from your response and corrected—is one way of mostly guaranteeing the avoidance of double illusion of transparency. It is not foolproof, of course, but it is very efficient.
Scenarios 1 and 2 aren’t our only options. There is also…
Scenario 3:
Alice: [makes some statement]
Bob: Assuming you meant [straightforward reading], that is obviously wrong, because [reasons].
Note that we are now guaranteed (and not just mostly guaranteed) to avoid the double illusion of transparency. If Bob misinterpreted Alice, she can correct him. If Bob interpreted correctly, Alice can immediately respond to Bob’s criticism.
There is still overhead; Bob has to spend effort on explaining his interpretation of Alice. But it is considerably less overhead than scenario 1, and it is the minimum amount of overhead that still guarantees avoidance of the double illusion of transparency.
Personally, I favor the scenario 3 approach in cases of only moderate confidence that I’ve correctly understood my interlocutor, and the scenario 2 approach in cases of high confidence that I’ve correctly understood. (In cases of unusually low confidence, one simply asks for clarification, without necessarily putting forth a hypothesized interpretation.)
Scenarios 2 and 3 are undermined, however—their effectiveness and efficiency dramatically lowered—if people take offense at being misinterpreted, and demand that their critics achieve certainty of having correctly understood them, before writing any criticism. If people take any mis-aimed criticism as a personal attack, or lack of “interpretive labor” (in the form of the verification step as a prerequisite to criticism) as a sign of disrespect, then, obviously, scenarios 2 and 3 cannot work.
This constitutes a massive sacrifice of efficiency of communication, and thereby (because the burden of that inefficiency is borne by critics) disincentivizes lively debate, correction of flaws, and the exchange of ideas. What is gained, for that hefty price, is nothing.
After quite a while thinking about it I’m still not sure I have an adequate response to this comment; I do take your points, they’re quite good. I’ll do my best to respond to this in the post I’m writing on this topic. Perhaps when I post it we can continue the discussion there if you feel it doesn’t adequately address your points.
Sounds good, and I am looking forward to reading your post!