I’ve been mulling over where I went wrong here, and I think I’ve got it.
that it’s below 100% is really the key point, because it means that you’re going to have false positives—cases where you think that your interlocutor’s intent was clear and that you understood correctly, but where in fact you did not.
I think this is where I misinterpreted you. I thought you were trying to claim that unless there’s some threshold or clear rule for deciding when to ask for clarification, it’s not worth implementing “ask for clarification if you’re unsure” as a conversational norm at all, which is why I said it was an isolated demand for rigor. But if all you were trying to say was what you said in the quoted bit, that’s not an isolated demand for rigor. I totally agree that there will be false positives, in the sense that misunderstandings can persist for a while without anyone noticing or thinking to ask for clarification, and without this being anyone’s fault. However, I also think that if there is a misunderstanding, it will become apparent at some point if the conversation goes on long enough, and whenever that happens, it’s worth stopping to have one or both parties do something in the vicinity of trying to pass the other’s ITT, to see where the confusion is.
I think another part of the problem is that I was also trying to argue that in this case, of your (mis?)understanding of Vaniver, it should have been apparent that you needed to ask for clarification, but I’m much less confident of this now. Arguing that misunderstandings will reveal themselves if a discussion goes on long enough isn’t enough to establish that in this case you should immediately have recognized that you had misunderstood (if in fact you have misunderstood, which may not be the case if you still object to Vaniver’s point as I reframed it). My model allows that misunderstandings can persist for quite a while unnoticed, so it doesn’t really entail that you ought to have asked for clarification here, in this very instance.
Anyway, as Ben suggested I’m working on a post laying out my views on interpretive labor, ITTs, etc. in more detail, so I’ll say more there. (Relatedly, is there a way to create a top-level post from greaterwrong? I’ve been looking for a while and haven’t been able to find it if there is.)
consider these two scenarios
I agree the model I’ve been laying out here would suggest that the first scenario is better, but I find myself unsure which I think is better all things considered. I certainly don’t think scenario 1 is obviously better, despite the fact that this is probably at least a little inconsistent with my previous comments. My rough guess as to where you’re going with this is something like “scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way).”
If this is where you are going, I have a couple disagreements with it, but I’ll wait until you’ve explained the rest of your point to state them in case I’ve guessed wrong (which I’d guess is fairly likely in this case).
My rough guess as to where you’re going with this is something like “scenario 1 is a waste of words since scenario 2 achieves the same results more efficiently (namely, the misunderstanding is cleared up either way).”
Basically, yes.
The problem, really, is—what? Not misunderstanding per se; that is solvable. The problem is the double illusion of transparency; when I think I’ve understood you (that is, I think that my interpretation of your words, call it X, matches your intent, which I assume is also X), and you think I’ve understood you (that is, you think that my interpretation of your words is Y, which matches what you know to be your intent, i.e. also Y); but actually your intent was Y and my interpretation is X, and neither of us is aware of this composite fact.
How to avoid this? Well, actually this might be one of two questions: first, how to guarantee that you avoid it? second, how to mostly guarantee that you avoid it? (It is easy to see that relaxing the requirement potentially yields gains in efficiency, which is why we are interested in the latter question also.)
Scenario 1—essentially, verifying your interpretation explicitly, every time any new ideas are exchanged—is one way of guaranteeing (to within some epsilon) the avoidance of double illusion of transparency. Unfortunately, it’s extremely inefficient. It gets tedious very quickly; frustration ensues. This approach cannot be maintained. It is not a solution, inasmuch as part of what makes a solution workable is that it must be actually practical to apply it.
By the way—just why is scenario 1 so very, very inefficient? Is it only because of the overhead of verification messages (à la the SYN-ACK of TCP)? That is a big part of the problem, but not the only problem. Consider this extended version:
Scenario 1a:
Alice: [makes some statement]
Bob: What do you mean by that? Surely not [straightforward reading], right? Because that would be obviously wrong. So what do you mean instead?
Alice: Wait, what? Why would that be obviously wrong?
Bob: Well, because [reasons], of course.
So now we’ve devolved into scenario 2, but having wasted two messages. And gained… what? Nothing.
Scenario 2—essentially, never explicitly verifying anything, responding to your interpretation of your interlocutor’s comments, and trusting that any misinterpretation will be inferred from your response and corrected—is one way of mostly guaranteeing the avoidance of double illusion of transparency. It is not foolproof, of course, but it is very efficient.
Scenarios 1 and 2 aren’t our only options. There is also…
Scenario 3:
Alice: [makes some statement]
Bob: Assuming you meant [straightforward reading], that is obviously wrong, because [reasons].
Note that we are now guaranteed (and not just mostly guaranteed) to avoid the double illusion of transparency. If Bob misinterpreted Alice, she can correct him. If Bob interpreted correctly, Alice can immediately respond to Bob’s criticism.
There is still overhead; Bob has to spend effort on explaining his interpretation of Alice. But it is considerably less overhead than scenario 1, and it is the minimum amount of overhead that still guarantees avoidance of the double illusion of transparency.
Personally, I favor the scenario 3 approach in cases of only moderate confidence that I’ve correctly understood my interlocutor, and the scenario 2 approach in cases of high confidence that I’ve correctly understood. (In cases of unusually low confidence, one simply asks for clarification, without necessarily putting forth a hypothesized interpretation.)
Scenarios 2 and 3 are undermined, however—their effectiveness and efficiency dramatically lowered—if people take offense at being misinterpreted, and demand that their critics achieve certainty of having correctly understood them, before writing any criticism. If people take any mis-aimed criticism as a personal attack, or lack of “interpretive labor” (in the form of the verification step as a prerequisite to criticism) as a sign of disrespect, then, obviously, scenarios 2 and 3 cannot work.
This constitutes a massive sacrifice of efficiency of communication, and thereby (because the burden of that inefficiency is borne by critics) disincentivizes lively debate, correction of flaws, and the exchange of ideas. What is gained, for that hefty price, is nothing.
After quite a while thinking about it, I’m still not sure I have an adequate response to this comment; I do take your points, which are quite good. I’ll do my best to respond in the post I’m writing on this topic. Perhaps when I post it we can continue the discussion there, if you feel it doesn’t adequately address your points.
Relatedly, is there a way to create a top-level post from greaterwrong? I’ve been looking for a while and haven’t been able to find it if there is.
Indeed there is. You go to the All view or the Meta view, and click the green “+ New post” link at the upper-right, just below the tab bar. (The new-post link currently doesn’t display when viewing your own user page, which is an oversight and should be fixed soon.)
Sounds good, and I am looking forward to reading your post!
Ah, thanks!