Why is “be specific” a hard skill to teach?
I think it is because being specific is not really the problem, and by labeling it as such we force ourselves into a dead end that contains no solution to the real problem. The real problem is achieving communication. By ‘achieving communication’, I mean that concepts in one mind are reproduced with good fidelity in another. By ‘good fidelity’, I mean that 90% (an arbitrary threshold) of assertions based on my model will be confirmed as true by yours.
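To make that threshold concrete, here is a minimal sketch in Python (the function and its interface are my own illustration, not anything from the post): fidelity is just the fraction of my assertions that you confirm, compared against the 90% cutoff.

```python
def fidelity_ok(confirmed: int, total: int, threshold: float = 0.9) -> bool:
    """True if the share of my model's assertions that you confirm meets the threshold."""
    if total == 0:
        raise ValueError("need at least one assertion to measure fidelity")
    return confirmed / total >= threshold


# e.g. you confirm 18 of my 20 assertions: 18/20 = 0.9, so this counts as good fidelity
assert fidelity_ok(18, 20)
```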
There are many different ways that the fidelity can be low between my model and yours:
specific vs abstract
mismatched entity-relationship semantic models
ambiguous words
vague concepts
Surely there are many more.
Examples of what I mean by these few:
specific vs abstract: dog vs toy chihuahua puppy
model mismatch: A contract lawyer, a reservoir modeler, and a mud-logger are trying to share the concept “well”. Their models of what a “well” is have some attributes with similar names, but different meanings and uses, like “name” or “location”. To the mud-logger, a well is a stack of physical measurements of the drilling mud sampled at different drilling depths. To the lawyer, a well is a feature of land-use contracts, service contracts, etc.
Another kind of model mismatch: I think of two entities as having a “has-a” relationship. A house “has” 0 or 1 garages (detached). But you think of the same two entities using a mixin pattern: a house can have or not have garage attributes (not detached). “I put my car in the house” makes no sense to me, because a car goes in a garage but not in a house, but might make sense to you for a house with a built-in garage. We may go a long time before figuring out that my “house” isn’t precisely the same as yours. (A short code sketch after these examples contrasts the two framings.)
ambiguity: I’m a cowboy, you’re an artist. We are trying to share the concept “draw”. We can’t, because the same word names different concepts for each of us.
vagueness: I say my decision theory “one-boxes”. You have no idea what that means, but you create a place-holder for it in your model. So on some level you feel like you understand, but if you drill down, you can get to a point where something important is not defined well enough to use.
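Here is a small Python sketch of the two house/garage framings above (the class names are mine, purely illustrative): in the first, the garage is a separate entity the house has zero or one of; in the second, garage attributes are mixed into the house itself, so “I put my car in the house” can be literally true.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Garage:
    """A detached garage, modelled as its own entity."""
    capacity: int = 1


@dataclass
class HouseThatHasA:
    """My framing: a house *has* 0 or 1 (detached) garages."""
    address: str
    garage: Optional[Garage] = None  # the car goes in house.garage, never "in the house"


class GarageAttributes:
    """Your framing: 'having a garage' is just extra attributes folded into the house."""
    has_builtin_garage: bool = True
    garage_capacity: int = 1


@dataclass
class House:
    address: str


class HouseWithBuiltinGarage(House, GarageAttributes):
    """Here the garage is part of the house, so 'car in the house' is not a category error."""
```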
It is difficult to know when something that is transparent to you is being misrepresented in my head based on how you explain it to me. “I know you think you understand what you thought I said, but I’m not sure you’re aware that what I said was not what I meant.”
I suggest an exercise/game to train someone to detect and avoid these pitfalls: combine malicious misunderstanding (you tell me to stick the pencil in the sharpener and I insert the eraser end) and fidelity checking.
You make an assertion about your model.
I generate a challenge that is in logical agreement with your assertions, but which I expect will fail to match your actual model. If I succeed, I get a point.
Repeat, until I am unable to create a successful challenge.
The longer it takes you to create an airtight set of assertions, the more points I get.
Then we switch roles.
So I am looking for all the ways your model might be ill-defined, and all the ways your description might be ambiguous or overly abstract. You are trying to cement all of those gaps as parsimoniously as possible.
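Here is one possible reading of the scoring, sketched in Python (the names and data layout are my guesses at the intent; judging whether a challenge matches the real model is of course left to the players):

```python
from dataclasses import dataclass, field


@dataclass
class Round:
    assertion: str        # you add an assertion about your model
    challenge: str        # I propose something consistent with every assertion so far
    matched_model: bool   # did my challenge also fit your actual model?


@dataclass
class Game:
    rounds: list[Round] = field(default_factory=list)

    def challenger_points(self) -> int:
        # One point for each challenge that obeyed your stated assertions but missed
        # your real model. Play continues until I can find no such challenge, so the
        # total grows the longer it takes you to make the assertions airtight.
        return sum(1 for r in self.rounds if not r.matched_model)
```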
I’ve left the hardest part for last: the players need to be supplied with a metaphoric tinkertoy set of model parts. The parts need to support all of the kinds of fidelity-failure we can think of. And the set should be extensible, for when we think of more.
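As a rough sketch of what such a parts set might look like in Python (entirely my own guess at the shape, not a design from the post), each part kind is registered along with the fidelity failures it can exercise, and new parts or failure modes can be added later:

```python
# Known kinds of fidelity failure; the set is meant to grow as we think of more.
FAILURE_MODES = {"specific_vs_abstract", "model_mismatch", "ambiguity", "vagueness"}

# Catalogue mapping each model part to the failure modes it can exercise.
PART_CATALOGUE: dict[str, set[str]] = {}


def register_part(name: str, exercises: set[str]) -> None:
    """Add a part to the catalogue, extending FAILURE_MODES if it names new ones."""
    FAILURE_MODES.update(exercises)
    PART_CATALOGUE[name] = set(exercises)


register_part("entity", {"model_mismatch"})
register_part("relationship", {"model_mismatch", "ambiguity"})
register_part("attribute", {"specific_vs_abstract", "vagueness"})
```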
I suspect the Socratic method (the old one, not the bland one) fits under this heading: “put forth a proposition, and I’ll demolish you with your own statements.”
Sadly, “Communicate well” isn’t quite as simple a skill.