You’re assuming that the explainer knows enough about explaining to try to identify the nepocu and to solicit feedback about which concepts are confusing.
Because it’s a good assumption. Explaining is nothing but tracing out your own internal model’s inferential relationships between the concepts. The only bar to this would be not knowing it. So I don’t see what kind of “explaining skill” there is that goes above and beyond that.
Soliciting feedback, for its part, is but a matter of asking, “do you understand [link in my ontology]?” and/or watching and listening for when they say they don’t understand.
(And I take it you don’t find the term “nepocu” to be particularly annoying?)
No, I think Cyan is right. Have you read Eliezer’s “A Technical Explanation of Technical Explanation”? You may wish to write “A Lay Explanation of Lay Explanation.” I would certainly read and probably vote up such an article.
I don’t see, though, how I’m describing a different kind of explanation, or a distinctly lay one. The explanation standards I’m giving are the same ones you would need to meet for a technical explanation as well, in the case where your listener starts from a point of less knowledge about your field (i.e., a far nepocu).
The technical explanation only differs in terms of its greater detail (afaict—you may mean something else); it doesn’t change in type.
Because it’s a good assumption...The only bar to this would be not knowing it.
This is exactly the point under dispute. I’m open to evidence on this point, but you’ll have to do better than flat assertion. As a skilled explainer, you may not be aware of some things you do automatically that do not come naturally to less skilled explainers—even those who are competent within their domains of expertise.
I am indifferent to the term “nepocu”. Obviously some kind of abbreviation is necessary.
Are you sure you’re not overparsing me there? The part you truncated is the key point, that to explain, you need only trace out your internal ontology. For my position to be wrong, it would have to be possible for someone both to actually know the connection to the nepocu and to be unable to articulate that inferential connection.
There are certainly people who have a “Chinese room” understanding, allowing them to deftly match inputs with the right outputs, and thus meet the standard “expert” threshold. But this is only a level 1 understanding.
I do appreciate your input, though, about what I should include.
It’s entirely possible.

The part you truncated is the key point, that to explain, you need only trace out your internal ontology. [emphasis added]
I suppose I’d dispute that, then. It seems to me that to explain skillfully, you need to have not just a grasp of your internal ontology, but also a reasonably accurate map of your conversant’s internal ontology.
One could, in their own head, recognize far-reaching inferential flows between their field of expertise and the rest of their knowledge, and yet fail to recognize that the task of explaining essentially lies in seeking the nepocu and going from there. Level 2 understanding is a property of one individual’s internal ontology; seeking the nepocu is in the same class as understanding the typical mind fallacy and the problem of expecting short inferential distances, these being concerned with the relationship between two distinct internal ontologies.
But it seems premature to go on with this discussion until you’ve made the post. I’m happy to continue if you want to (there’s no shortage of electrons, after all), but if the post is near completion, it probably makes more sense to wait until it’s done.
Okay, point taken. In any case, it would be hard for me to simultaneously claim that understanding necessarily enables you to explain, and that I have advice that would enable you to explain if you only have an understanding.
On the other hand, the advice I’m giving is derided as “obvious”, but, if it’s so obvious, why aren’t people following it?
It seems to me that to explain skillfully, you need to have not just a grasp of your internal ontology, but also a reasonably accurate map of your conversant’s internal ontology. … seeking the nepocu is in the same class as understanding the typical mind fallacy and the problem of expecting short inferential distances, these being concerned with the relationship between two distinct internal ontologies.
But someone doesn’t really need to recognize the difference between their own internal ontology and someone else’s. In the worst case, they can just abandon attempts to link to the listener’s ontology, and “overwrite” with their own, and this would be the obvious next step. In my (admittedly biased) opinion, the reason people don’t take this route is not because this would take too long, but because the domain knowledge isn’t even well-connected to the rest of their own internal ontology.
(Also, this is distinct from the “expecting short inferential distances” problem in that people don’t simply expect the distance to be short, but wouldn’t know what to do even if they knew it were very long.)
But it seems premature to go on with this discussion until you’ve made the post. I’m happy to continue if you want to (there’s no shortage of electrons, after all), but if the post is near completion, it probably makes more sense to wait until it’s done.
I still think advice would be helpful at this stage. I’ll send you what I have so far, up to the understanding / nepocu points.
Explaining is nothing but tracing out your own internal model’s inferential relationships between the concepts.
I disagree. Your internal model cannot be copied into anyone else’s head just by expounding it. To explain something successfully—that is, to get someone else to understand something—you have to take account of the state of the person you are explaining it to. An explanation that one person finds a model of clarity, another may find tedious and confusing. (I have seen both reactions to Eliezer’s article on Bayes’ theorem.)
When I am assisting students in a computer laboratory, and a student indicates they have a problem, the question I ask myself when I listen to them is “what information does this student need, and not have?” That is what I seek to provide, not a dump of my own thought processes around the subject.
I generally get favourable feedback, so I think I’m onto something here.
As a general rule, explanations share this property with software: until you have tried it and seen it work, you do not know that it works.
I agree with and practice all of that, so I was oversimplifying with the part you quoted. I should probably have said something more like,
“Explaining starts from tracing out your internal model’s inferential relationship between the concepts, and proceeds by finding how it can connect to—and if necessary, correct—the listener’s ontology.”
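If it helps to make that “trace your own ontology, then connect it to the listener’s” picture concrete, here is a minimal toy sketch. It is purely illustrative and nothing from the thread: the concept names, the graph structure, and the find_path_to_nepocu helper are all assumptions invented for this example. It models each person’s ontology as a graph of concepts joined by inferential links, treats the nepocu as the nearest concept the listener already holds, and returns the chain of concepts an explanation would have to walk from there up to the target.

```python
from collections import deque

# Toy model (illustrative only): each person's ontology is a graph whose nodes
# are concepts and whose edges are inferential links ("if you grasp A, B is one
# step away"). All names here are made up for the example.
explainer = {
    "bayes_theorem": ["conditional_probability"],
    "conditional_probability": ["probability"],
    "probability": ["fractions"],
    "fractions": [],
}
listener_known = {"fractions", "probability"}  # concepts the listener already has

def find_path_to_nepocu(ontology, target, shared):
    """Breadth-first search from the target concept back to the nearest
    concept the listener already holds (a candidate 'nepocu').
    Returns the chain of concepts the explanation must walk, or None."""
    queue = deque([[target]])
    seen = {target}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in shared:
            # Explain from the shared concept forward to the target.
            return list(reversed(path))
        for neighbour in ontology.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no connection found in the explainer's own ontology

print(find_path_to_nepocu(explainer, "bayes_theorem", listener_known))
# ['probability', 'conditional_probability', 'bayes_theorem']
```

When the function returns None, that corresponds to the “worst case” mentioned above: there is no chain to link to, so the explainer has to build up (or “overwrite” with) the needed ontology before any explanation can connect.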