Re: 1—“Forked codebases that have a lot in common but are somewhat tricky to merge” seems like a pretty good metaphor to me.
The question I’d like to answer, which is near your questions, is: “What is the minimal patch/bridge that will let us use all of both codebases without running into merge conflicts?”
We do have a candidate answer to this question, which we’ve been trying out at AIRCS to reasonable effect. Our candidate answer is something like: an explicit distinction between “tacit knowledge” (inarticulate hunches, early-stage research intuitions, the stuff people access and see in one another while circling, etc.) and the “explicit” (“knowledge” worthy of the name, as in the LW codebase—the thing I believe Ben Pace is mainly gesturing at in his comment above).
Here’s how we explain it at AIRCS:
By “explicit” knowledge, we mean visible-to-conscious-consideration denotative claims that are piecewise-checkable and can be passed explicitly between humans using language.
Example: the claim “Amy knows how to ride a bicycle” is explicit.
By “tacit” knowledge, we mean stuff that allows you to usefully navigate the world (and so contains implicit information about the world, and can be evaluated observationally, by watching how well people navigate the relevant parts of the world when they have it), but that is not made of explicit denotations that can be fully passed verbally between humans.
Example: however the heck Amy actually manages to ride the bicycle (the opaque signals she sends to her muscles, etc.) is in her own tacit knowledge. We can know explicitly “Amy has sufficient tacit knowledge to balance on a bicycle,” but we cannot explicitly track how she balances, and Amy cannot hand her bicycle-balancing ability to Bob via speech (although speech may help). Relatedly, Amy can’t check the individual pieces of her (opaque) motor patterns to figure out which ones are the principles by which she successfully stays up and which are counterproductive superstition.
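To make the difference in “interfaces” vivid for programmers, here is a minimal sketch (toy code of my own; the class names and the 0.95 number are illustrative, not anything we actually use at AIRCS):

```python
# Toy model: explicit knowledge is a statement you can hand over and check
# piece by piece; tacit knowledge is an opaque policy you can only evaluate
# by watching how well it navigates the world.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ExplicitClaim:
    statement: str             # can be passed verbatim between humans
    check: Callable[[], bool]  # piecewise-checkable


class TacitSkill:
    """Opaque know-how: no statement to hand over, no pieces to inspect."""

    def __init__(self, policy: Callable[[str], float]):
        self._policy = policy  # e.g. Amy's motor patterns; not introspectable

    def observed_performance(self, situation: str) -> float:
        # From the outside, all we can do is watch how well it performs.
        return self._policy(situation)


# We can state *explicitly* that Amy has the tacit skill...
amy_rides = ExplicitClaim("Amy knows how to ride a bicycle", check=lambda: True)
# ...but the skill itself can only be evaluated by observation.
amy_balance = TacitSkill(policy=lambda situation: 0.95)

print(amy_rides.statement, amy_rides.check())
print(amy_balance.observed_performance("gravel downhill"))
```

Note the asymmetry: Amy cannot hand `amy_balance._policy` to Bob, and cannot inspect its pieces herself; she (and we) can only score its observed performance.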
I’ll give a few more examples to anchor the concepts:
In mathematics:
Explicit: which things have been proven; which proofs are valid.
Tacit: which heuristics may be useful for finding proofs; which theorems are interesting/important. (Some such heuristics can be stated explicitly, but I wouldn’t call those statements “knowledge.” I can’t verify that they’re right in the way I can verify “Amy can ride a bike” or “2+3=5.”)
In science:
Explicit: specific findings of science, such as “if you take a given amount of hydrogen at a fixed temperature and halve its volume, you double its pressure” (see the worked equation after these examples). The “experiment” and “conclusion” steps of the scientific method.
Tacit: which hypotheses are worth testing.
In Paul Graham-style startups:
Explicit: what metrics one is hitting, once one achieves an MVP.
Tacit: the way Graham’s small teams of cofounders are supposed to locate their MVP. (In calling this “tacit,” I don’t mean you can’t communicate any of this verbally. Of course they use words. But the way they use words is made of ad hoc, spaghetti-code attempts to pass gut intuitions back and forth within a small set of people who know each other well. It is quite different from the scalable processes of explicit science/knowledge that can compile across large sets of people and long periods of time. This is why Graham claims that co-founder teams should have 2-4 people, and that hiring e.g. 10 people into a pre-MVP startup won’t scale well.)
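As a quick check of the gas-law claim above (my addition; holding temperature fixed is what makes it Boyle’s law, and the rest is arithmetic):

```latex
% Boyle's law, at fixed temperature: P_1 V_1 = P_2 V_2.
% Halving the volume, V_2 = V_1 / 2, doubles the pressure:
\[
  P_2 = \frac{P_1 V_1}{V_2} = \frac{P_1 V_1}{V_1 / 2} = 2 P_1 .
\]
```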
In the context of the AIRCS workshop, we share “The Tacit and the Explicit” in order to avoid two different kinds of errors:
People taking “I know it in my gut” as zero-value, and attempting to live via the explicit only. My sense is that some LessWrong users like Said_Achmiz tend to err in this direction. (This error can be fatal to early-stage research, and to one’s ability to discuss ordinary life/relationship/productivity “bugs” and solutions, and many other mundanely useful topics.)
People taking “I know it in my gut” as vetted knowledge, and attempting to build on gut feelings in the manner of knowledge. (This error can be fatal to global epistemology: “but I just feel that religion is true / the future can’t be that weird / whatever”).
We find ourselves needing to fix both those errors in order to allow people to attempt grounded original thinking about AI safety. They need to be able to have intuitions, and take those intuitions seriously enough to develop them / test them / let them breathe, without mistaking those intuitions for knowledge.
So, at the AIRCS workshop, we introduce the explicit (which is a big part of what I take Ben Pace to be gesturing at above actually) at the same time that we introduce the tacit (which is the thing that Ben Pace describes benefiting from at CFAR IMO). And we introduce a framework to try to keep them separate so that learning cognitive processes that help with the tacit will not accidentally mess with folks’ explicit, nor vice versa. (We’ve been introducing this framework at AIRCS for about a year, and I do think it’s been helpful. I think it’s getting to the point where we could try writing it up for LW—i.e., putting the framework more fully into the explicit.)
People taking “I know it in my gut” as zero-value, and attempting to live via the explicit only. My sense is that some LessWrong users like Said_Achmiz tend to err in this direction.

This is not an accurate portrayal of my views.

I’d be particularly interested, in this context, if you are up for clarifying what your views are here.
I’d be happy to, except that I’m not sure quite what I need to clarify.
I mean, it’s just not true that I consider “tacit” knowledge (which may or may not be the same thing as procedural knowledge—but either way…) to be “zero-value”. That isn’t a thing that I believe, nor is it adjacent to some similar thing that I believe, nor is it a recognizable distortion of some different thing that I believe.
For instance, I’m a designer, and I am quite familiar with looking at a design, or design element, and declaring that it is just wrong, or that it looks right this way and not that way; or making something look a certain way because that’s what looks good and right; etc., etc. Could I explicitly explain the precise and specific reason for every detail of every design decision I make? Of course not; it’s absurd even to suggest it. There is such a thing as “good taste”, “design sense”, etc. You know quite well, I’m sure, what I am talking about.
So when someone says that I attempt to live via the explicit only, and take other sorts of knowledge as having zero value—what am I to say to that? It isn’t true, and obviously so. Perhaps Anna could say a bit about what led her to this conclusion about my views. I am happy to comment further; but as it stands, I am at a loss.

For what it’s worth, I think that saying “Person X tends to err in Y direction” does not mean “Person X endorses or believes Y”.
If what Anna meant was “Said undervalues ‘gut’ knowledge, relative to explicit knowledge”… well, that is, of course, not an obviously false or absurd claim; but what she wrote is an odd way of saying it. I have reread the relevant section of Anna’s comment several times, and it is difficult to read it as simply a note that certain people (such as, ostensibly, myself) are merely at somewhat the wrong point along a continuum of relative value placed on this vs. that form of knowledge; it is too banal and straightforward a point to need to be phrased the way Anna phrased it.
But then, this is getting too speculative to be useful. Perhaps Anna can clarify what she meant.
If it helps for your own calibration of how you come across, there was a thread a while back where I expressed indignation at the phrase “Overcoming intuitions” and you emphatically agreed.
I remember being surprised that you agreed, and having to update my model of your beliefs.

Can you think of an example of something I said that led you to that previous, pre-update model?

I can’t, but here’s an example from this same thread:

https://www.lesswrong.com/posts/96N8BT9tJvybLbn5z/we-run-the-center-for-applied-rationality-ama#HgQCE8aHctKjYEWHP
In this comment, you explicitly understood and agreed with the material that was teaching explicit knowledge (philosophy), but objected to the material designed to teach intuitions (circling).
Surely you can see how this does not at all imply that I object to intuition, yes? Logically, after all, there are at least three other possibilities:
That I don’t believe that intuitions can be taught; or…
That I don’t believe that this particular approach (circling) is good for teaching intuitions; or…
That I object to circling for reasons unrelated to the (purported) fact that it teaches intuitions.
(There are other, subtler, possibilities; but these three are the obvious ones.)
The conclusion that I have something against intuitions, drawn from the observation that I am skeptical of circling in particular (or any similar thing), seems to me to be really quite unwarranted.
Yes. If you’re wondering, I basically updated more towards #1.
I wouldn’t call the conclusion unwarranted, by the way; it’s a perfectly valid interpretation of seeing this sort of stance from you. It was simply uninformed.
How does your “tacit vs. explicit” dichotomy relate to the “procedural vs. declarative” dichotomy? Are they identical? (If so, why the novel terminology?) Are they totally orthogonal? Some other relationship?

Explicit vs. tacit knowledge isn’t a CFAR concept, and is pretty well established in the literature. Here’s an example:

https://www.basicknowledge101.com/pdf/km/KM_roles.pdf
Some notes, for my own edification and that of anyone else curious about all this terminology and the concepts behind it.
Some searching turns up an article by one Fred Nickols, titled “The Knowledge in Knowledge Management” [PDF]. (As far as I can tell, “knowledge management” seems to be a field or topic of study that originates in the world of business consulting; and Fred Nickols is a former executive at a consulting firm of some sort.)
Nickols offers the following definitions:
Explicit knowledge, as the first word in the term implies, is knowledge that has been articulated and, more often than not, captured in the form of text, tables, diagrams, product specifications and so on. … An example of explicit knowledge with which we are all familiar is the formula for finding the area of a rectangle (i.e., length times width). Other examples of explicit knowledge include documented best practices, the formalized standards by which an insurance claim is adjudicated and the official expectations for performance set forth in written work objectives.
Tacit knowledge is knowledge that cannot be articulated. As Michael Polanyi (1997), the chemist-turned-philosopher who coined the term put it, “We know more than we can tell.” Polanyi used the example of being able to recognize a person’s face but being only vaguely able to describe how that is done.
Knowledge that can be articulated but hasn’t is implicit knowledge. … This is the kind of knowledge that can often be teased out of a competent performer by a task analyst, knowledge engineer or other person skilled in identifying the kind of knowledge that can be articulated but hasn’t.
The explicit, implicit, tacit categories of knowledge are not the only ones in use. Cognitive psychologists sort knowledge into two categories: declarative and procedural. Some add strategic as a third category.
Declarative knowledge has much in common with explicit knowledge in that declarative knowledge consists of descriptions of facts and things or of methods and procedures. … For most practical purposes, declarative knowledge and explicit knowledge may be treated as synonyms. This is because all declarative knowledge is explicit knowledge, that is, it is knowledge that can be and has been articulated.
[Procedural knowledge] is an area where important differences of opinion exist.
One view of procedural knowledge is that it is knowledge that manifests itself in the doing of something. As such it is reflected in motor or manual skills and in cognitive or mental skills. We think, we reason, we decide, we dance, we play the piano, we ride bicycles, we read customers’ faces and moods (and our bosses’ as well), yet we cannot reduce to mere words that which we obviously know or know how to do. Attempts to do so are often recognized as little more than after-the-fact rationalizations. …
Another view of procedural knowledge is that it is knowledge about how to do something. This view of procedural knowledge accepts a description of the steps of a task or procedure as procedural knowledge. The obvious shortcoming of this view is that it is no different from declarative knowledge except that tasks or methods are being described instead of facts or things.
Pending the resolution of this disparity, we are left to resolve this for ourselves. On my part, I have chosen to acknowledge that some people refer to descriptions of tasks, methods and procedures as declarative knowledge and others refer to them as procedural knowledge. For my own purposes, however, I choose to classify all descriptions of knowledge as declarative and reserve procedural for application to situations in which the knowing may be said to be in the doing. Indeed, as the diagram in Figure 2 shows, declarative knowledge ties to “describing” and procedural knowledge ties to “doing.” Thus, for my purposes, I am able to comfortably view all procedural knowledge as tacit just as all declarative knowledge is explicit.
Some reading this will immediately say, “Whoa there. If all procedural knowledge is tacit, that means we can’t articulate it. In turn, that means we can’t make it explicit, that is, we can’t articulate and capture it in the form of books, tables, diagrams and so on.” That is exactly what I mean. When we describe a task, step by step, or when we draw a flowchart representing a process, these are representations. Describing what we do or how we do it yields declarative knowledge. A description of an act is not the act just as the map is not the territory.
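For concreteness, here is Nickols’ three-way scheme encoded as a tiny decision procedure (my own toy summary of the quoted definitions; the function name is mine, not the paper’s):

```python
# Nickols' taxonomy: "explicit" = has been articulated; "implicit" = could be
# articulated but hasn't been; "tacit" = cannot be articulated at all.

def nickols_category(articulated: bool, articulable: bool) -> str:
    """Classify a piece of knowledge per Nickols' explicit/implicit/tacit scheme."""
    if articulated:
        return "explicit"  # captured in text, tables, diagrams, etc.
    if articulable:
        return "implicit"  # could be teased out by a task analyst
    return "tacit"         # the knowing is in the doing; it cannot be told


assert nickols_category(articulated=True, articulable=True) == "explicit"
assert nickols_category(articulated=False, articulable=True) == "implicit"
assert nickols_category(articulated=False, articulable=False) == "tacit"
```

On Nickols’ own usage, declarative knowledge then maps onto “explicit” (describing) and procedural knowledge onto “tacit” (doing).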
Thanks! (I’m assuming you made the diagrams?)

Oh, no. The diagrams are taken from the paper; they’re in the PDF I linked.
EDIT: Which paper is, by the way, quite worth reading; it’s written in an exceptionally clear and straightforward way, and gets right to the heart of all relevant matters. I was very impressed, truth be told. I could’ve usefully quoted much more, but then I’d just be pasting the whole paper (which, in addition to its other virtues, is mercifully short).
Huh. I thought I’d skimmed the paper and didn’t see diagrams, but I guess I somehow missed them.