Third hypothesis: knowledge representation isn’t actually a good paradigm for either human or machine learning. Neural networks don’t have to be initialized with a structure; they infer the structure from the data, just like humans do.
“Infer the structure from the data” still implies that the NN has some internal representation of knowledge. Whether the structure is initialized or learned isn’t necessarily central to the question—what matters is that there is some structure, and we want to know how to represent that structure in an intelligible manner. The interesting question is then: are the structures used by “knowledge representation” researchers isomorphic to the structures learned by humans and/or NNs?
I haven’t read much on KR, but my passing impression is that the structures they use do not correspond very well to the structures actually used internally by humans/NNs. That would be my guess as to why KR tools aren’t used more widely.
On the other hand, there are representations of certain kinds of knowledge which do seem very similar to the way humans represent knowledge—causal graphs/Bayes nets are an example which jumps to mind. And those have seen pretty wide adoption.
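To make that concrete, here is a minimal sketch (a toy example of my own, not taken from any particular KR toolkit) of why a causal graph/Bayes net reads like a human-intelligible representation: the edges state the causal claims directly, and the conditional probability tables quantify them. The rain/sprinkler/wet-grass setup and all the numbers are made up purely for illustration.

```python
# Toy Bayes net: Rain -> WetGrass <- Sprinkler.
# Each edge is an explicit causal claim; each table is a quantified belief.

from itertools import product

# Causal structure: each variable maps to its list of parents.
parents = {
    "Rain": [],
    "Sprinkler": [],
    "WetGrass": ["Rain", "Sprinkler"],
}

# Conditional probability tables: P(variable = True | parent assignment).
# Keys are tuples of parent values, in the order listed in `parents`.
cpt = {
    "Rain": {(): 0.2},
    "Sprinkler": {(): 0.1},
    "WetGrass": {
        (True, True): 0.99,
        (True, False): 0.9,
        (False, True): 0.8,
        (False, False): 0.0,
    },
}

def prob(var, value, assignment):
    """P(var = value | values of var's parents in `assignment`)."""
    key = tuple(assignment[p] for p in parents[var])
    p_true = cpt[var][key]
    return p_true if value else 1.0 - p_true

def joint(assignment):
    """Joint probability of a full assignment, factored along the graph."""
    result = 1.0
    for var, value in assignment.items():
        result *= prob(var, value, assignment)
    return result

def marginal(var, value):
    """Marginal P(var = value) by brute-force summation over all assignments."""
    names = list(parents)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        assignment = dict(zip(names, values))
        if assignment[var] == value:
            total += joint(assignment)
    return total

if __name__ == "__main__":
    print("P(WetGrass = True) =", round(marginal("WetGrass", True), 4))
```

The point isn’t the inference machinery (which is brute force here); it’s that the `parents` and `cpt` structures are the kind of explicit, inspectable knowledge that a person can read, argue about, and edit, which is plausibly why this particular representation caught on.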
Good hypothesis. Here is why I don’t think it’s likely to be true.
It seems to me that when humans make explicit arguments with written language, we are doing a natural language form of knowledge representation. In science and philosophy, the process of making conceptual models explicit is very useful for theory formulation and evaluation. That is, in conceptual domains, human thinkers don’t learn like today’s neural nets: we don’t just immerse ourselves in a sea of raw numbers and absorb the correlations. We might do something like that at the perceptual level, but in scientific and philosophical thought we are able to abstract over experience and explicitly formulate hypotheses, theories, and arguments. We name patterns to form concepts, and then we reason about those concepts. We make arguments to contextualize and interpret the significance of observations.
All of these operations of human thinking involve a natural language version of knowledge representation. But natural language is imprecise and it doesn’t scale well; it is transmitted through books and articles that pile up as information silos. I’m not saying we can or should eliminate natural language from intellectual inquiry; it will always have a role. My question is why we haven’t supplemented it with a formal knowledge representation system designed for human thinkers.