“Infer the structure from the data” still implies that the NN has some internal representation of knowledge. Whether the structure is initialized or learned isn’t necessarily central to the question—what matters is that there is some structure, and we want to know how to represent that structure in an intelligible manner. The interesting question is then: are the structures used by “knowledge representation” researchers isomorphic to the structures learned by humans and/or NNs?
I haven’t read much on KR, but my passing impression is that the structures they use do not correspond very well to the structures actually used internally by humans/NNs. That would be my guess as to why KR tools aren’t used more widely.
On the other hand, there are representations of certain kinds of knowledge which do seem very similar to the way humans represent knowledge—causal graphs/Bayes nets are an example which jumps to mind. And those have seen pretty wide adoption.
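To make the Bayes-net point concrete, here is a minimal sketch of how such a structure can be written down in a way that is both machine-usable and human-readable. The example is the classic rain/sprinkler/wet-grass network; the variable names and all probability values are illustrative, not taken from any particular source:

```python
# A tiny causal graph: Rain -> WetGrass <- Sprinkler.
# The knowledge lives in the graph structure plus local conditional tables,
# which is arguably why humans find this representation legible.

P_rain = {True: 0.2, False: 0.8}          # prior P(Rain)
P_sprinkler = {True: 0.1, False: 0.9}     # prior P(Sprinkler)
# P(WetGrass=True | Rain, Sprinkler)
P_wet_given = {
    (True, True): 0.99,
    (True, False): 0.90,
    (False, True): 0.80,
    (False, False): 0.00,
}

def joint(rain: bool, sprinkler: bool, wet: bool) -> float:
    """Joint probability, factored along the graph's edges."""
    p_wet = P_wet_given[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_wet if wet else 1 - p_wet)

def p_rain_given_wet() -> float:
    """Infer P(Rain=True | WetGrass=True) by brute-force enumeration."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
    return num / den
```

The factorization mirrors the causal story (rain and sprinkler independently cause wet grass), so a human can read the knowledge straight off the structure — which is the property the comment is pointing at.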