Aren’t the symbols grounded by human engineering? Humans use those particular boxes/tokens to represent particular concepts, and they can define a way in which the concepts map to the inputs to the system.
I’m not sure “grounding is similar” is a reasonable claim to make when comparing a model that is fully human-engineered (e.g. decision trees and causal models) with one that is dynamically derived (e.g. artificial neural networks).
This points to a natural extension of my argument from the post, which sounds a bit troll-y but I think is actually basically true: large human-written computer programs are also uninterpretable for the same reasons given in the OP. In practice, variables in a program do diverge from what they supposedly represent, and plenty of common real-world bugs can be viewed this way.
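To make the “variables diverge from what they supposedly represent” point concrete, here is a minimal hypothetical sketch (the function and data are invented for illustration, not taken from the post): the name promises one concept, but a small bug means the value the code actually computes is no longer that concept.

```python
# Hypothetical illustration: the function name promises one concept,
# but the value it returns quietly diverges from that concept.

def average_order_value(orders):
    """Supposedly returns the mean value of non-cancelled orders."""
    total = 0.0
    count = 0
    for order in orders:
        total += order["value"]          # bug: cancelled orders are still summed,
        if not order["cancelled"]:       # while only the *count* excludes them,
            count += 1                   # so `total / count` no longer matches
    return total / count                 # what the name claims to compute.

orders = [
    {"value": 10.0, "cancelled": False},
    {"value": 90.0, "cancelled": True},
]
# A reader trusting the name expects 10.0; the code returns 100.0.
print(average_order_value(orders))
```

The interpretability problem here is exactly the one in the OP: the symbol (“average order value”) and the thing it actually tracks have come apart, and nothing in the program flags the mismatch.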
The problem is of course much more severe, controlling for system size, when the system is dynamically generated.
If you say this, then I don’t think you can claim that deep learning is no less interpretable than probabilistic/logical systems. I don’t think anyone was making the claim that the latter systems were perfectly/fully interpretable.
Divergence as a result of human error seems less challenging to rectify than divergence as a result of a completely opaque labelling process.
Well, we have to do apples-to-apples comparisons. Logic/probability models can obviously solve any problems deep learning can (worst-case one can implement a deep learner in the logic system), but not if a human is hand-coding the whole structure. Dynamic generation is a necessary piece for logic/probability to solve the same problems deep learning methods solve.
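As a minimal sketch of that “worst case” direction (my own illustration; the build_network_graph/evaluate helpers are invented for the example): a tiny feed-forward network can be compiled into the kind of explicit node-and-parents graph that logic/probability models are built from, but the graph is mechanically generated from the weights rather than hand-written, which is exactly the dynamic-generation step being pointed at.

```python
# Sketch: express a small neural network as an explicit graph of named,
# deterministic nodes (node -> (parents, function)), the sort of structure
# a logic/probabilistic model is made of. The graph is *generated* from
# the weights, not hand-coded node by node.

import math
import random

def build_network_graph(weights):
    """Compile layer weights into a dict of node -> (parent names, function)."""
    graph = {}
    prev = [f"x{i}" for i in range(len(weights[0][0]))]  # input nodes
    for layer_idx, layer in enumerate(weights):
        current = []
        for unit_idx, unit_weights in enumerate(layer):
            name = f"h{layer_idx}_{unit_idx}"
            graph[name] = (
                list(prev),
                lambda vals, w=unit_weights: math.tanh(
                    sum(v * wi for v, wi in zip(vals, w))
                ),
            )
            current.append(name)
        prev = current
    return graph, prev  # prev now holds the output node names

def evaluate(graph, outputs, inputs):
    """Evaluate output nodes by recursively resolving each node from its parents."""
    values = dict(inputs)
    def resolve(node):
        if node not in values:
            parents, fn = graph[node]
            values[node] = fn([resolve(p) for p in parents])
        return values[node]
    return [resolve(o) for o in outputs]

random.seed(0)
# Weights are generated, not hand-specified: 3 inputs -> 4 hidden -> 1 output.
weights = [
    [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)],
    [[random.uniform(-1, 1) for _ in range(4)] for _ in range(1)],
]
graph, outputs = build_network_graph(weights)
print(evaluate(graph, outputs, {"x0": 0.5, "x1": -1.0, "x2": 2.0}))
```

Every node here has a name and a well-defined parent set, just like a hand-built causal model, yet no human chose what any hidden node means; that is the sense in which dynamic generation, rather than the modelling formalism, is where the interpretability cost comes in.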