The problem is of course much more severe, controlling for system size, when the system is dynamically generated.
If you say this, then I don’t think you can claim that deep learning is no less interpretable than probabilistic/logical systems. I don’t think anyone was claiming that the latter systems were perfectly/fully interpretable.
Divergence as a result of human error seems less challenging to rectify than divergence as a result of a completely opaque labelling process.
Well, we have to make apples-to-apples comparisons. Logic/probability models can obviously solve any problem deep learning can (worst case, one can implement a deep learner in the logic system), but not if a human is hand-coding the whole structure. Dynamic generation is a necessary piece for logic/probability to solve the same problems deep learning methods solve.