If we genuinely had no idea of what neural nets were doing, NN research wouldn’t be getting anywhere. But that’s obviously not the case.
More to the point, there’s promising-looking work going on at getting a better understanding of what various NNs actually represent. Deep learning networks might actually have relatively human-comprehensible features on some of their levels (see e.g. the first link).
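As a concrete (if toy) illustration of the "human-comprehensible features" point: the first-layer filters of a pretrained vision network can simply be dumped and looked at, and they tend to resemble edge and color detectors. A minimal sketch, assuming a recent torchvision with the ResNet18_Weights API and matplotlib installed:

```python
import matplotlib.pyplot as plt
from torchvision.models import resnet18, ResNet18_Weights

# Load a pretrained ResNet-18 and grab the weights of its first conv layer.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
filters = model.conv1.weight.detach()          # shape: (64, 3, 7, 7)

# Rescale each filter into [0, 1] so it can be shown as a small RGB patch.
filters = (filters - filters.min()) / (filters.max() - filters.min())

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    ax.imshow(f.permute(1, 2, 0).numpy())      # (C, H, W) -> (H, W, C)
    ax.axis("off")
plt.suptitle("First-layer filters: mostly edge- and color-detector-like patterns")
plt.show()
```

This only scratches the surface (deeper layers need tools like activation maximization or probing), but it shows the lowest level of these models is already not a complete black box.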
Furthermore it’s not clear that any other human-level machine learning model would be any more comprehensible. Worst case, we have something like a billion variables in a million dimensions: good luck trying to understand how that works, regardless of whether it’s a neural network or not.