Deep learning is currently poorly understood, but this seems more a result of how young the field is than of some inherent mysteriousness of neural networks.
I think “inherent mysteriousness” is also possible. Some complex things are intractable to prove stuff about.