I think there’s no need to think of “training/learning” algorithms as absolutely distinct from “principled” algorithms. It’s just that the understanding of why deep learning works is a little weak, so we don’t know how to view it in a principled way.
It sounds like you’re saying, “deep learning itself is actually approximating some more ideal process.” (I have no comments on that, but I find it interesting to think about what that process would be, and what its safety-relevant properties would be)