Something like deep learning is likely to produce concepts that are very difficult for humans to understand, while probabilistic programming might produce more transparent models. How easy it is to make transparent AGI (compared to opaque AGI) is an open question.
Maybe I’m biased as an open proponent of probabilistic programming, but I think only the latter can produce AGI at all: the former not only would result in opaque AGI, it basically can’t result in a successful real-world AGI in the first place.
I don’t think you can get away from the need to do hierarchical inference on complex models in Turing-complete domains; in short, something very like certain models expressible in probabilistic programming. A deep neural net is basically just drawing polygons in a hierarchy of feature spaces, hoping your polygons have enough edges to approximate the shape you really mean, but not so many edges that they take random noise in the training data to be part of the shape. Given just the right conditions it can approximate the right thing, but it can’t even describe how to do the right thing in general.
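To make the polygon picture concrete, here’s a toy sketch (the weights are made-up random numbers, not anyone’s real architecture) showing that a one-hidden-layer ReLU net literally partitions the plane into convex polygons and is affine inside each one:

```python
import numpy as np

# A minimal sketch of the "polygons" claim, using made-up weights: a
# one-hidden-layer ReLU net partitions the input plane into convex cells
# (intersections of half-planes) and is purely affine inside each cell.
# Its decision boundary is therefore piecewise linear: polygons whose
# "edge budget" is set by the number of hidden units.

rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 2)), rng.normal(size=8)  # 8 ReLU units over R^2

def activation_pattern(x):
    # Which side of each unit's hyperplane x falls on. Points sharing a
    # pattern form one convex polygon, and the net is affine on each one.
    return tuple((W @ x + b > 0).astype(int))

# Count the polygons the net has drawn over a patch of the plane.
xs = np.linspace(-3, 3, 150)
patterns = {activation_pattern(np.array([a, c])) for a in xs for c in xs}
print(f"{len(patterns)} linear regions ('polygons') in this patch")
```

The number of hidden units is the edge budget: too few and the polygons can’t follow the true shape, too many and they start tracing the noise.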
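For contrast, here’s the kind of thing I mean by hierarchical inference in a probabilistic program. This is a deliberately crude sketch in plain Python using likelihood weighting, not any particular PPL’s API; the model, priors, and numbers are all invented for illustration:

```python
import math, random

def loglik_bernoulli(p, flips):
    # Log-likelihood of a sequence of 0/1 flips under bias p.
    return sum(math.log(p if f else 1 - p) for f in flips)

# Toy data: flips from three coins minted by the same (unknown) process.
data = [[1, 1, 0, 1, 1], [1, 0, 1, 1, 0], [1, 1, 1, 0, 1]]

def run_model():
    # Top level: the mint's overall bias tendency.
    mint_bias = random.betavariate(1, 1)
    # Lower level: each coin's own bias, drawn around the mint's tendency
    # (the 0.1 spread is an arbitrary illustrative choice).
    coin_biases = [min(max(random.gauss(mint_bias, 0.1), 0.01), 0.99)
                   for _ in data]
    logw = sum(loglik_bernoulli(p, flips)
               for p, flips in zip(coin_biases, data))
    return mint_bias, logw

# Likelihood weighting: run the program many times, weight each run by how
# well its latent choices explain the data, then take a weighted average.
samples = [run_model() for _ in range(20000)]
max_lw = max(lw for _, lw in samples)
weights = [math.exp(lw - max_lw) for _, lw in samples]
posterior_mean = sum(m * w for (m, _), w in zip(samples, weights)) / sum(weights)
print(f"posterior mean mint bias ~ {posterior_mean:.3f}")
```

The point is that the model is an ordinary program with latent choices at more than one level, the inference procedure is generic, and anything you can write as such a program is fair game, which is the sense in which this approach applies in Turing-complete domains.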