Yes, but only because “ANN” is enormously broad (essentially the space of tensor/linear-algebra programs), and so it basically includes all possible routes to AGI (all possible approximations of Bayesian inference).
“Enormously broad” is just another way of saying “not very useful.” We don’t even know in what sense (if any) the “deep networks” used in practice may be said to approximate Bayesian inference; the best we can do, AIUI, is make up a hand-wavy story about how they must be some “hierarchical” variation on single-layer networks, i.e. generalized linear models.
Specifically, I meant approximate Bayesian inference over the tensor-program space to learn the ANN, not that the ANN itself needs to implement Bayesian inference (although ANNs will naturally tend to learn that, as we see in all the evidence for various Bayesian operations in the brain).
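To make the “approximate Bayesian inference over parameters” reading concrete, here is a minimal sketch (not from the thread; all names and hyperparameters are illustrative assumptions) of one well-known instance: stochastic gradient Langevin dynamics (SGLD, Welling & Teh 2011), where noisy SGD steps approximately sample from the posterior over a model’s weights rather than just optimizing them.

```python
# Minimal SGLD sketch on a toy 1-D linear model: noisy minibatch gradient
# steps on the log posterior approximately sample from p(w | data).
# Illustrative assumption, not the commenter's specific proposal.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise
N = 200
x = rng.normal(size=N)
y = 2.0 * x + 0.5 * rng.normal(size=N)

def grad_log_posterior(w, xb, yb, sigma2=0.25, prior_var=10.0):
    """Gradient of the log posterior for a Gaussian likelihood and Gaussian
    prior, with the minibatch gradient rescaled to stand in for the full
    dataset."""
    resid = yb - w * xb
    grad_lik = (N / len(xb)) * np.sum(resid * xb) / sigma2
    grad_prior = -w / prior_var
    return grad_lik + grad_prior

w, eps, batch = 0.0, 1e-4, 32
samples = []
for t in range(5000):
    idx = rng.choice(N, batch, replace=False)
    g = grad_log_posterior(w, x[idx], y[idx])
    # Langevin step: half a gradient step plus Gaussian noise of matched scale.
    w += 0.5 * eps * g + np.sqrt(eps) * rng.normal()
    if t > 1000:  # discard burn-in
        samples.append(w)

print(f"posterior mean ~ {np.mean(samples):.3f}, sd ~ {np.std(samples):.3f}")
```

The point of the sketch is just that the same gradient machinery used to train ANNs becomes an approximate posterior sampler once you inject appropriately scaled noise, which is one precise sense in which “learning the ANN” can be approximate Bayesian inference over the program (weight) space.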