Deep learning uses global backprop. One of Hinton’s core freakout moments early last year, about capabilities advancing faster than expected, came when it occurred to him that the reason nobody has been able to beat backprop with brain-like training algorithms may be that backprop is simply better at packing information into weights.
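The contrast here is worth making concrete. Below is a minimal toy sketch (my own illustration, not anything from Hinton) of what "global" means: in backprop, the update to an early-layer weight depends on an error signal carried back through the entire downstream network, whereas a brain-like local rule (Hebbian-style here, as one stand-in) uses only the activity on either side of the synapse. All sizes, names, and the learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer net: x -> h = tanh(W1 @ x) -> y = W2 @ h,
# trained on a single example with squared-error loss.
W1 = rng.normal(scale=0.5, size=(4, 3))
W2 = rng.normal(scale=0.5, size=(2, 4))
x = rng.normal(size=3)
target = rng.normal(size=2)

def loss(W1, W2):
    h = np.tanh(W1 @ x)
    y = W2 @ h
    return 0.5 * np.sum((y - target) ** 2)

h = np.tanh(W1 @ x)
y = W2 @ h
err = y - target  # dL/dy

# Global backprop: W1's update needs the error propagated back
# through W2 -- information from the whole downstream network.
dW2 = np.outer(err, h)
dW1 = np.outer((W2.T @ err) * (1 - h ** 2), x)

# Local Hebbian-style rule: W1's update uses only pre- and
# post-synaptic activity, with no backpropagated error at all.
dW1_local = np.outer(h, x)

lr = 0.01
loss_before = loss(W1, W2)
loss_after = loss(W1 - lr * dW1, W2 - lr * dW2)
```

With exact global gradients, one small step reliably reduces the loss; the local rule has no such guarantee, which is one way of stating the "packs information into weights better" intuition.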
My point being: deep learning may learn faster per operation than humans do, as a result of a distinctly better training algorithm.