For what it’s worth, I found the “re-implement backprop” exercise extremely useful for developing a gears-level model of what’s going on under the hood.
Andrej Karpathy’s “A Hacker’s Guide to Neural Networks” is really good, and I think it focuses on building a solid intuitive understanding of what’s going on: https://karpathy.github.io/neuralnets/
I’ve also found Coursera and other MOOCs somewhat watered down in the past, but YMMV.
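To give a flavor of the exercise: below is a minimal sketch of a scalar autograd, in the spirit of Karpathy's guide (the `Value` class and all names here are my own illustration, not code from the guide). Each operation records how to pass gradients to its inputs, and `backward()` applies the chain rule in reverse topological order.

```python
class Value:
    """A scalar that tracks its computation graph for backprop."""

    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._children = children
        self._backward = lambda: None  # how to push grad to children

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad        # d(a+b)/da = 1
            other.grad += out.grad       # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad   # d(a*b)/da = b
            other.grad += self.data * out.grad   # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    visit(c)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# f(a, b) = a*b + a, so df/da = b + 1 and df/db = a
a, b = Value(2.0), Value(-3.0)
f = a * b + a
f.backward()
print(a.grad, b.grad)  # -2.0 2.0
```

Getting the `+=` accumulation right (for nodes used more than once, like `a` above) is exactly the kind of detail that the exercise forces you to confront.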