I think even that is overstating how useful it is. For example, I think we can all agree that regularization has been a huge and very important topic in ML for years. Here is the Wiki entry: https://en.wikipedia.org/wiki/Regularization_(mathematics)#Other_uses_of_regularization_in_statistics_and_machine_learning. Or interpretability: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence . Things like layer normalization are not even mentioned anywhere. Pretty useless for learning about neural nets.
yeah, fair enough.