Just a quick comment: don’t use Wikipedia for machine learning topics. Unlike using it for e.g. some math topics, it’s very outdated and full of poorly written articles. Instead, the intro sections of ML papers or review papers that you can find through Google Scholar are usually quite readable.
It has been improved significantly in the past few years, but it does still tend to lag the papers themselves.
I think even that is overstating how useful it is. For example, I think we can all agree that regularization has been a huge and very important topic in ML for years. Here is the Wiki entry: https://en.wikipedia.org/wiki/Regularization_(mathematics)#Other_uses_of_regularization_in_statistics_and_machine_learning. Or interpretability: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence. Things like layer normalization are not even mentioned anywhere. Pretty useless for learning about neural nets.
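For what it's worth, layer normalization itself is simple enough to show in a few lines. Here's a minimal NumPy sketch (the function name and default `eps` are my choices, not from any particular library): each sample is normalized over its feature dimension, then scaled and shifted by learned parameters.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Normalize each sample over its feature (last) axis,
    # then apply a learned scale (gamma) and shift (beta).
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(4, 8)  # batch of 4 vectors, 8 features each
out = layer_norm(x, gamma=np.ones(8), beta=np.zeros(8))
# With gamma=1, beta=0, each row now has mean ~0 and std ~1.
```

The key contrast with batch normalization is the axis: layer norm computes statistics per sample across features, so it behaves identically at train and test time and doesn't depend on batch size.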
yeah, fair enough.