There is an interesting angle to this—I think it maps to the difference between (traditional) statistics and data science.
In traditional stats you are used to small, parsimonious models. In these small models each coefficient, each part of the model, is separable in a way: it is meaningful and interpretable by itself. The big thing to avoid is overfitting.
In data science (and/or ML) a lot of models are of the sprawling black-box kind where coefficients are not separable and make no sense outside of the context of the whole model. These models aren’t traditionally parsimonious either. Also, because many usual metrics scale badly to large datasets, overfitting has to be managed differently.
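To make that contrast concrete, here is a minimal sketch (Python with numpy and scikit-learn; the synthetic data and the model choices are mine, purely for illustration):

    # Illustrative only: a small interpretable model vs. a black box.
    # Assumes numpy and scikit-learn; the data is synthetic.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=500)

    # Parsimonious model: each coefficient is meaningful on its own
    # (the estimate for feature 0 should come out near 2.0).
    lin = LinearRegression().fit(X, y)
    print("coefficients:", lin.coef_)

    # Black box: hundreds of trees, no single coefficient to read off;
    # you can only interrogate the fitted model as a whole.
    gbm = GradientBoostingRegressor(n_estimators=300).fit(X, y)
    print("trees in ensemble:", len(gbm.estimators_))

In the first model you can read the effect of each feature straight off the coefficients; in the second there is nothing comparable to point at.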
Keep in mind that traditional stats also includes semi-parametric and non-parametric methods. These give you models that manage overfitting by letting complexity scale with the amount of data, i.e. they’re by no means “small” or “parsimonious” in the general case. And yes, they’re more similar to the ML stuff, but you still get a lot more guarantees.
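As a concrete example (an illustrative sketch with made-up constants, not anything from the thread): k-NN regression is consistent when k grows with n while k/n shrinks, so the model’s effective size grows with the data rather than being fixed up front.

    # Sketch: non-parametric complexity scaling with sample size.
    # k-NN with k ~ sqrt(n) is one standard consistency recipe
    # (k -> infinity, k/n -> 0); the constants here are illustrative.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(1)
    for n in (100, 1000, 10000):
        X = rng.uniform(0, 1, size=(n, 1))
        y = np.sin(4 * X[:, 0]) + 0.2 * rng.normal(size=n)
        k = max(1, int(np.sqrt(n)))  # neighborhood size grows with data
        model = KNeighborsRegressor(n_neighbors=k).fit(X, y)
        print(f"n={n:6d}  k={k:4d}  in-sample R^2={model.score(X, y):.3f}")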
> Also, because many usual metrics scale badly to large datasets, overfitting has to be managed differently.
I get the impression that ML folks have to be way more careful about overfitting because their methods are not going to find the ‘best’ fit—they’re heavily non-deterministic. This means that an overfitted model has basically no real chance of successfully extrapolating from the training set. This is a problem that traditional stats doesn’t have—in that case, your model will still be optimal in some appropriate sense, no matter how low your measures of fit are.
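To illustrate what I mean about an overfitted model failing off the training set, here is a toy sketch (plain numpy; the degrees and sample sizes are made up):

    # Toy sketch: an overparameterized fit nails the training data but
    # fails on fresh draws from the same process. Numbers are made up.
    import numpy as np

    rng = np.random.default_rng(2)
    x_train = rng.uniform(-1, 1, 20)
    y_train = x_train**2 + 0.1 * rng.normal(size=20)
    x_test = rng.uniform(-1, 1, 200)
    y_test = x_test**2 + 0.1 * rng.normal(size=200)

    for degree in (2, 15):
        coefs = np.polyfit(x_train, y_train, degree)  # may warn at deg 15
        train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE={train_mse:.4f}  "
              f"test MSE={test_mse:.4f}")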
I think I am giving up on correcting “google/wikipedia experts”; it’s just a waste of time, and a losing battle anyway. (I mean the GP here.)
> I get the impression that ML folks have to be way more careful about overfitting because their methods are not going to find the ‘best’ fit—they’re heavily non-deterministic. This means that an overfitted model has basically no real chance of successfully extrapolating from the training set. This is a problem that traditional stats doesn’t have—in that case, your model will still be optimal in some appropriate sense, no matter how low your measures of fit are.
That said, this does not make sense to me. Bias-variance tradeoffs are fundamental everywhere.
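Concretely, you can see the tradeoff with a small simulation (an illustrative setup of my own, not from the thread): refitting polynomials of growing degree over many resampled training sets, squared bias falls while variance rises, whichever camp’s tools you use.

    # Sketch of the bias-variance tradeoff: resimulate the training set,
    # refit, and decompose the error at a fixed test point. The model
    # family and all constants are illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    true_f = lambda x: np.sin(3 * x)
    x0 = 0.5  # fixed evaluation point

    for degree in (1, 3, 12):
        preds = []
        for _ in range(500):  # 500 independent training sets
            x = rng.uniform(-1, 1, 30)
            y = true_f(x) + 0.3 * rng.normal(size=30)
            preds.append(np.polyval(np.polyfit(x, y, degree), x0))
        preds = np.array(preds)
        bias2 = (preds.mean() - true_f(x0)) ** 2
        print(f"degree {degree:2d}: bias^2={bias2:.4f}  "
              f"variance={preds.var():.4f}")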