This sounds like a classic example of the bias-variance tradeoff: adding parameters to your model lets it fit your data more accurately (lower bias), but also makes it more sensitive to fluctuations in that data (higher variance). Total error when making predictions on new data is minimized when the bias and variance errors are balanced.
Another example: given n data points, you can always draw a polynomial of degree n−1 that fits them perfectly. But the interpolated output values may vary wildly with slight perturbations of the measured data, so the perfect fit is unlikely to represent a real trend. Often a simple linear regression is the most appropriate model.
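Here is a minimal sketch of that second example using NumPy. It assumes a true linear relationship with Gaussian measurement noise (all of the specific numbers are made up for illustration), re-draws the noise many times, and compares how much the prediction at a fixed query point jumps around for the degree n−1 interpolant versus a plain linear fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the underlying trend is linear, measurements are noisy.
n = 8
x = np.linspace(0.0, 1.0, n)
y_true = 2.0 * x + 1.0
noise = 0.1
x_new = 0.93  # query point between the last two measurements

interp_preds, linear_preds = [], []
for _ in range(200):
    # Re-draw the measurement noise, simulating slight perturbations of the data.
    y = y_true + rng.normal(scale=noise, size=n)

    # Degree n-1 polynomial passes through every point (zero training error)...
    interp = np.polyfit(x, y, deg=n - 1)
    # ...while a straight line only approximates them.
    line = np.polyfit(x, y, deg=1)

    interp_preds.append(np.polyval(interp, x_new))
    linear_preds.append(np.polyval(line, x_new))

print("spread (std) of predictions at x =", x_new)
print("  degree n-1 interpolant:", round(float(np.std(interp_preds)), 3))
print("  linear regression:     ", round(float(np.std(linear_preds)), 3))
```

The interpolant's predictions scatter far more than the linear fit's, even though it matches the observed points exactly every time; that spread is the variance side of the tradeoff.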