There’s also a large set of “goodness of fit” measures that evaluate the quality of a model, including simple things like R² but also more exotic tests that do things like compare distributions. Wikipedia again has a good overview (https://www.wikiwand.com/en/Goodness_of_fit)
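As a concrete example of the simplest of these, here’s a minimal sketch of computing R² (the coefficient of determination) in plain Python; the function name and example values are my own, not from any particular library:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    1.0 means perfect predictions; 0.0 means the model does no
    better than always predicting the mean of y_true.
    """
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)   # total sum of squares
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # residuals
    return 1.0 - ss_res / ss_tot

# Perfect predictions score 1.0
print(r_squared([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
```

In practice you’d reach for something like `sklearn.metrics.r2_score`, but the arithmetic is just this.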
Some additional ideas: machine learning uses a wide variety of “loss functions” to score the quality of candidate solutions. There are many of these, but some of the most popular are below. A good overview is at https://medium.com/udacity-pytorch-challengers/a-brief-overview-of-loss-functions-in-pytorch-c0ddb78068f7
* Mean Absolute Error (a.k.a. L1 loss)
* Mean Squared Error (a.k.a. L2 loss)
* Negative log-likelihood
* Hinge loss
* KL divergence
* BLEU score for machine translation (https://www.wikiwand.com/en/BLEU)
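Most of the losses above are a few lines of arithmetic each. Here’s a hedged sketch of some of them in plain Python (function names and the label convention for hinge loss are my own choices, not a specific library’s API):

```python
import math

def mae(y_true, y_pred):
    """Mean Absolute Error (L1 loss): average of |truth - prediction|."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean Squared Error (L2 loss): average of (truth - prediction)^2."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def hinge(y_true, scores):
    """Hinge loss, assuming labels in {-1, +1} and raw classifier scores.

    Zero when a point is correctly classified with margin >= 1.
    """
    return sum(max(0.0, 1.0 - t * s) for t, s in zip(y_true, scores)) / len(y_true)

def kl_divergence(p, q):
    """KL divergence between two discrete distributions (lists summing to 1).

    Measures how much information is lost when q is used to approximate p;
    zero iff the distributions are identical.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Note the asymmetry of KL divergence: `kl_divergence(p, q)` is generally not equal to `kl_divergence(q, p)`, which is one reason it behaves differently from the distance-like losses above it.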