You can model uncertain parameters within a model as random variables, and then run a large number of simulations to get a distribution of outcomes.
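As a minimal sketch of this Monte Carlo approach (the masses, separation, and uncertainty below are hypothetical values chosen for illustration, not taken from any particular experiment):

```python
import random
import statistics

random.seed(0)  # for reproducibility of this sketch

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
m1, m2 = 5.0e3, 7.0e3    # hypothetical masses, kg
r_mean, r_sigma = 10.0, 0.1  # measured separation and its uncertainty, m

def force(r):
    """Inverse-square (Newtonian) force between the two masses."""
    return G * m1 * m2 / r**2

# Model the uncertain separation as a normal random variable and
# run many simulations to get a distribution of predicted forces.
samples = [force(random.gauss(r_mean, r_sigma)) for _ in range(100_000)]
print(statistics.mean(samples), statistics.stdev(samples))
```

The spread of the resulting distribution (its standard deviation) then serves as the uncertainty in the prediction; here the relative spread comes out close to twice the relative error in the separation, as the first-order analysis below predicts.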
The usual error analysis provides an estimate of the error in the result in terms of the errors in the parameters. Any experiment used to test a model relies on this kind of error analysis to determine whether the experimental result lies within the estimated error of the prediction, given the uncertainty in the measured parameters.
For example, for an inverse-square law (like Newtonian gravity) you could perturb the separation distance by a small quantity epsilon, expand in a Taylor series with respect to epsilon, and apply Taylor's theorem to get an upper bound on the error in the prediction given an error in the measurement of the separation distance. For F(r) = k/r^2, the first-order term gives dF/F ≈ -2 dr/r, so the relative error in the force is roughly twice the relative error in the distance. Statistical methods can be used should such analysis prove intractable.
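This first-order bound can be checked numerically. A small sketch, with an illustrative lumped constant k standing in for G·m1·m2 (the specific values are assumptions, not from the text):

```python
# Taylor's theorem for F(r) = k / r^2:
#   F(r + eps) = F(r) + F'(r)*eps + (F''(xi)/2)*eps^2  for some xi near r,
# so the prediction error is bounded by the linear term plus the
# Lagrange remainder, with |F''| maximised at r - |eps| on the interval.
k = 1.0     # hypothetical lumped constant (e.g. G*m1*m2)
r = 10.0    # measured separation
eps = 0.1   # measurement error in the separation

def F(x):
    return k / x**2

first_order = abs(-2 * k / r**3) * eps                  # linear error estimate
remainder_bound = (3 * k / (r - abs(eps))**4) * eps**2  # F''(x) = 6k/x^4, halved

actual_error = abs(F(r + eps) - F(r))
# The actual error never exceeds the linear estimate plus the remainder bound.
assert actual_error <= first_order + remainder_bound
print(actual_error, first_order + remainder_bound)
```

Note that first_order / F(r) is exactly 2 * eps / r, matching the "twice the relative distance error" rule of thumb for an inverse-square law.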
The issue that the author refers to arises when the model exhibits chaotic behavior, so that a very small error in the measurement can cause a huge error in the result. This kind of behavior renders any long-term predictions completely unreliable. In the words of Edward Lorenz:
"Chaos: When the present determines the future, but the approximate present does not approximately determine the future."
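The logistic map is a standard minimal example of this sensitivity (it is not the system the author discusses, just a common illustration). Two trajectories that start a mere 1e-10 apart diverge until they bear no resemblance to each other:

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x); r = 4 is in the chaotic regime."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)  # a tiny "measurement error" in the initial state

# Early on the orbits agree to many digits; after a few dozen
# iterations the initial error has been amplified to order one.
print(abs(a[1] - b[1]), abs(a[-1] - b[-1]))
```

The perturbation grows roughly by a constant factor per step (a positive Lyapunov exponent), which is exactly why the approximate present fails to approximately determine the future.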