A special technique has been developed in mathematics. This technique, when applied to the real world, is sometimes useful, but can sometimes also lead to self-deception. This technique is called modelling. When constructing a model, the following idealization is made: certain facts which are only known with a certain degree of probability or with a certain degree of accuracy, are considered to be “absolutely” correct and are accepted as “axioms”. The sense of this “absoluteness” lies precisely in the fact that we allow ourselves to use these “facts” according to the rules of formal logic, in the process declaring as “theorems” all that we can derive from them.
It is obvious that in any real-life activity it is impossible to wholly rely on such deductions. The reason is at least that the parameters of the studied phenomena are never known absolutely exactly and a small change in parameters (for example, the initial conditions of a process) can totally change the result...
In exactly the same way a small change in axioms (of which we cannot be completely sure) is capable, generally speaking, of leading to completely different conclusions than those obtained from theorems deduced from the accepted axioms. The longer and fancier the chain of deductions (“proofs”), the less reliable the final result.
Complex models are rarely useful (unless for those writing their dissertations).
The mathematical technique of modelling consists of ignoring this trouble and speaking about your deductive model in such a way as if it coincided with reality. The fact that this path, which is obviously incorrect from the point of view of natural science, often leads to useful results in physics is called “the inconceivable effectiveness of mathematics in natural sciences” (or “the Wigner principle”).
You can model uncertain parameters within a model as random variables, and then run a large number of simulations to get a distribution of outcomes.
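As a minimal sketch of this approach (the falling-body model d = ½gt² and the Gaussian error on g are invented for illustration, not taken from the text):

```python
import random
import statistics

def simulate(g, t, n_trials=10_000, g_sigma=0.05):
    """Monte Carlo over an uncertain parameter: treat the measured
    value g as a Gaussian random variable and collect the resulting
    distribution of the model's output d = 0.5 * g * t**2."""
    outcomes = []
    for _ in range(n_trials):
        g_sample = random.gauss(g, g_sigma)      # uncertain parameter
        outcomes.append(0.5 * g_sample * t**2)   # deterministic model
    return outcomes

results = simulate(g=9.81, t=2.0)
print(statistics.mean(results), statistics.stdev(results))
```

Instead of a single prediction you get a spread of outcomes; here the output standard deviation (about 0.1 for these numbers, since the model is linear in g) quantifies how the parameter uncertainty propagates through the model.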
Modeling uncertainty between models (of which guessing the distribution of an uncertain parameter is an example) is harder to handle formally. But overall, it’s not difficult to improve on the naive guess-the-exact-values-and-predict method.
The usual error analysis provides an estimate of the error in the result in terms of the error in the parameters. Any experiment used to test a model relies on this kind of analysis to determine whether the experimental result lies within the estimated error of the prediction, given the uncertainty in the measured parameters.
For example, for an inverse-square law (like Newtonian gravity) you could perturb the separation distance by a quantity epsilon, expand in a Taylor series with respect to epsilon, and apply Taylor’s theorem to get an upper bound on the error in the prediction given an error in the measurement of the separation distance. Statistical methods could be used should such analysis prove intractable.
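A sketch of that calculation, under the simplifying assumption f(r) = k/r² with all constants folded into k (the numbers are arbitrary):

```python
def inv_square(k, r):
    # Model under test: f(r) = k / r**2 (e.g. gravity with k = G*m1*m2)
    return k / r**2

def error_bound(k, r, eps):
    """Taylor's theorem with Lagrange remainder:
    |f(r+eps) - f(r)| <= |f'(r)|*|eps| + (max |f''| / 2) * eps**2,
    where |f''(s)| = 6k/s**4 is maximized on the interval at s = r - |eps|."""
    e = abs(eps)
    assert e < r, "perturbation must keep the separation positive"
    first_order = (2 * k / r**3) * e        # |f'(r)| * |eps|
    remainder = 0.5 * (6 * k / (r - e)**4) * e**2
    return first_order + remainder

k, r, eps = 1.0, 10.0, 0.1
actual = abs(inv_square(k, r + eps) - inv_square(k, r))
bound = error_bound(k, r, eps)
print(actual, bound)
```

The true change in the prediction always lies inside the bound, and for a well-behaved model like this one the bound is tight (within a few percent of the actual change here).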
The issue that the author refers to arises when the model exhibits chaotic behavior, so that a very small error in the measurement causes a huge error in the result. This kind of behavior renders long-term prediction completely unreliable. In the words of Edward Lorenz:
Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
(The opening passage is quoted from Vladimir Arnold's essay On Teaching Mathematics.)
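Lorenz's point is easy to demonstrate. The logistic map x → rx(1−x) at r = 4, a standard textbook example of chaos (my choice of illustration, not the author's), amplifies an initial discrepancy of one part in a billion until the two trajectories are unrelated:

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)          # "true" initial condition
b = logistic_orbit(0.2 + 1e-9)   # the same, with a tiny measurement error
for step in (0, 10, 25, 50):
    # separation grows roughly like 2**step until it saturates
    print(step, abs(a[step] - b[step]))
```

Standard error analysis is useless in this regime: no achievable measurement precision keeps the 50-step prediction inside any useful error bar.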