There are a number of different mechanisms which can trigger bifurcations. Finite precision arithmetic is one of them. Another is that the measurements used to initialize the simulation have much more limited precision and accuracy, and that they do not sample the entire globe (so further approximations must be made to fill in the gaps). There are also numerical errors from the approximations used to convert differential equations into algebraic equations, and algebraic errors whenever the solution of a large linear algebraic system is approximated. And so on. Any of these can trigger bifurcations and make prediction of a particular realization (say, what happens in reality) impossible beyond a certain time.
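To make the sensitivity concrete, here's a toy sketch (the Lorenz-63 system with a crude time integrator, standing in for the real equations; nothing about this resembles an actual weather model): two runs whose initial conditions differ by one part in 10^8 end up on completely different trajectories after a while.

```python
# Toy illustration (not a weather model): two Lorenz-63 trajectories started
# from initial conditions differing by 1e-8 separate completely after a while,
# which is the same reason a tiny perturbation ruins prediction of a realization.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system (crude, but fine for a demo).
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # tiny perturbation, e.g. measurement error
for n in range(5000):
    a, b = lorenz_step(a), lorenz_step(b)
    if n % 1000 == 0:
        print(n, np.linalg.norm(a - b))  # separation grows roughly exponentially, then saturates
```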
The good news is that none of these models try to solve for a particular realization. Usually they try to solve for the ensemble mean or some other statistic. Basically, let's say you have a collection of nominally equivalent initial conditions for the system*. You evolve these fields in time and average the results over all realizations at each time. That's your ensemble average. If you decompose the fields to be solved for into an ensemble mean and a fluctuation, you can then apply an averaging operator and get differential equations which are better behaved (in terms of resolution requirements; I assume they are less chaotic as well), but which have unclosed terms that require models. This is turbulence modeling. (To be absolutely clear, what I've written is somewhat inaccurate: from what I understand, most climate and weather models use large eddy simulation, which is a spatial filtering rather than an ensemble averaging. You can ignore this distinction for now.)
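If it helps, here's a sketch of where the unclosed terms come from, using the 1D Burgers equation as a stand-in for the real momentum equations (this is not what any particular climate model solves, just the standard manipulation):

```latex
% Decompose each field into an ensemble mean and a fluctuation:
u = \bar{u} + u', \qquad \overline{u'} = 0

% Burgers equation, with the nonlinear term in conservative form:
\partial_t u + \tfrac{1}{2}\,\partial_x\!\left(u^2\right) = \nu\,\partial_{xx} u

% Apply the ensemble average; since \overline{u^2} = \bar{u}^2 + \overline{u'^2}:
\partial_t \bar{u} + \tfrac{1}{2}\,\partial_x\!\left(\bar{u}^2\right)
  = \nu\,\partial_{xx}\bar{u} - \tfrac{1}{2}\,\partial_x\,\overline{u'^2}

% \overline{u'^2} is the unclosed term (the analogue of the Reynolds stresses):
% the averaged equation no longer tracks u', so this term has to be modeled.
```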
One could argue that the ensemble mean is more useful in some areas than others. Certainly, if you just want to calculate drag on a wing (a time-averaged quantity), the ensemble mean is great in that it allows you to jump directly to that. But if you want something which varies in time (as climate and weather models do) then you might not expect this approach to work so well. (But what else can you do?)
nostalgebraist is right, but his explanation is a fair bit abstract. I never really liked the language of attractors when speaking about fluid dynamics. (You can't easily visualize what the "attractor" is for a vector field.) A much easier way to understand what he is saying is that there are multiple time scales, say, a slow one and a fast one. Hopefully it's not necessary to accurately predict or model the fast one (weather) to accurately predict the slow one (climate). You can make similar statements about spatial scales. This is not always true, but there are reasons to believe it is true in many circumstances in fluid dynamics.
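Here's a toy way to see the scale-separation idea (entirely made-up signals, not model output): give two signals the same slow component but completely different fast components, and a running average recovers essentially the same slow part from either one.

```python
# Toy illustration of scale separation: two signals share the same slow
# component but have completely different fast components; a running average
# over many fast periods recovers the slow part from either one.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 10_000)
slow = np.sin(2 * np.pi * t / 100.0)  # "climate": one slow cycle
fast_a = 0.5 * np.sin(2 * np.pi * t / 0.5 + rng.uniform(0, 2 * np.pi)) + 0.2 * rng.standard_normal(t.size)
fast_b = 0.5 * np.sin(2 * np.pi * t / 0.5 + rng.uniform(0, 2 * np.pi)) + 0.2 * rng.standard_normal(t.size)

window = 500  # averages over ~10 fast periods but only 1/20 of the slow one
kernel = np.ones(window) / window
mean_a = np.convolve(slow + fast_a, kernel, mode="same")
mean_b = np.convolve(slow + fast_b, kernel, mode="same")
print(np.max(np.abs(mean_a - mean_b)))  # small relative to the slow amplitude of 1
```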
In terms of accumulation of numerical error causing the problems, I don't think that's quite right. I think it's more accurate to say that uncertainty grows in time due to both accumulation of numerical error and chaos, but it's not clear to me which is more significant. This assumes that climate models use some sort of turbulence model, which they do. It also assumes that an appropriate numerical method was used. For example, in combustion simulations, if you use a numerical method with considerable dispersion error, the entire result can go to garbage very quickly if that error causes the temperature to rise unphysically above the ignition temperature. Then you get flame propagation, etc., which might not happen if a better method were used.
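Here's a toy version of that failure mode on 1D linear advection (no chemistry; just an advected step standing in for the temperature field): a dispersive scheme overshoots the maximum of the profile it was given, while a monotone scheme stays bounded and merely smears the step.

```python
# Toy illustration of dispersion error on 1D linear advection (not a combustion
# code): Lax-Wendroff is dispersive and overshoots the maximum of an advected
# "temperature" step, the kind of unphysical overshoot that could spuriously
# exceed an ignition temperature; first-order upwind stays bounded but smears.
import numpy as np

nx, c, steps = 200, 0.5, 100                       # grid points, Courant number, time steps
T0 = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)   # step profile, maximum value 1.0
T_up, T_lw = T0.copy(), T0.copy()

for _ in range(steps):
    # First-order upwind: monotone, creates no new extrema, but diffusive.
    T_up = T_up - c * (T_up - np.roll(T_up, 1))
    # Lax-Wendroff: second-order accurate but dispersive near sharp gradients.
    T_lw = (T_lw
            - 0.5 * c * (np.roll(T_lw, -1) - np.roll(T_lw, 1))
            + 0.5 * c**2 * (np.roll(T_lw, -1) - 2.0 * T_lw + np.roll(T_lw, 1)))

print("upwind max:      ", T_up.max())   # stays <= 1.0
print("Lax-Wendroff max:", T_lw.max())   # overshoots above 1.0
```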
* I have asked specifically what this means from a technical standpoint, and have yet to get a satisfactory reply. My thinking is that the initial condition is really the set of all possible initial conditions consistent with the probability distribution of the measurements. I have seen some weather models use what looks like Monte Carlo sampling to get average storm trajectories, for example, so someone must have formalized this.
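Here's roughly what I have in mind, as a sketch only (a chaotic logistic map standing in for the forecast model, and a made-up Gaussian measurement uncertainty):

```python
# Minimal sketch of the footnote's interpretation: treat the measured initial
# condition as a probability distribution, Monte Carlo sample it, evolve every
# sample, and report ensemble statistics at each time. The dynamics here are
# just the chaotic logistic map, standing in for a real forecast model.
import numpy as np

rng = np.random.default_rng(0)
n_members, n_steps, r = 1000, 30, 3.9

# "Measurement": 0.5 with Gaussian uncertainty; each ensemble member draws its
# own initial condition from that distribution.
x = np.clip(rng.normal(loc=0.5, scale=0.01, size=n_members), 0.0, 1.0)

for n in range(n_steps):
    x = r * x * (1.0 - x)                 # advance every member one step
    if n % 10 == 9:
        print(n + 1, x.mean(), x.std())   # ensemble mean and spread at this time
```

In a real ensemble forecast the initial perturbations are constructed much more carefully than this, which is the part I'd like to see formalized.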