This is true. My contention is that in most modeling (climate models, certainly) other sources of noise completely dominate the calculation noise.
The more I think about this, the less sure I am about how true it is. I was initially thinking that the input and model uncertainties are very large, but I think Vaniver is right that this depends on the particulars of the implementation. The differences between different simulation codes for nominally identical inputs can be surprising; both the input/model uncertainties and the numerical errors are large. (I am thinking in particular about fluid dynamics here, but it's basically the same equations as in weather and climate modeling, so I assume my conclusions carry over.)
One weird idea that comes from this: you could use an approach like MILES (monotonically integrated large-eddy simulation) in fluid dynamics, where you treat the numerical error itself as part of the physical model, which could reduce uncertainty. This only makes sense in turbulence modeling and would take more time than I have to explain in full.
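To give a flavor of what "treating the numerical error as a model" means (this is the standard textbook gloss, not the full MILES story): first-order upwind differencing of the advection equation $u_t + a u_x = 0$ solves, to leading order, the modified equation

$$u_t + a u_x = \frac{a\,\Delta x}{2}\,(1 - \nu)\,u_{xx}, \qquad \nu = \frac{a\,\Delta t}{\Delta x},$$

so the scheme's truncation error behaves like an extra diffusion term. MILES-style implicit large-eddy simulation deliberately lets that built-in numerical dissipation stand in for the subgrid turbulence model.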
I am not a climatologist, but I have a hard time imagining how the input and model uncertainties in a climate model can be driven down to the magnitudes where floating-point precision starts to matter.
If I’m reading Vaniver correctly (or possibly I’m steelmanning his argument without realizing it), he’s using round-off error (as it’s called in scientific computing) as an example of one of several numerical errors, e.g., discretization and truncation errors. There are further subcategories like dispersion and dissipation (the latter is the sort of “model” MILES provides for turbulent dissipation). I don’t think round-off error is usually the dominant factor, but the other numerical errors can be, and this might often be the case in fluid flow simulations on more modest hardware.
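As a toy illustration of two of those subcategories (my own example, not anything from the thread): advect a square pulse once around a periodic domain with two classic schemes. First-order upwind smears the pulse out (numerical dissipation); Lax-Wendroff keeps it sharper but adds spurious trailing oscillations (numerical dispersion).

```python
import numpy as np

nx, c = 400, 0.5                    # grid points, Courant number a*dt/dx
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square pulse

def advect(u, scheme, n_steps):
    for _ in range(n_steps):
        um = np.roll(u, 1)          # u[i-1], periodic wrap-around
        up = np.roll(u, -1)         # u[i+1]
        if scheme == "upwind":      # first-order upwind (dissipative)
            u = u - c * (u - um)
        else:                       # Lax-Wendroff (dispersive)
            u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)
    return u

n_steps = int(nx / c)               # pulse travels once around the domain
for scheme in ("upwind", "lax-wendroff"):
    u = advect(u0.copy(), scheme, n_steps)
    print(f"{scheme:13s} min={u.min():+.3f} max={u.max():.3f} "
          f"L2 error={np.sqrt(np.mean((u - u0) ** 2)):.3f}")
```

The exact solution after one period is the initial pulse; upwind stays bounded in [0, 1] but flattens it, while Lax-Wendroff overshoots above 1 and undershoots below 0.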
Round-off error can accumulate to dominate the numerical error if you do things wrong. See figure 38.5 for a representative illustration of the total numerical error as a function of time step. If the time step becomes very small, total numerical error actually increases due to build-up of round-off error. As I said, this only happens if you do things wrong, but it can happen.
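A minimal sketch of that build-up (my own toy example with forward Euler in single precision, not the setup behind the figure): integrate $y' = -y$, $y(0) = 1$, to $t = 1$ and watch the error as the time step shrinks. Discretization error falls with the step size at first, but past some point accumulated round-off takes over and the total error grows again.

```python
import numpy as np

def euler_error(dt, t_end=1.0, dtype=np.float32):
    """Absolute error at t_end of forward Euler for y' = -y, y(0) = 1."""
    n_steps = int(round(t_end / dt))
    y = dtype(1.0)
    h = dtype(dt)
    for _ in range(n_steps):
        y = y + h * (-y)            # every operation rounds to single precision
    return abs(float(y) - np.exp(-t_end))

for dt in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6):
    print(f"dt = {dt:.0e}   |error| = {euler_error(dt):.2e}")
```

Where exactly the turnaround happens depends on the precision, the problem, and the hardware, but the qualitative shape matches the figure: making the time step smaller is not always better.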
Yes, I understand all that, but this isn’t the issue. The issue is how much all the assorted calculation errors matter in comparison to the rest of the uncertainty in the model.
I don’t think we disagree too much. If I had to pick one, I’d agree with you that the rest of the uncertainty is likely larger in most cases, but I think you substantially underestimate how inaccurate these numerical methods can be. Many commercial computational fluid dynamics codes use quite bad numerical methods along with large grid cells and time steps, so it seems possible to me that those errors can exceed the uncertainties in the other parameters. I can think of one case in particular in my own work where the numerical errors likely exceed the other uncertainties.