Based on what I’ve read and the contents of the IPCC report, the match between the models and observed climate change has been pretty good so far, actually.
As you mentioned, this is LessWrong. So someone (like me) will go and look at your link. And find that it doesn’t talk about forecasts; it is predominantly concerned with whether models can reproduce historically observed features of the climate. In other words, the issue is just trying to get a good fit to historical data.
By the way, if you look at p.771 and around, you’ll notice that the models have a lot of difficulties with the current “hiatus”.
And find that it doesn’t talk about forecasts; it is predominantly concerned with whether models can reproduce historically observed features of the climate
Exactly; that’s what I said.
EDIT: To clarify, I’m trying to say that past models have fit the data well so far, so it’s reasonable to expect that they will continue to perform. This is certainly evidence that climate models are capable of making predictions, and it’s how science should be done.
In other words, the issue is just trying to get a good fit to historical data.
You make it sound as if they are just arbitrarily varying the parameters of the models to get a good fit. In reality, they run model ensembles under the different emissions scenarios derived from real-world data and check whether the resulting predictions fall within a reasonable confidence interval of what was actually observed. The answer is: yes, they do.
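Roughly, the check amounts to something like the following sketch, with made-up numbers standing in for real model-ensemble output and the observed temperature record (the arrays, the 5–95% band, and the pass/fail criterion are all illustrative assumptions, not the actual IPCC procedure):

```python
import numpy as np

# Hypothetical stand-ins: 20 ensemble members' projected temperature
# anomalies (degrees C) for 2000-2014, plus an "observed" series.
rng = np.random.default_rng(0)
years = np.arange(2000, 2015)
trend = 0.02 * (years - 2000)                      # assumed 0.02 C/yr central trend
ensemble = trend + rng.normal(0.0, 0.1, size=(20, years.size))
observed = 0.015 * (years - 2000) + rng.normal(0.0, 0.05, size=years.size)

# 5-95% range of the ensemble for each year.
lo, hi = np.percentile(ensemble, [5, 95], axis=0)

# Fraction of observed years falling inside the ensemble range --
# a crude "do observations sit within the model spread?" check.
inside = (observed >= lo) & (observed <= hi)
print(f"{inside.mean():.0%} of observed years fall within the 5-95% ensemble range")
```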
By the way, if you look at p.771 and around, you’ll notice that the models have a lot of difficulties with the current “hiatus”.
There are several ways of interpreting this; I’d be glad to have a discussion about it if you’re interested.
Why, then, are you talking about the models’ fit when answering the question of whether the “climate models can predict weather changes over long term periods” (emphasis mine)?
You make it sound as if they are just arbitrarily varying the parameters of the models to get a good fit.
Not arbitrarily, of course, but “varying the parameters of the models” is the most common and a very general method of getting “a good fit”.
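For concreteness, here is what “varying the parameters to get a good fit” looks like in miniature: a toy two-parameter model tuned to a synthetic historical record by least squares. The model form, the fake data, and the parameter names are my own illustrative assumptions, nothing like what an actual climate model does:

```python
import numpy as np

# Fake "historical" record: temperature anomaly vs. a forcing index.
rng = np.random.default_rng(1)
forcing = np.linspace(0.0, 2.5, 50)                      # hypothetical forcing values
observed = 0.5 * forcing + 0.1 + rng.normal(0.0, 0.05, 50)

# Toy model: anomaly = sensitivity * forcing + offset.
# "Tuning" = choosing (sensitivity, offset) to minimize squared error.
A = np.column_stack([forcing, np.ones_like(forcing)])
(sensitivity, offset), *_ = np.linalg.lstsq(A, observed, rcond=None)

fit = sensitivity * forcing + offset
rmse = np.sqrt(np.mean((fit - observed) ** 2))
print(f"tuned sensitivity={sensitivity:.2f}, offset={offset:.2f}, in-sample RMSE={rmse:.3f}")
```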
and seeing if the resulting predictions
Do you mean something like cross-validation? I don’t think they predict the future in this context.
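To pin down terms: by “cross-validation” on a time series I’d mean something like repeatedly fitting on earlier data and scoring on the next stretch. A minimal sketch on synthetic data (the split points and the straight-line “model” are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(60)
series = 0.01 * t + rng.normal(0.0, 0.2, t.size)   # fake record with a weak trend

# Expanding-window cross-validation: fit on the past, score on the next block.
for cut in (20, 30, 40, 50):
    slope, intercept = np.polyfit(t[:cut], series[:cut], 1)
    future_t, future_y = t[cut:cut + 10], series[cut:cut + 10]
    err = np.sqrt(np.mean((slope * future_t + intercept - future_y) ** 2))
    print(f"fit on first {cut} points, RMSE on next {future_t.size}: {err:.3f}")
```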
Why, then, are you talking about the models’ fit when answering the question of whether the “climate models can predict weather changes over long term periods” (emphasis mine)?
What other way is there? Building a time machine?
How else can you estimate the suitability of models for making predictions, other than by testing their past predictions on current data?
One possible answer is to look at how the then-state-of-the-art models in (say) 1990, 1995, 2000, etc, predicted temperature changes going forwards.
The answer, in point of fact, is that they consistently predicted a considerably greater temperature rise than actually took place, although the actual temperature rise is just about within the error bars of most models.
Now, there are two plausible conclusions to this:
1. Those past mistakes have been appropriately corrected in today’s models, so we don’t need to worry too much about past failures.
2. This is like Paul Samuelson’s economics textbook, which consistently (in editions published in the 50s, 60s, 70s and 80s) predicted that the Soviet Union would overtake the US economy in 25 years.
One possible answer is to look at how the then-state-of-the-art models in (say) 1990, 1995, 2000, etc, predicted temperature changes going forwards.
It’s not as simple as that. Most models give predictions that are conditional on their input data (the actual rate of CO2 emissions, etc.). To analyze the predictions from, say, a model developed in 1990, you need to feed the model input data from after 1990. Otherwise you get too wide an error margin in your prediction.
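Schematically, a model here is a map from an assumed input pathway to a temperature path, so a fair retrospective test re-runs the old model on the inputs that actually materialized. A toy illustration, in which the model function, its sensitivity, and both scenarios are invented placeholders:

```python
def toy_1990_model(co2_growth_per_year):
    """Stand-in for a 1990-era model: warming trend as a fixed
    multiple of the assumed CO2 growth rate (purely illustrative)."""
    return 0.08 * co2_growth_per_year   # C/decade per (ppm/yr), made-up sensitivity

scenario_assumed_in_1990 = 2.5   # ppm/yr, the emissions path assumed at the time (invented)
actually_observed_growth = 1.9   # ppm/yr, what later measurements showed (invented)

print("forecast under 1990 scenario :", toy_1990_model(scenario_assumed_in_1990), "C/decade")
print("forecast under observed path :", toy_1990_model(actually_observed_growth), "C/decade")
# Only the second number is the fair thing to compare against observed warming.
```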
The answer, in point of fact, is that they consistently predicted a considerably greater temperature rise than actually took place, although the actual temperature rise is just about within the error bars of most models.
True. As I said, this is definitely evidence for the suitability of the models, and it certainly seems to run counter to the claim that “there is no evidence that climate models are valuable in predicting future climate trends.”
This is like Paul Samuelson’s economics textbook, which consistently (in editions published in the 50s, 60s, 70s and 80s) predicted that the Soviet Union would overtake the US economy in 25 years.
That’s definitely a possibility, but it’s reasonable to think that the mathematics and science behind the climate models stand on a firmer basis than economic analysis, and definitely a firmer basis than Samuelson’s analysis.
The usual plain-vanilla way is to use out-of-sample testing—check the model on data that neither the model nor the researchers have seen before. It’s common to set aside a portion of the data before starting the modeling process explicitly to serve as a final check after the model is done.
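A bare-bones sketch of that holdout procedure on synthetic data (the 80/20 split, the straight-line model, and the numbers are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100)
series = 0.01 * t + rng.normal(0.0, 0.2, t.size)   # fake data with a weak trend

# Set aside the last 20 points *before* any modelling, as the final check.
train_t, test_t = t[:80], t[80:]
train_y, test_y = series[:80], series[80:]

# Fit only on the training portion (here: a straight line).
slope, intercept = np.polyfit(train_t, train_y, 1)

in_sample_rmse = np.sqrt(np.mean((slope * train_t + intercept - train_y) ** 2))
out_sample_rmse = np.sqrt(np.mean((slope * test_t + intercept - test_y) ** 2))
print(f"in-sample RMSE:     {in_sample_rmse:.3f}")
print(f"out-of-sample RMSE: {out_sample_rmse:.3f}")
```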
In cases where the stability of the underlying process is in doubt, there may be no good way other than waiting for a while and testing the (fixed in place) model on new data as it comes in.
The characteristics of the model’s fit are not necessarily a good guide to the model’s predictive capabilities. Overfitting is still depressingly common.
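To make the overfitting point concrete, here is a quick synthetic demonstration: as the polynomial degree goes up, the error on the data used for fitting can only shrink, while the error on held-out data typically stops improving or gets worse (the data, degrees, and split are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)   # noisy toy data

# Interleave the points into a "seen" half and an "unseen" half.
x_fit, y_fit = x[::2], y[::2]
x_new, y_new = x[1::2], y[1::2]

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_fit, y_fit, degree)
    fit_err = np.sqrt(np.mean((np.polyval(coeffs, x_fit) - y_fit) ** 2))
    new_err = np.sqrt(np.mean((np.polyval(coeffs, x_new) - y_new) ** 2))
    print(f"degree {degree}: RMSE on fitted points {fit_err:.3f}, on unseen points {new_err:.3f}")
```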