How thorough is your knowledge of the AGW literature, Silas? I’m only familiar with bits and pieces of it, much of it filtered through sites like Real Climate, but what I’ve seen suggests that climate scientists are doing better than you indicate. For instance, the paper described here includes estimates excluding tree ring data as well as estimates that include tree ring data, because of questions about the reliability of that data (and it cites a bunch of other articles that have addressed that issue). They also describe methods for calibrating and validating proxy data that I haven’t tried to understand, but which seem like the sort of thing that they should be doing.
I think the narrow issue of multi-proxy studies teaches an interesting lesson to folks who like to think of things in terms of Bayesian probabilities.
I would submit that at a bare minimum, any multi-proxy study (such as the one you cite) needs to provide clear inclusion and exclusion criteria for the proxies which are used and not used.
Let’s suppose that there is a universe of 300 possible temperature proxies which can be used and Michael Mann chooses 30 for his paper. If he does not explain to us how he chose those 30, then how can anyone have any confidence in his results?
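To make the worry concrete, here is a toy simulation (all numbers invented, not from any actual study): if 30 of 300 candidate "proxies" are kept precisely because they happen to agree with the instrumental record, their average will track that record even when every candidate is pure noise.

```python
import random

random.seed(0)

N_PROXIES = 300   # hypothetical universe of candidate proxies
N_KEPT = 30       # hypothetical number selected for the study
N_YEARS = 150     # length of the "instrumental" calibration period

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

# A made-up "instrumental record": just noise for this illustration.
instrumental = [random.gauss(0, 1) for _ in range(N_YEARS)]

# 300 candidate "proxies" that are pure noise -- none carries any signal.
proxies = [[random.gauss(0, 1) for _ in range(N_YEARS)]
           for _ in range(N_PROXIES)]

# Undisclosed selection rule: keep the 30 that happen to agree best.
kept = sorted(proxies, key=lambda p: corr(p, instrumental),
              reverse=True)[:N_KEPT]

# Average the survivors into a "reconstruction".
recon = [sum(p[i] for p in kept) / N_KEPT for i in range(N_YEARS)]

print(round(corr(recon, instrumental), 2))  # substantially positive despite zero real signal
```

That is why undisclosed selection criteria matter: without knowing the screening rule, a reader can’t tell a real signal from this artifact.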
I haven’t read the paper myself, but here’s what the infamous Steve McIntyre says:
I identified 33 non-tree ring proxies that started on or before 1000 – many, perhaps even most, of these proxies are new to the recon world. How were these particular proxies selected? How many proxies were screened prior to establishing this network? Mann didn’t say.
Yes, I’ve followed Real Climate, on and off, and with greater intensity after the Freakonomics fiasco (where RCers were right because of how sloppy the Freakons were), which directly preceded climategate. FWIW, I haven’t been impressed with how they handle stuff outside their expertise, like the time-discounting issue.
As for the paper you mention, my primary concern is not that the tree data by itself overturns everything, but rather, that they consider it a valid method to clip out disconfirmatory data while still counting the remainder as confirmatory, which makes me wonder how competent the rest of the field is.
The responses on RC about the tree ring issue reek of “missing the point”:
The paper in question is the Mann, Bradley and Hughes (1998) Nature paper on the original multiproxy temperature reconstruction, and the ‘trick’ is just to plot the instrumental records along with reconstruction so that the context of the recent warming is clear. Scientists often use the term “trick” to refer to “a good way to deal with a problem”, rather than something that is “secret”, and so there is nothing problematic in this at all. As for the ‘decline’, it is well known that Keith Briffa’s maximum latewood tree ring density proxy diverges from the temperature records after 1960 (this is more commonly known as the “divergence problem”–see e.g. the recent discussion in this paper) and has been discussed in the literature since Briffa et al in Nature in 1998 (Nature, 391, 678-682). Those authors have always recommended not using the post-1960 part of their reconstruction, and so while ‘hiding’ is probably a poor choice of words (since it is ‘hidden’ in plain sight), not using the data in the plot is completely appropriate, as is further research to understand why this happens.
Not using the data at all would be appropriate (or maybe not, since you should include disconfirmatory data points). Including only the data points that agree with you would be very inappropriate, as they certainly can’t count as additional proof once they’re filtered for agreement with the theory.
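The Bayesian version of this point can be computed directly: data that survives an agree-with-the-theory filter has likelihood 1 under every hypothesis, so it moves the posterior nowhere. The probabilities below are invented purely for illustration:

```python
# Under honest reporting, each data point agrees with the theory with
# probability p_true if the theory is right and p_false if it is wrong.
# (Both numbers are made up for this example.)
p_true, p_false = 0.8, 0.5
prior = 0.5

def posterior(agree, total, p_t, p_f, prior):
    """Posterior that the theory is right after seeing `agree` of `total` points agree."""
    like_t = p_t ** agree * (1 - p_t) ** (total - agree)
    like_f = p_f ** agree * (1 - p_f) ** (total - agree)
    return prior * like_t / (prior * like_t + (1 - prior) * like_f)

# Honest reporting: 17 of 20 points agree -- genuinely strong evidence.
print(posterior(17, 20, p_true, p_false, prior))

# Filtered reporting: the 17 agreeing points are shown and the 3
# dissenting ones are clipped.  The shown points agree with probability 1
# under BOTH hypotheses, so the likelihood ratio is 1 and the prior is
# unchanged.
print(posterior(17, 17, 1.0, 1.0, prior))  # 0.5 -- no evidence at all
```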
I’m growing less clear about what your complaint is. If you’re just pointing out a methodological problem in that one paper, then I agree with you. If you’re claiming that the whole field is so messed up that no one even realizes it’s a problem, then the paper that I linked looks like a counterexample to your claim. The authors seem to recognize that it’s bad to make ad hoc choices about which proxies to use or which years to apply them to, so they came up with a systematic procedure for selecting proxies (it looks similar to taking the full record of every proxy that correlates significantly with the 150 years of instrumental temperature records and then averaging those proxy estimates together, but more complicated). And because tree-ring data had been the most problematic (in having a poor fit with the temperature record), they ran a separate set of analyses that excluded those data. They may not explicitly criticize the other methodology, but they’re replacing it with a better methodology, which is good enough for me.
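A rough sketch of what such a systematic screening rule looks like (the threshold, function names, and toy data are my invention, not the paper’s actual procedure): the criterion is fixed and stated up front, so it applies identically to every candidate proxy rather than being chosen after seeing which ones agree.

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Illustrative cutoff: roughly the two-sided 5% critical value for n = 150.
R_CRIT = 0.16

def screen_and_average(proxies, instrumental):
    """Keep every proxy whose calibration-period correlation with the
    instrumental record passes the fixed, pre-stated threshold; average
    the survivors into a single reconstruction."""
    n = len(instrumental)
    kept = [p for p in proxies
            if abs(pearson(p[-n:], instrumental)) >= R_CRIT]
    if not kept:
        return None
    length = len(kept[0])
    return [sum(p[i] for p in kept) / len(kept) for i in range(length)]
```

For example, with toy data, a proxy tracking the instrumental record passes the screen while an uncorrelated one is excluded: `screen_and_average([[2, 4, 6, 8, 10], [1, -1, 1, -1, 1]], [1, 2, 3, 4, 5])` keeps only the first proxy. The point is that the rule, whatever its details, is declared before the data is examined.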
You don’t understand why I’m suspicious that a fundamental problem with their methodology, widely used as proof, is only being rooted out in 2008?
Be glad it’s happening at all.
Is it only being rooted out in 2008? There have been a bunch of different proxy reconstructions over the years—are you saying that this 2008 paper was the first one to avoid that methodological problem? Do you know the climate literature well enough to be making these kinds of statements?
There are several factors that can limit tree growth. Sometimes, low temperature is the bottleneck. So, the tree ring data can in any case be considered a reliable indicator of a floor on the temperature. It isn’t any colder than this point.
They try to pick trees for which low temperature is likely to be the bottleneck. Sometimes it isn’t.
That doesn’t mean that the whole series is useless, even if they happen to be using it wrong (and I don’t know that they are).
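The “floor” reasoning can be made explicit: if ring growth is the minimum of several limiting factors and the temperature response is monotone, then an observed ring width gives a lower bound on temperature rather than a point estimate. A toy sketch, with every function and number invented for illustration:

```python
# Toy model: growth is the minimum of several limiting factors
# (Liebig's law of the minimum).  If growth_from_temp is increasing
# in T, then an observed ring width w implies growth_from_temp(T) >= w,
# i.e. T is at least the inverse response at w -- a floor on temperature
# that holds whether or not temperature was the actual bottleneck.

def growth_from_temp(t_celsius):
    """Invented monotone response: growth rises linearly with temperature."""
    return max(0.0, 0.1 * t_celsius)

def temp_floor(ring_width):
    """Invert the (invented) response to get the implied temperature floor."""
    return ring_width / 0.1

# Suppose moisture, not temperature, limited growth in some year:
actual_temp = 14.0
moisture_limited_growth = 0.9   # less than growth_from_temp(14.0) == 1.4
ring_width = min(growth_from_temp(actual_temp), moisture_limited_growth)

floor = temp_floor(ring_width)
print(floor)   # an underestimate of the true 14.0, but still a valid lower bound
```

In years when another factor binds, the inferred floor undershoots the true temperature, which is exactly why it is a floor and not an estimate.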