I was wondering how long it would be until the AGW issue was directly broached on a top-level post. Here I will state my views on it.
First, I want to fend off the potential charge of motivated cognition. I have spent the better part of two years criticizing fellow “libertarians” for trivializing the issue, and especially for their rationalizations of “Screw the Bengalis” even when they condition on AGW being true. I don’t have the links gathered in one place, but just look here and here, and linked discussions, for examples.
That said, here are the warning signs for me (this is just to summarize, will gather links later if necessary):
1) Failed predictions. Given the complexity of the topic, your models inevitably end up doing curve-fitting. (Contrary to a popular misconception, they do not go straight from “the equations they design planes from” to climate models.) That gives you significant leeway in fitting the data to your theory. To be scientific, and thereby remove the ability of humans to bias the results, it is vital that model predictions be validated against real-world outcomes. They have failed, badly: the models predicted that, by existing measures of “global temperature”, it would be much higher than it is now.
2) Anti-Bayesian methodology accepted as commonplace. As an example, regarding the “hide the decline” issue with the tree rings, here’s what happened: Scientists want to know how hot it was millennia ago. Temperature records weren’t kept then. So, they measure by proxies. One common proxy is believed to be tree rings. But the tree-ring data don’t match the temperature record in the period for which we have the best data.
The correct procedure at this point is to either a) recognize that they aren’t good proxies, or b) include them in toto as an outlier data point. Instead, what they do is to keep all the data points that support the theory, and throw out the rest, calling it a “divergence problem”, and further, claim the remaining points as additional substantiation of the theory. Do I need to explain here what’s wrong with that?
And yet the field’s journals completely lack articles criticizing this.
3) Error cascades. Despite the supposed independence of the datasets, they ultimately come from only a few interbred sources, and further data is tuned so that it matches these data sets. People are kept out of publication, specifically on the basis that their data contradicts the “correct” data.
Finally, you can’t just argue, “The scientists believe AGW, I trust scientists, ergo, the evidence favors AGW.” Science is a method, not a person. AGW is credible to the extent that there is Bayesian evidence for it, and to the extent scientists are following science and finding Bayesian evidence. The history of the field is a history of fitting the data to the theory and increasing pressure to make sure your data conforms to what the high-status people decreed is correct.
Again, if the field is cleansed and audited and the theory turns out to hold up and be a severe problem, I would love for CO2 emissions to finally have their damage priced in so that they’re not wastefully done, and I pity the fools that demand Bengalis go and sue each emitter if they want compensation. But that’s not where we are.
And I don’t think it’s logically rude to demand that the evidence adhere to the standard safeguards against human failings.
Yup, this behavior has long been typical when academics form competing groups, whether the public hears about such groups or not. If you knew how academia worked, this news would not surprise you nor change your opinions on global warming.
People are crazy, the world is mad. Of course there’s gross misbehavior by climate scientists, just like the rest of academia is malfunctioning. But the amount of scrutiny leveled on climate science is vastly greater than the amount of scrutiny leveled on, say, the dietary scientists who randomly made up the idea that saturated fat was bad for you; and the scrutiny really hasn’t turned up anything that bad, just typical behavior by “working” scientists. So I doubt that this is one of the cases where the academic field is just grossly entirely wrong.
I am not particularly interested in a discussion of the virtues of saturated fat. It certainly seems like a bad example of scientists randomly making things up, though.
FWIW, here is a reasonably well-balanced analysis of the 2010 study you mentioned:
“Study fails to link saturated fat, heart disease”
I was explaining a problem with studies like the one cited—in exploring the hypotheses that saturated fats are inferior to various other fats. Basically, they don’t bear on those hypotheses.
In this particular case, the authors pretty clearly stated that: “More data are needed to elucidate whether CVD risks are likely to be influenced by the specific nutrients used to replace saturated fat.”
People are crazy, the world is mad. Of course there’s gross misbehavior by climate scientists, just like the rest of academia is malfunctioning. But the amount of scrutiny leveled on climate science is vastly greater than the amount of scrutiny leveled on, say, the dietary scientists...
Yes, and I expect that if you put this much scrutiny on most fields, where they are well-protected from falsification, you’d find the same thing. Like you said, scientists aren’t usually trained in the rationalist arts, and can keep bad ideas alive much longer than they should be.
But this doesn’t mean we should just shrug it off as “just the way it works”; we should appropriately discount their evidence for having a less reliable truth-finding procedure if we’re not already assuming as much.
Another difference is that climate scientists are deriving lots and lots of attention, funding, and prestige out of worldwide concern for global warming.
True—they seem ignorant of the “politics is the mind-killer” phenomenon. A boring research field may yield reliable science—but once huge sums of money start to depend on its findings, you have to spend proportionally more effort keeping out bias—such as by making your findings impossible to fake (i.e. no black-box methods for filtering the raw data).
How thorough is your knowledge of the AGW literature, Silas? I’m only familiar with bits and pieces of it, much of it filtered through sites like Real Climate, but what I’ve seen suggests that climate scientists are doing better than you indicate. For instance, the paper described here includes estimates excluding tree ring data as well as estimates that include tree ring data, because of questions about the reliability of that data (and it cites a bunch of other articles that have addressed that issue). They also describe methods for calibrating and validating proxy data that I haven’t tried to understand, but which seem like the sort of thing that they should be doing.
I think the narrow issue of multi-proxy studies teaches an interesting lesson to folks who like to think of things in terms of Bayesian probabilities.
I would submit that at a bare minimum, any multi-proxy study (such as the one you cite) needs to provide clear inclusion and exclusion criteria for the proxies which are used and not used.
Let’s suppose that there is a universe of 300 possible temperature proxies which can be used and Michael Mann chooses 30 for his paper. If he does not explain to us how he chose those 30, then how can anyone have any confidence in his results?
I haven’t read the paper myself, but here’s what the infamous Steve McIntyre says:
I identified 33 non-tree ring proxies that started on or before 1000 – many, perhaps even most, of these proxies are new to the recon world. How were these particular proxies selected? How many proxies were screened prior to establishing this network? Mann didn’t say.
Yes, I’ve followed Real Climate, on and off, and with greater intensity after the Freakonomics fiasco (where RCers were right because of how sloppy the Freakons were), which directly preceded climategate. FWIW, I haven’t been impressed with how they handle stuff outside their expertise, like the time-discounting issue.
As for the paper you mention, my primary concern is not that the tree data by itself overturns everything, but rather, that they consider it a valid method to clip out disconfirmatory data while still counting the remainder as confirmatory, which makes me wonder how competent the rest of the field is.
The responses on RC about the tree ring issue reek of “missing the point”:
The paper in question is the Mann, Bradley and Hughes (1998) Nature paper on the original multiproxy temperature reconstruction, and the ‘trick’ is just to plot the instrumental records along with reconstruction so that the context of the recent warming is clear. Scientists often use the term “trick” to refer to “a good way to deal with a problem”, rather than something that is “secret”, and so there is nothing problematic in this at all. As for the ‘decline’, it is well known that Keith Briffa’s maximum latewood tree ring density proxy diverges from the temperature records after 1960 (this is more commonly known as the “divergence problem”–see e.g. the recent discussion in this paper) and has been discussed in the literature since Briffa et al in Nature in 1998 (Nature, 391, 678-682). Those authors have always recommended not using the post 1960 part of their reconstruction, and so while ‘hiding’ is probably a poor choice of words (since it is ‘hidden’ in plain sight), not using the data in the plot is completely appropriate, as is further research to understand why this happens.
Not using the data at all would be appropriate (or maybe not, since you should include disconfirmatory data points). Including only the data points that agree with you would be very inappropriate, as they certainly can’t count as additional proof once they’re filtered for agreement with the theory.
I’m growing less clear about what your complaint is. If you’re just pointing out a methodological problem in that one paper then I agree with you. If you’re claiming that the whole field is so messed up that no one even realizes it’s a problem, then the paper that I linked looks like a counterexample to your claim. The authors seem to recognize that it’s bad to make ad hoc choices about which proxies to use or which years to apply them to, so they came up with a systematic procedure for selecting proxies (it looks similar to taking all of every proxy that correlates significantly with the 150 years of instrumental temperature records and then averaging those proxy estimates together, but more complicated). And because tree-ring data had been the most problematic (in having a poor fit with the temperature record), they ran a separate set of analyses that excluded those data. They may not explicitly criticize the other methodology, but they’re replacing it with a better methodology, which is good enough for me.
Is it only being rooted out in 2008? There have been a bunch of different proxy reconstructions over the years—are you saying that this 2008 paper was the first one to avoid that methodological problem? Do you know the climate literature well enough to be making these kinds of statements?
There are several factors that can limit tree growth. Sometimes, low temperature is the bottleneck. So, the tree ring data can in any case be considered a reliable indicator of a floor on the temperature. It isn’t any colder than this point.
They try to pick trees that are more likely to find low temperature the bottleneck. Sometimes it isn’t.
That doesn’t mean that the whole series is useless, even if they happen to be using it wrong (and I don’t know that they are).
And I don’t think it’s logically rude to demand that the evidence adhere to the standard safeguards against human failings.
It isn’t logically rude to criticize a science. Though in fairness to climate science, I think nearly every science routinely makes errors similar to the ones you mention. That said, we shouldn’t take this information and conclude that AGW is probably false. Scientists should be Bayesians, and the fact that they’re not is evidence against what they believe… but it isn’t strong enough evidence to reverse the evidence we get from the fact that they’re still scientists.
[W]hat they do is to keep all the data points that support the theory, and throw out the rest, calling it a “divergence problem”, and further, claim the remaining points as additional substantiation of the theory.
And yet the field completely lacks journals with articles criticizing this.
Would you clarify this? That seems on its face to be a very strong, which is to say improbable, claim.
I wasn’t saying journals don’t mention the divergence problem, if that’s what you thought. I was saying they don’t criticize the practice of stripping all the data you don’t like from a dataset and then calling the remaining points further substantiation of your theory. It’s this “trick” that is regarded as commonplace in climatology and thus “no big deal”.
I was saying they don’t criticize the practice of stripping all the data you don’t like from a dataset and then calling the remaining points further substantiation of your theory.
There seem to be two kinds of criticism that it’s important to distinguish. On the one hand, there is the following domain-invariant criticism: “It’s wrong to strip data with no motivation other than you don’t like it.” The difficulty with making this criticism is that you have to justify your claim to be able to read the data-stripper’s mind. You need to show that this really was their only motivation. However, although you might have sufficient Bayesian evidence to justify this claim, you probably don’t have enough scientific evidence to convince a journal editor.
On the other hand, there are domain-specific criticisms: “It’s wrong to strip this specific data, and here are domain-specific reasons why it’s wrong: X, Y, and Z.” (E.g., X might be a domain-specific argument that the data probably wasn’t due to measurement error.) It seems much easier to justify this latter kind of criticism at the standards required for a scientific journal.
These considerations are independent of the domain under consideration. I would expect them to operate in other domains besides climate science. For example, I would expect it to be uncommon to find astronomers accusing each other in peer reviewed journals of throwing out data just because they don’t like it, even though I expect that it probably happens just as often as in climatology.
It’s just easier to avoid getting into psychological motivations for throwing data out if you have a theoretic argument for why the data shouldn’t have been thrown out. This seems sufficient to me to explain your observation.
It’s this “trick” that is regarded as commonplace in climatology and thus “no big deal”.
In that case, you should be able to find climatologists openly admitting to throwing out data just because they don’t like it. But the “just because” part rules out all the alleged examples that I’ve seen, including those from the CRU e-mails.
There seem to be two kinds of criticism that it’s important to distinguish. On the one hand, there is the following domain-invariant criticism: “It’s wrong to strip data with no motivation other than you don’t like it.”
This wasn’t my claim. They may very well have a reason for excluding that data, and were well-intentioned in doing so. It’s just that they don’t understand that when you filter a data set so that it only retains points consistent with theory T, you can’t turn around and use it as evidence of T. And no one ever points this out.
It’s not that they recognize themselves as throwing out data points because they don’t like them; it’s that “well of course these points are wrong—they don’t match the theory!”
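The circularity being described here can be made concrete with a toy simulation (purely illustrative, with made-up noise rather than any real proxy data): if you keep only the series that agree with a pattern, the survivors “confirm” the pattern with certainty, even when the underlying data are pure noise.

```python
import random

# Toy illustration: generate 20 noisy "proxy" series around a flat
# baseline, i.e. there is NO underlying warming trend in the data.
random.seed(0)

def proxy_series(n=50):
    return [random.gauss(0.0, 1.0) for _ in range(n)]

proxies = [proxy_series() for _ in range(20)]

# "Theory T": the last 10 values should trend upward.
def agrees_with_T(series):
    recent = series[-10:]
    return recent[-1] > recent[0]

# Filter the data set for agreement with T ("divergent" series dropped).
kept = [p for p in proxies if agrees_with_T(p)]

# Every kept series now "confirms" T -- but only by construction:
# the filter guarantees 100% agreement regardless of whether T is true.
print(f"{len(kept)} of {len(proxies)} noise series kept; all of them match T")
```

The point of the sketch is that the agreement rate of the filtered set carries no evidential weight about T, since it would come out at 100% whether T were true or false.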
In that case, you should be able to find climatologists openly admitting to throwing out data just because they don’t like it. But the “just because” part rules out all the alleged examples that I’ve seen, including those from the CRU e-mails.
Really? You gave me the impression before you hadn’t read them, based on your reaction to the term “divergence problem”. But if you read them, you know that this is what happened: Scientist 1 notices that data set A shows cooling after time t1. Scientist 2 says, don’t worry, just delete the part after t1, but otherwise continue to use the data set; this is a standard technique. (A brilliant idea, even—i.e. “trick”)
It would be one thing if they said, “Clip out points x304 thru x509 because of data-specific problem P related to that span, then check for conformance with theory T.” But here, it was, “Clip out data on the basis of it being inconsistent with T (hopefully we’ll have a reason later), and then cite it as proof of T.” (The remainder was included in a chart attempting to substantiate T.)
Weren’t they filtering out proxy data because it was inconsistent with the (more reliable) data, not with the theory? The divergence problem is that the tree ring proxy diverges from the actual measured temperatures after 1960. The tree ring data show a pretty good fit with the measured temperatures from 1850 or so to 1960, so it seems like they do serve as a decent proxy for temperature, which raises the questions of 1) what to do with the tree ring data to estimate historical temperatures and 2) why this divergence in trends is happening.
The initial response to question 1 was to exclude the post-1960 data, essentially assuming that something weird happened to the trees after 1960 which didn’t affect the rest of the data set. That is problematic, especially since they didn’t have an answer to question 2, but it’s not as bad as what you’re describing. There’s no need to even consider any theory T. And now there’s been a bunch of research into why the divergence happens and what it implies about the proxy estimates, as well as efforts to find other proxies that don’t behave in this weird way.
Again, the problem is not that they threw out a portion of the series. The problem is throwing out a portion of the series and also using the remainder as further substantiation. Yes, the fact that it doesn’t match more reliable measures is a reason to conclude it’s invalid during one particular period; but having decided this, it cannot count as an additional supporting data point.
If the inference flows from the other measures to the tree ring data, it cannot flow back as reinforcement for the other measures.
But if they’re fitting the tree ring data to another data set and not to the theory, then they don’t have the straightforward circularity problem where the data are being tailored to the theory and then used as confirmation of that theory.
I’m starting to think that there’s a bigger inferential gap between us than I realized. I don’t see how tree ring data has been used “as reinforcement for the other measures,” and now I’m wondering what you mean by it being used to further substantiate the theory, and even what the theory is. Maybe it’s not worth continuing off on this tangent here?
Let me try one last time, with as little jargon as possible. Here is what I am claiming happened, and what its implications are:
Most proxies for temperature follow a temperature vs. time pattern of P1.
Some don’t. They adhere to a different pattern, P2, which is just P1 for a while, and then something different.
Scientists present a claim C1: the past history of temperature is that of P1.
Scientists present data substantating C1. Their data is the proxies following P1.
The scientists provide further data to substantiate C1. That data is the proxies following P2, but with the data that are different from P1 trimmed off.
So scientists were using P2, filtered for its agreement with P1, to prove C1.
That is not kosher.
That method was used in major reports.
That method went uncriticized for years after certainty of C1 was claimed.
That merits an epic facepalm regarding the basic reasoning skills of this field.
Does this exposition differ from what you thought I was arguing before?
Then I guess I just disagree with you. Scientists’ belief about the temperature pattern (P1) from 1850 to the present isn’t based on proxies—it’s based on measurements of the temperature which are much more reliable than any proxy. The best Bayesian estimate of the temperature since 1850 gives almost all of the weight to the measurements and very little weight to any other source of evidence (that is especially true over the past 50 years when measurements have been more rigorous, and that is the time period when P1 and P2 differ).
The tree ring proxy was filtered based on its agreement with the temperature measurements, and then used to estimate temperatures prior to 1850, when we don’t have measurements. If you want to think of it as substantiating something, it helped confirm the estimates made with other proxy data sets (other tree rings, ice cores, etc.), and it was not filtered based on its agreement with those other proxies. So I don’t think that the research has the kind of obvious flaw that you’re describing here.
I do think that the divergence problem raises questions which I haven’t seen answered adequately, but I’ve assumed that those questions were dealt with in the climate literature. The biggest issue I have is with using the tree ring proxy to support the claim that the temperatures of the past few decades are unprecedented (in the context of the past 1500 years or so) when that proxy hasn’t tracked the high temperatures over the past few decades. I thought you might have been referring to that with your “further substantiation” comment, and that either you knew enough about the literature to correct my mistaken assumption that it dealt with this problem, or you were overclaiming in saying that nobody in the field was concerned about this, and we could at least get glimpses of the literature that dealt with it. (And I have gotten those glimpses over the past couple days—Wikipedia cites a paper that raises the possibility that tree rings don’t track temperatures above a certain threshold, and the paper I linked shows that they are trying to use proxies that don’t diverge.)
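For what it’s worth, the calibrate-then-hindcast procedure being described can be sketched roughly as follows. The numbers and the simple linear fit are invented for illustration only; real reconstructions use far more elaborate statistics, but the direction of inference is the same: from instrumental record to proxy, then from proxy to the pre-instrumental past.

```python
import statistics

# Hypothetical instrumental temperatures over a known overlap period,
# the proxy values over those same years, and proxy values from an
# earlier period with no instrumental measurements. All made up.
instrumental = [10.0, 10.2, 10.1, 10.4, 10.6, 10.5, 10.8, 11.0]
proxy_overlap = [1.0, 1.1, 1.05, 1.2, 1.3, 1.25, 1.4, 1.5]
proxy_early = [0.9, 0.8, 0.85, 1.0]

# Calibration: least-squares fit of temperature = a * proxy + b
# over the overlap period.
mx = statistics.mean(proxy_overlap)
my = statistics.mean(instrumental)
a = sum((x - mx) * (y - my) for x, y in zip(proxy_overlap, instrumental)) \
    / sum((x - mx) ** 2 for x in proxy_overlap)
b = my - a * mx

# Hindcast: apply the fitted relation to the pre-instrumental proxy
# values to estimate temperatures where no measurements exist.
reconstruction = [round(a * x + b, 2) for x in proxy_early]
print("slope:", round(a, 2), "reconstruction:", reconstruction)
```

On this picture, the filtering runs against the instrumental record, not against the reconstruction itself, which is the distinction being drawn in the comment above.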
Are we agreed that the rapid rise in CO2 levels, to highs not seen in human history and owing to human intervention, is undisputed fact?
If so, it seems to me that the default extrapolation, from our everyday experience with systems we understand poorly, is that when you turn a dial all the way up without knowing what the heck you’re doing, you won’t like the results. Examples include: numerous cases of introducing animal species (bacteria, sheep, wasps) to populations not adapted to them, said populations then suffering upheaval; stock market crashes; losing two space shuttles; and so on.
The burden of proof seems to be on those who insist that yeah, CO2 levels are rising super fast, but don’t worry, it’ll be business as usual (except winters will be nicer and summers will need a little more ice cubes).
Wha...? Is that an argument by surface analogy? Does every increase in every value owing to human intervention lead to a catastrophe? How about internet connectivity? Land committed to agriculture? Air respired by humans? Shoes built? Radio waves transmitted?
How do you even measure the reference classes appropriately?
For some of these examples, yes, there are catastrophic scenarios on record.
Overgrazing in Iceland to name one I’ve seen first-hand. Beaches despoiled by lethal green algae in France as a result of intensive pig farming is another. Shoes—that’s perhaps an excessively restricted category, but the Pacific Trash Vortex is one consequence of turning the dial up on manufacturing capacity without adequate control of the consequences. Improved Internet connectivity is having demonstrated, large and undesired effects on industries such as entertainment and newspapers.
Radio waves… no, offhand I can’t think of an issue on record with those, unless EMF sensitivity counts—but I would be hugely surprised if that turned out to be real (i.e. not psychogenic; the discomfort could be real).
You mentioned “failed predictions”, but left those unspecified. OK, here is a list of empirical confirmations of positive feedback loops involving CO2. Arctic ice melt is the one I’d lose sleep over, since the methane sequestered in Arctic ice is a much more powerful greenhouse gas than CO2. Ice melt also has an effect on water salinity which indirectly affects thermohaline circulation.
The causal details of how some of these positive feedbacks could bring about deeply undesirable consequences seem to me to be better established than the details of how runaway AI could lead to the destruction of human values. But I may have more to learn about either.
This isn’t analogy, as in “build something that looks like a bird and it will fly”. More like abstracting away from examples in several categories, to “systems that remain stable tend to be characterized by feedback loops, including both negative feedback (such as the governor) for regulation and positive for growth or excitation”. The latter leads to predictions, e.g. if you observe only one type of feedback in a stable system a search for the other type will generally be fruitful.
For instance, we observe that successful community Web sites tend to become even more successful as enthusiast users take the good news outside. Yet very few sites become very big. We can look for regulatory feedback loops. A good one stems from the joke “Nobody goes to that restaurant anymore, it’s too crowded.” As the audience of a community site increases, its output may become difficult to handle, turning people away feeling overwhelmed. I would predict that LW will run out of new commenters before it runs out of readers, that a lowered influx of new commenters leads to staleness in the contributions of post authors, in turn leading post authors to look elsewhere for stimulation.
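The growth-plus-regulation picture can be illustrated with the simplest such model, logistic growth (a generic sketch, not a model of any particular site or of climate): positive feedback dominates while the quantity is small, and the negative feedback term takes over as it approaches capacity.

```python
# Logistic growth: N' = N + r*N*(1 - N/K)
# r*N is the positive feedback (growth compounds on itself);
# (1 - N/K) is the negative feedback that throttles growth as N
# approaches the carrying capacity K, keeping the system stable.
r, K = 0.5, 1000.0
N = 1.0
history = [N]
for _ in range(40):
    N += r * N * (1 - N / K)
    history.append(N)

# Early on, growth is near-exponential; later, N levels off near K.
print(f"start: {history[0]:.1f}, midpoint: {history[20]:.1f}, end: {history[-1]:.1f}")
```

Observing only the early, near-exponential phase would suggest runaway growth; the prediction in the paragraph above amounts to saying the regulatory term is there even when you haven’t seen it bind yet.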
Now, perhaps CO2 levels rising through the roof aren’t going to do anything bad. But that’s as much an argument as saying “perhaps I will win the lottery”.
For some of these examples, yes, there are catastrophic scenarios on record.
Overgrazing in Iceland to name one I’ve seen first-hand. Beaches despoiled by lethal green algae in France as a result of intensive pig farming is another. Shoes—that’s perhaps an excessively restricted category, but the Pacific Trash Vortex is one consequence of turning the dial up on manufacturing capacity without adequate control of the consequences. Improved Internet connectivity is having demonstrated, large and undesired effects on industries such as entertainment and newspapers.
This raises the issue of what exactly people mean by ‘catastrophic’. None of the examples you give are ‘catastrophic’ on anything like the scale of what some prophesy for global warming. I personally think it is a misuse of the word catastrophe to apply it to the situations you describe. If global warming were only forecast to cause problems on that sort of scale then I don’t think anyone would be seriously contemplating the kinds of measures often advocated to mitigate the risk.
The effects of improved Internet connectivity are having large positive effects on the entertainment industries and newspapers from the perspective of most people who aren’t incumbents in those industries, just as technological progress generally benefits societies as a whole while sometimes reducing the income of groups who made their living from the supplanted technologies that preceded them.
None of the examples you give are ‘catastrophic’ on anything like the scale of what some prophesy for global warming.
That’s because you’re cherry-picking. Having the Gulf Stream stop, one of the possible consequences of Arctic Ice melt, would be very unpleasant.
In other cases the effects we’re seeing are only the start of a chain of effects. The Pacific Trash Vortex is basically us dumping tiny plastic particles into our own food chain, ultimately poisoning ourselves. It’s bad in itself, but the knock-on effects will be worse. Sure, it still pales in comparison to some predicted AGW effects: that’s why the latter has become the more pressing issue.
These examples were direct responses to Silas, who meant to ridicule the initial instances I gave of the class of bad things happening as a result of pushing too hard the parameters of systems we understand poorly, on various scales. Many of his own suggestions turn out not to be ridiculous at all, but rather serious matters.
Having the Gulf Stream stop, one of the possible consequences of Arctic Ice melt, would be very unpleasant.
The Gulf Stream makes the difference between Europe and the west coast of North America, not the east coast. Maybe it would be unpleasant, but a catastrophe?
I’ve heard claims that the gulf stream switching off would cause Britain to undergo a climate change that would have consequences I would call ‘catastrophic’, at least in the short term. Some predictions talk about average temperatures dropping by 5-8 C in a matter of months which would have severe consequences for British agriculture and would likely have a noticeable impact on GDP. I’m not sure I put much faith in those predictions however.
This would also be a catastrophe on a different scale from the more alarmist AGW predictions. We’re talking about a major disruption to the British economy but not an existential threat to the human race.
That’s because you’re cherry-picking. Having the Gulf Stream stop, one of the possible consequences of Arctic Ice melt, would be very unpleasant.
I thought you were using that as an example of a potential catastrophic effect of global warming, whereas I was saying none of your examples of things that have actually happened are what I would call catastrophic. I have heard some predictions of what might happen to the climate in Britain if arctic ice melt caused the gulf stream to stop and if those predictions were to pan out then I think ‘catastrophic’ would be an appropriate word to use for the consequences for Britain.
I don’t disagree that some of the predictions for the consequences of AGW are situations for which the word ‘catastrophic’ is appropriate. My point is that some of these predictions are an entirely different scale of disaster from anything you’ve given as an example of actual consequences of human activity to date. The Pacific Trash Vortex cannot reasonably be described as ‘catastrophic’ in my opinion, though dire predictions may exist that if they transpire might justify such language.
Based on the voting patterns, I’m going astray somewhere. We don’t seem to disagree on the facts (high CO2 levels, past environmental damage) and I’m not seeing arguments directed at my reasoning, beyond the criticism of “surface analogy” that I’ve done my best to address. So I’ll let this be my final comment on the topic, and hope to find insight in others’ discussion.
We quite agree there hasn’t yet been a catastrophe on the scale predicted for AGW: we wouldn’t be having this conversation if there had been. If you read the original post all over again, you’ll find that was its entire point. Don’t demand that particular proof.
The Pacific Trash Vortex cannot reasonably be described as ‘catastrophic’ in my opinion
We don’t want to play dictionary games with the word “catastrophe”. One constructive proposal would be to consider the cost to our economies of cleaning up one or the other of these environmental impacts—including their knock-on effects—versus the costs of prevention. We haven’t incurred the costs of the Trash Vortex yet, it’s not making itself felt to you; but it’s nevertheless a fact not a prediction, and we can base estimates on it.
The typical cost of cleaning up an oil spill seems to be on the order of $10M per ton. The Pacific garbage patch may contain as much as 100 million tons of plastic debris. As an order of magnitude estimate, one Trash Vortex appears to be worth one subprime crisis, albeit spread out over a longer period.
We’re clearly in Black Swan territory, and yet this is just one example picked almost at random (in fact, picked from what Silas took to be counterexamples).
I’m not seeing arguments directed at my reasoning, beyond the criticism of “surface analogy” that I’ve done my best to address.
Ok, I’ll try and make it more explicit. Your reasoning seems to be that our experience with complex systems that we don’t fully understand is that disrupting them has bad unintended consequences and therefore the burden of proof is on those who suggest that we don’t need to take drastic action to reduce CO2 levels.
I don’t think your conclusion follows from your premise because it seems to me that there are no examples of bad unintended consequences that we haven’t been able to deal with without paying an excessive cost and few examples of bad unintended consequences that even end up with a negative overall economic cost. The only reasonable argument for adopting the kind of drastic and hugely expensive measures necessary to significantly reduce CO2 levels is that the potential effects are so catastrophic that we can’t afford to risk them. There are no examples of similar situations in the past, though as you rightly point out that is not strong evidence that such situations cannot happen since we might not be around to discuss the issue if they had. On the other hand there are lots of examples of dire/catastrophic predictions that have failed to pan out, although in some cases mitigating action has been taken that means we haven’t had the control experiment of doing nothing.
It seems to me that the burden of proof is still very much on those who argue we must take very economically costly actions now because unlike previous problems which have turned out to be relatively cheap to deal with this problem poses a significant risk of genuine catastrophe.
One constructive proposal would be to consider the cost to our economies of cleaning up one or the other of these environmental impacts—including their knock-on effects—versus the costs of prevention. We haven’t incurred the costs of the Trash Vortex yet, it’s not making itself felt to you; but it’s nevertheless a fact not a prediction, and we can base estimates on it.
It’s also important to consider the cost of doing nothing and dealing with the consequences. The trash vortex is a problematic example to use here because there have not been any significant bad consequences yet. It may be a fact that it exists but I haven’t found any estimates of the economic cost it is imposing right now and only vague warnings of possible higher pollutant levels in future.
If the cost of doing nothing about CO2 levels were similar to the cost we appear to be paying for doing nothing about the Pacific Trash Vortex then it would be a no brainer to do nothing about CO2 levels.
Ah, that particular idea that all human pleasures are harmful for the environment is pretty much religious. That’s not at all what the impact is like.
Computing is basically blameless in the direct sense for global warming. We should probably enjoy it as much as possible. Electricity is good. Trains are good. Holidaying is good.
Air conditioning is bad. Air travel is bad. Short product lifetime is bad.
The situation is far more positive than some make it out to be. Even the direst climate change predictions necessitate drastic changes in only some aspects of life.
AGW can’t take away modern medicine or virtual reality from you.
Why do you think “harmful for the environment” means “leading to global warming”? Lots of things are harmful for the environment. Drying swamps to make railroads harms it. Holidaying leads to decreased “old habitat” biodiversity. Building power plants on small mountain rivers leads to decreased biodiversity, too. Yes, these things are good for us. That just has no bearing on whether they are good for nature.
You claim there are significant issues with the climate science process, but admit there are no journal articles criticizing the process. If you know enough to find faults with their science, why haven’t you yourself written an article on the matter?
Do you think there is something inherent in the culture of climatology science that introduces these anti-Bayesian biases? Why is climate science subject to this when other sciences are not?
Are you saying the field is systemically politically driven from the top down?
Have you followed the climategate email leak story at all? One of the more damning themes in the leaked emails is the discussion of ways to keep dissenting views out of the peer reviewed journals. One of the stronger arguments used against AGW skeptics was that there were not more papers supporting their claims in peer reviewed journals. Given the prevalence of this argument, clear evidence of efforts to keep ‘dissenting’ opinions out of the main peer reviewed journals is a big problem for the credibility of climate science. For example:
The group also did not approve of the American Geophysical Union (AGU) and its choices allowing opposing views to be heard. The group’s trade publication, Geophysical Research Letters (GRL), was targeted by Michael Mann as he wrote, “I’m not sure that GRL can be seen as an honest broker in these debates anymore.” He however acknowledged the publication’s importance, saying, “We can’t afford to lose GRL.”
Mann seemed particularly concerned about a ‘contrarian’ with the name Saiers, presumably James Saiers of the Yale School of Forestry & Environmental Studies. “Apparently, the contrarians now have an “in” with GRL. This guy Saiers has a prior connection w/ the University of Virginia Dept. of Environmental Sciences [where Saiers completed his PhD] that causes me some unease,” Mann wrote.
Tom Wigley, a senior scientist in the Climate and Global Dynamics Division at NCAR, felt though that they could deal with Saiers by getting him removed from the AGU. “If you think that Saiers is in the greenhouse skeptics camp, then, if we can find documentary evidence of this, we could go through official AGU channels to get him ousted.”
This was the danger of always criticising the skeptics for not publishing in the “peer-reviewed literature”. Obviously, they found a solution to that—take over a journal!
So what do we do about this? I think we have to stop considering “Climate Research” as a legitimate peer-reviewed journal. Perhaps we should encourage our colleagues in the climate research community to no longer submit to, or cite papers in, this journal. We would also need to consider what we tell or request of our more reasonable colleagues who currently sit on the editorial board...
What, specifically, is “damning” about those quotes?
Suppose creationists took over a formerly respected biology journal. Wouldn’t you expect to find quotes like the above (with climate sceptics replaced by creationists) from the private correspondence of biologists?
AGW skeptics have often been challenged on the lack of peer reviewed papers in credible climate science journals supporting their arguments. Now it is quite possible that this is the case because skeptical papers have been rejected purely due to being bad science (as is the case with the lack of papers supporting the effectiveness of homeopathy in medical journals). However, the absence of papers from the key journals cannot be treated as independent evidence of the badness of the science if there is a concerted effort by AGW believers to keep such papers out of the journals.
It is legitimate to attack the science the AGW skeptics are doing. It is not legitimate to dismiss the science purely on the basis that they have not been published in peer reviewed journals if there is a concerted effort to keep them out of peer reviewed journals based on their conclusions rather than on their methods. Now I’m sure the AGW believers feel that they are rejecting bad science rather than rejecting conclusions they don’t like but emails like the above certainly make it appear that it is the conclusions as much as the methods that they are actually objecting to.
In my opinion the CRU emails mean that it no longer appears justified to ignore claims by AGW skeptics purely because they have not appeared in a peer reviewed journal. They may still be wrong but there is sufficient evidence of biased selection by the journals to not trust that journal publication is an unbiased signal of scientific quality.
Agreed. “No peer-reviewed publications” is not an argument that I’ve ever used or would use, even in advance of the CRU emails, because of course that is how academia works in general.
For the most part, I don’t think you’re quite answering my question.
You present two explanations for the lack of peer-reviewed articles that are sceptical of the scientific consensus on global warming. The first is that there is unjust suppression of such views. The second is that such scepticism is based on bad science. You say that you think the leaked emails support the first explanation, and that there is sufficient evidence of biased (I’m guessing “biased” means “unmerited by the quality of the science” here) selection by journals. What is that sufficient evidence? More specifically, how does the information conveyed by the leaked emails distinguish between the first and second scenarios?
Now I’m sure the AGW believers feel that they are rejecting bad science rather than rejecting conclusions they don’t like but emails like the above certainly make it appear that it is the conclusions as much as the methods that they are actually objecting to.
This addresses my questions, but I was asking for more specifics. Let A = “AGW sceptics are being suppressed from journals without proper evaluation of their science” and B = “AGW sceptics are being suppressed from journals because their science is unsound”. Let E be the information provided by the email leaks. How do you get to the conclusion that the likelihood ratio P(E|A)/P(E|B) is significantly above 1?
Personally I can’t see how the likelihood ratio would be anything but about 1, and it seems to me that those who act as if the ratio is significantly greater than 1 are simply ignoring the estimation of P(E|B) because their prior for P(B) is small.
(EDIT: I originally wrote P(A|E) and P(B|E) instead P(E|A) and P(E|B). My text was still, apparently, clear enough that this wrong notation didn’t cause confusion. I’ve now fixed the notation.)
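The point being argued here can be made concrete with Bayes’ rule in odds form: the evidence E only moves you toward A to the extent that the likelihood ratio P(E|A)/P(E|B) exceeds 1. A toy calculation, with all numbers made up purely for illustration:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# If P(E|A)/P(E|B) is about 1, the evidence E doesn't move beliefs at all.
# All numbers below are illustrative, not estimates of anything real.

def posterior_odds(prior_odds, likelihood_ratio):
    """Update prior odds of A vs B given the likelihood ratio P(E|A)/P(E|B)."""
    return prior_odds * likelihood_ratio

prior_odds = 0.25  # hypothetical prior odds of A (suppression) vs B (bad science)

# If the emails are about equally probable under both hypotheses,
# the odds are unchanged:
print(posterior_odds(prior_odds, 1.0))  # 0.25

# Only a likelihood ratio well above 1 shifts the odds toward A:
print(posterior_odds(prior_odds, 5.0))  # 1.25
```

This is why the disagreement turns entirely on whether P(E|B) is comparable to P(E|A), not on how well E fits A alone.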
I do think the likelihood ratio is significantly above 1. This is based on reading some of the emails, documents and code comments in the leaks. Here’s a reasonable summary of the emails. It looks like dubious science to me. I find it hard to understand how anyone can claim otherwise unless they are ideologically motivated. If you genuinely can’t see it then I’m not really interested in arguing over minutiae so we’ll just have to leave it at that.
It seems to me that AGW skeptics made a variety of claims that AGW believers dismissed as paranoid: there was a conspiracy to keep skeptical papers out of the journals; there were efforts to damage the careers of climate scientists who didn’t ‘toe the party line’; there were dubious and possibly illegal efforts to keep the original data behind key papers out of the hands of skeptics despite FOI regulations. I did not see many AGW believers prior to the climategate emails saying “Yes, of course all of that happens, that’s just the way science operates in the real world”.
When the CRU leaks became public and substantiated all the ‘paranoid’ claims above, including proof of illegal destruction of emails and data to avoid FOI requests, I find it suspicious when people claim that it doesn’t change their opinions at all. The standard response seems to be “Oh yes, that’s just how science works in the real world. I already knew scientists routinely engage in this sort of behaviour and the degree of such behaviour revealed in the emails is exactly in line with my prior expectations so my probability estimates are unchanged”. That seems highly suspect to me and looks an awful lot like confirmation bias.
You’re still talking about how the e-mails fit into the scenario of fraudulent climate scientists, that is, P(E|A) by my notation. I specifically said that I feel P(E|B) is being ignored by those who claim the e-mails are evidence of misconduct. Your link, for example, mostly lists things like climatologists talking about discrediting journals that publish AGW-sceptical stuff, which is exactly what they would do if they, in good faith, thought that AGW-scepticism is based on quack science. Reading the e-mails and concluding that sceptical papers are being suppressed without merit seems like merely assuming the conclusion.
(Regarding the FOI requests, that might indeed be something that might reasonably set off alarms and significantly reduce P(E|B) - if you believe the sceptics’ commentaries accompanying the relevant quotes. But googling for “mcintyre foi harassment” and doing some reading gives a different story.)
My impression from reading the emails is that different standards are being applied to the AGW skeptics because of their conclusions rather than because of their methods. At the same time there is evidence of data massaging and dubious practices around their own methods in order to match their pre-conceived conclusions. The whole process does not look like the disinterested search for truth that is the scientific ideal.
My P(B|E) would be higher if I read emails that seemed to focus on methodological errors first rather than proceeding from the fact that a journal has published unwelcome conclusions to the proposal that the journal must be boycotted.
I think there’s too much attention paid to the emails, and not enough to all of the publicly available information about the exact same events. Maybe it’s because private communications seem like secret information that contain the hidden truth, or maybe it’s just a cascade effect where everyone focuses on the emails because everyone is focusing on the emails.
The second email that you quoted is in response to the publication of a skeptical article by Soon & Baliunas (2003) in the journal Climate Research which generated a big public controversy among climate scientists. Reactions to that publication include several editors of the journal resigning in protest (and releasing statements about why they resigned), the publisher of the journal writing a letter admitting that the article contained claims that weren’t supported by the evidence (pdf), and a scientific rebuttal to the article being published later that same year. I think that you get a better sense of what happened (and whether climate scientists were reacting to the methods or just the conclusions) by reading accounts written at the time than from the snippets of emails. And of course there’s Wikipedia.
FOI requests? Which ones? Those for proprietary data sets that they weren’t allowed at that time to release, or the FOI requests for information available from a public FTP site?
You claim there are significant issues with the climate science process, but admit there are no journal articles criticizing the process. If you know enough to find faults with their science, why haven’t you yourself written an article on the matter?
For the same reason I haven’t personally solved every injustice: a) time constraints, and b) others are currently raising awareness of this problem.
Do you think there is something inherent in the culture of climatology science that introduces these anti-Bayesian biases? Why is climate science subject to this when other sciences are not?
Other sciences are affected by anti-Bayesian biases, and this will be a tendency in proportion to the difficulty of finding solid evidence that your theory is wrong. Which is why I claim e.g. sociology and literature are mostly a waste of time.
Generally speaking, science is in some ways too strict and some ways not strict enough. Eliezer_Yudkowsky has actually pointed out before the general failure to appropriately teach rationality in the classroom, and so scientists in general aren’t aware of this problem.
Politics, of course, does play a part. When it’s not just about “who’s right” but about “who gets to control resources”, then the biases go into hyperdrive. People aren’t just pointing out problems with your research, they’re fighting for the other team! The goal is then about proving them wrong, not stopping to check whether your theory is correct in the first place. (“Ask whether, not why.”)
I basically agree with SilasBarta. If you look carefully, what’s going on in climate science is absolutely appalling.
One can ask a simple probability question: Given that a climate simulation matches history, what is the probability that it will accurately predict the future?
Another question: What evidence is there that climate simulations are accurate besides the fact that they match history?
And another question: If you take 10 or 15 iffy climate simulations, average them, and then use a bootstrap or equivalent method to produce a 95% confidence interval, are you actually accomplishing anything?
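To make the third question concrete, here is a minimal sketch (with a made-up ensemble of numbers) of the procedure being questioned: bootstrapping a 95% interval from a small ensemble of model outputs. The interval it produces reflects only the spread among the models; if the models share a common bias, the interval tells you nothing about the true value.

```python
# Bootstrap a 95% percentile interval for the mean of a small model ensemble.
# The ensemble values are invented for illustration (e.g. sensitivity estimates).
import random

random.seed(0)
ensemble = [2.1, 2.4, 1.9, 3.0, 2.7, 2.2, 2.8, 1.8, 2.5, 2.3]

means = []
for _ in range(10_000):
    # Resample the ensemble with replacement and record the resample mean.
    resample = [random.choice(ensemble) for _ in ensemble]
    means.append(sum(resample) / len(resample))

means.sort()
lo, hi = means[int(0.025 * len(means))], means[int(0.975 * len(means))]
print(f"95% bootstrap interval for the ensemble mean: [{lo:.2f}, {hi:.2f}]")
```

Note that the interval is tight around the ensemble mean by construction; it quantifies sampling variability among the 10 models, not how far all 10 might jointly be from reality.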
I was wondering how long it would be until the AGW issue was directly broached on a top-level post. Here I will state my views on it.
First, I want to fend off the potential charge of motivated cognition. I have spent the better part of two years criticizing fellow “libertarians” for trivializing the issue, and especially for their rationalizations of “Screw the Bengalis” even when they condition on AGW being true. I don’t have the links gathered in one place, but just look here and here, and linked discussions, for examples.
That said, here are the warning signs for me (this is just to summarize, will gather links later if necessary):
1) Failed predictions. Given the complexity of the topic, your models inevitably end up doing curve-fitting. (Contrary to a popular misconception, they do not go straight from “the equations they design planes from” to climate models.) That gives you significant leeway in fitting the data to your theory. To be scientific and therefore remove the ability of humans to bias the data, it is vital that model predictions be validated against real-world results. They’ve failed, badly: they predicted, by existing measures of “global temperature”, that it would be much higher than it is now.
2) Anti-Bayesian methodology accepted as commonplace. As an example, regarding the “hide the decline” issue with the tree rings, here’s what happened: Scientists want to know how hot it was millennia ago. Temperature records weren’t kept then. So, they measure by proxies. One common proxy is believed to be tree rings. But the tree-ring record doesn’t match the measured temperatures in the time period for which we have the best data.
The correct procedure at this point is to either a) recognize that they aren’t good proxies, or b) include them in toto as an outlier data point. Instead, what they do is to keep all the data points that support the theory, and throw out the rest, calling it a “divergence problem”, and further, claim the remaining points as additional substantiation of the theory. Do I need to explain here what’s wrong with that?
And yet the field completely lacks journals with articles criticizing this.
3) Error cascades. Despite the supposed independence of the datasets, they ultimately come from only a few interbred sources, and further data is tuned so that it matches these data sets. People are kept out of publication, specifically on the basis that their data contradicts the “correct” data.
Finally, you can’t just argue, “The scientists believe AGW, I trust scientists, ergo, the evidence favors AGW.” Science is a method, not a person. AGW is credible to the extent that there is Bayesian evidence for it, and to the extent scientists are following science and finding Bayesian evidence. The history of the field is a history of fitting the data to the theory and increasing pressure to make sure your data conforms to what the high-status people decreed is correct.
Again, if the field is cleansed and audited and the theory turns out to hold up and be a severe problem, I would love for CO2 emissions to finally have their damage priced in so that they’re not wastefully done, and I pity the fools that demand Bengalis go and sue each emitter if they want compensation. But that’s not where we are.
And I don’t think it’s logically rude to demand that the evidence adhere to the standard safeguards against human failings.
http://www.overcomingbias.com/2009/11/its-news-on-academia-not-climate.html
People are crazy, the world is mad. Of course there’s gross misbehavior by climate scientists, just like the rest of academia is malfunctioning. But the amount of scrutiny leveled on climate science is vastly greater than the amount of scrutiny leveled on, say, the dietary scientists who randomly made up the idea that saturated fat was bad for you; and the scrutiny really hasn’t turned up anything that bad, just typical behavior by “working” scientists. So I doubt that this is one of the cases where the academic field is just grossly entirely wrong.
It just occurred to me that this really needs to be the title of a short popular book on heuristics and biases.
The book title had already occurred to me, but it shouldn’t be the first book in the series.
A good related video:
http://www.ted.com/talks/sendhil_mullainathan.html
http://en.wikipedia.org/wiki/Saturated_fat#Saturated_fat_intake_and_disease_-_Claimed_associations
...doesn’t look as though scientists were “randomly making things” up to me.
But what they’re saying fails to account for a lot of data. They’re ignoring it.
A popular article (w/Seth Roberts) covering the issue: http://freetheanimal.com/2009/09/saturated-fat-intake-vs-heart-disease-stroke.html
2010 Harvard School of Public Health (intervention/meta-analysis): Meta-analysis of prospective cohort studies evaluating the association of saturated fat with cardiovascular disease
Saturated fat, carbohydrate, and cardiovascular disease
Another meta-analysis: The questionable role of saturated and polyunsaturated fatty acids in cardiovascular disease
Population Studies: Cardiovascular disease in the masai
Cholesterol, coconuts, and diet on Polynesian atolls: a natural experiment: the Pukapuka and Tokelau island studies
Cardiovascular event risk in relation to dietary fat intake in middle-aged individuals: data from The Malmö Diet and Cancer Study
I am not particularly interested in a discussion of the virtues of saturated fat. It certainly seems like a bad example of scientists randomly making things up, though.
FWIW, here is a reasonably well-balanced analysis of the 2010 study you mentioned:
“Study fails to link saturated fat, heart disease”
http://www.reuters.com/article/idUSTRE61341020100204
If you look at guidance on saturated fat it often recommends replacing it with better fats—e.g.:
“You should replace foods high in saturated fats with foods high in monounsaturated and/or polyunsaturated fats.”
http://www.americanheart.org/presenter.jhtml?identifier=3045790
Epidemiological studies no doubt include many who substituted saturated fats with Twinkies.
Where does the “guidance” come from? You can’t cite “guidance” as evidence against the proposition that dietary scientists were making stuff up.
I was explaining a problem with studies like the one cited—in exploring the hypotheses that saturated fats are inferior to various other fats. Basically, they don’t bear on those hypotheses.
In this particular case, the authors pretty clearly stated that: “More data are needed to elucidate whether CVD risks are likely to be influenced by the specific nutrients used to replace saturated fat.”
Yes, and I expect that if you put this much scrutiny on most fields, where they are well-protected from falsification, you’d find the same thing. Like you said, scientists aren’t usually trained in the rationalist arts, and can keep bad ideas alive much longer than they should be.
But this doesn’t mean we should just shrug it off as “just the way it works”; we should appropriately discount their evidence for having a less reliable truth-finding procedure if we’re not already assuming as much.
Another difference is that climate scientists are deriving lots and lots of attention, funding, and prestige out of worldwide concern for global warming.
True—they seem ignorant of the “politics is the mind-killer” phenomenon. A boring research field may yield reliable science—but once huge sums of money start to depend on its findings, you have to spend proportionally more effort keeping out bias—such as by making your findings impossible to fake (i.e. no black-box methods for filtering the raw data).
Which climate researchers failed at tremendously.
How thorough is your knowledge of the AGW literature, Silas? I’m only familiar with bits and pieces of it, much of it filtered through sites like Real Climate, but what I’ve seen suggests that climate scientists are doing better than you indicate. For instance, the paper described here includes estimates excluding tree ring data as well as estimates that include tree ring data, because of questions about the reliability of that data (and it cites a bunch of other articles that have addressed that issue). They also describe methods for calibrating and validating proxy data that I haven’t tried to understand, but which seem like the sort of thing that they should be doing.
I think the narrow issue of multi-proxy studies teaches an interesting lesson to folks who like to think of things in terms of Bayesian probabilities.
I would submit that at a bare minimum, any multi-proxy study (such as the one you cite) needs to provide clear inclusion and exclusion criteria for the proxies which are used and not used.
Let’s suppose that there is a universe of 300 possible temperature proxies which can be used and Michael Mann chooses 30 for his paper. If he does not explain to us how he chose those 30, then how can anyone have any confidence in his results?
I haven’t read the paper myself, but here’s what the infamous Steve McIntyre says:
Yes, I’ve followed Real Climate, on and off, and with greater intensity after the Freakonomics fiasco (where RCers were right because of how sloppy the Freakons were), which directly preceded climategate. FWIW, I haven’t been impressed with how they handle stuff outside their expertise, like the time-discounting issue.
As for the paper you mention, my primary concern is not that the tree data by itself overturns everything, but rather, that they consider it a valid method to clip out disconfirmatory data while still counting the remainder as confirmatory, which makes me wonder how competent the rest of the field is.
The responses on RC about the tree ring issue reek of “missing the point”:
Not using the data at all would be appropriate (or maybe not, since you should include disconfirmatory data points). Including only the data points that agree with you would be very inappropriate, as they certainly can’t count as additional proof once they’re filtered for agreement with the theory.
I’m growing less clear about what your complaint is. If you’re just pointing out a methodological problem in that one paper then I agree with you. If you’re claiming that the whole field is so messed up that no one even realizes it’s a problem, then the paper that I linked looks like a counterexample to your claim. The authors seem to recognize that it’s bad to make ad hoc choices about which proxies to use or which years to apply them to, so they came up with a systematic procedure for selecting proxies (it looks similar to taking all of every proxy that correlates significantly with the 150 years of instrumental temperature records and then averaging those proxy estimates together, but more complicated). And because tree-ring data had been the most problematic (in having a poor fit with the temperature record), they ran a separate set of analyses that excluded those data. They may not explicitly criticize the other methodology, but they’re replacing it with a better methodology, which is good enough for me.
You don’t understand why I’m suspicious that a fundamental problem with their methodology, widely used as proof, is only being rooted out in 2008?
Be glad it’s happening at all.
Is it only being rooted out in 2008? There have been a bunch of different proxy reconstructions over the years—are you saying that this 2008 paper was the first one to avoid that methodological problem? Do you know the climate literature well enough to be making these kinds of statements?
There are several factors that can limit tree growth. Sometimes, low temperature is the bottleneck. So, the tree ring data can in any case be considered a reliable indicator of a floor on the temperature. It isn’t any colder than this point.
They try to pick trees that are more likely to find low temperature the bottleneck. Sometimes it isn’t.
That doesn’t mean that the whole series is useless, even if they happen to be using it wrong (and I don’t know that they are).
It isn’t logically rude to criticize a science. Though in fairness to climate science, I think nearly every science routinely makes errors similar to the ones you mention. That said, we shouldn’t take this information and conclude that AGW is probably false. Scientists should be Bayesians, and the fact that they’re not is evidence against what they believe… but it isn’t strong enough evidence to reverse the evidence we get from the fact that they’re still scientists.
Would you clarify this? That seems on its face to be a very strong, which is to say improbable, claim.
The first hit on Google scholar for climate “divergence problem” turns up this: On the ‘Divergence Problem’ in Northern Forests: A review of the tree-ring evidence and possible causes from the journal Global and Planetary Change. From a cursory glance at the abstract, it seems to fit the bill.
I wasn’t saying journals don’t mention the divergence problem, if that’s what you thought. I was saying they don’t criticize the practice of stripping all the data you don’t like from a dataset and then calling the remaining points further substantiation of your theory. It’s this “trick” that is regarded as commonplace in climatology and thus “no big deal”.
There seem to be two kinds of criticism that it’s important to distinguish. On the one hand, there is the following domain-invariant criticism: “It’s wrong to strip data with no motivation other than you don’t like it.” The difficulty with making this criticism is that you have to justify your claim to be able to read the data-stripper’s mind. You need to show that this really was their only motivation. However, although you might have sufficient Bayesian evidence to justify this claim, you probably don’t have enough scientific evidence to convince a journal editor.
On the other hand, there are domain-specific criticisms: “It’s wrong to strip this specific data, and here are domain-specific reasons why it’s wrong: X, Y, and Z.” (E.g., X might be a domain-specific argument that the data probably wasn’t due to measurement error.) It seems much easier to justify this latter kind of criticism at the standards required for a scientific journal.
These considerations are independent of the domain under consideration. I would expect them to operate in other domains besides climate science. For example, I would expect it to be uncommon to find astronomers accusing each other in peer reviewed journals of throwing out data just because they don’t like it, even though I expect that it probably happens just as often as in climatology.
It’s just easier to avoid getting into psychological motivations for throwing data out if you have a theoretic argument for why the data shouldn’t have been thrown out. This seems sufficient to me to explain your observation.
In that case, you should be able to find climatologists openly admitting to throwing out data just because they don’t like it. But the “just because” part rules out all the alleged examples that I’ve seen, including those from the CRU e-mails.
This wasn’t my claim. They may very well have a reason for excluding that data, and were well-intentioned in doing so. It’s just that they don’t understand that when you filter a data set so that it only retains points consistent with theory T, you can’t turn around and use it as evidence of T. And no one ever points this out.
It’s not that they recognize themselves as throwing out data points because they don’t like them; it’s that “well of course these points are wrong—they don’t match the theory!”
Really? You gave me the impression before you hadn’t read them, based on your reaction to the term “divergence problem”. But if you read them, you know that this is what happened: Scientist 1 notices that data set A shows cooling after time t1. Scientist 2 says, don’t worry, just delete the part after t1, but otherwise continue to use the data set; this is a standard technique. (A brilliant idea, even—i.e. “trick”)
It would be one thing if they said, “Clip out points x304 thru x509 because of data-specific problem P related to that span, then check for conformance with theory T.” But here, it was, “Clip out data on the basis of it being inconsistent with T (hopefully we’ll have a reason later), and then cite it as proof of T.” (The remainder was included in a chart attempting to substantiate T.)
Weren’t they filtering out proxy data because it was inconsistent with the (more reliable) data, not with the theory? The divergence problem is that the tree ring proxy diverges from the actual measured temperatures after 1960. The tree ring data show a pretty good fit with the measured temperatures from 1850 or so to 1960, so it seems like they do serve as a decent proxy for temperature, which raises the questions of 1) what to do with the tree ring data to estimate historical temperatures and 2) why this divergence in trends is happening.
The initial response to question 1 was to exclude the post-1960 data, essentially assuming that something weird happened to the trees after 1960 which didn’t affect the rest of the data set. That is problematic, especially since they didn’t have an answer to question 2, but it’s not as bad as what you’re describing. There’s no need to even consider any theory T. And now there’s been a bunch of research into why the divergence happens and what it implies about the proxy estimates, as well as efforts to find other proxies that don’t behave in this weird way.
Again, the problem is not that they threw out a portion of the series. The problem is throwing out a portion of the series and also using the remainder as further substantiation. Yes, the fact that it doesn’t match more reliable measures is a reason to conclude it’s invalid during one particular period; but having decided this, it cannot count as an additional supporting data point.
If the inference flows from the other measures to the tree ring data, it cannot flow back as reinforcement for the other measures.
But if they’re fitting the tree ring data to another data set and not to the theory, then they don’t have the straightforward circularity problem where the data are being tailored to the theory and then used as confirmation of that theory.
I’m starting to think that there’s a bigger inferential gap between us than I realized. I don’t see how tree ring data has been used “as reinforcement for the other measures,” and now I’m wondering what you mean by it being used to further substantiate the theory, and even what the theory is. Maybe it’s not worth continuing off on this tangent here?
Let me try one last time, with as little jargon as possible. Here is what I am claiming happened, and what its implications are:
Most proxies for temperature follow a temperature vs. time pattern of P1.
Some don’t. They adhere to a different pattern, P2, which is just P1 for a while, and then something different.
Scientists present a claim C1: the past history of temperature is that of P1.
Scientists present data substantiating C1. Their data is the proxies following P1.
The scientists provide further data to substantiate C1. That data is the proxies following P2, but with the data that are different from P1 trimmed off.
So scientists were using P2, filtered for its agreement with P1, to prove C1.
That is not kosher.
That method was used in major reports.
That method went uncriticized for years after certainty of C1 was claimed.
That merits an epic facepalm regarding the basic reasoning skills of this field.
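The circularity in the steps above can be illustrated with a toy sketch (the series and the tolerance are invented for illustration, not real proxy data):

```python
import random

random.seed(0)

# Toy "theory" P1: temperature rises linearly over time.
def p1(t):
    return 0.01 * t

# A proxy series that follows P1 early on, then diverges (like P2).
proxy = [p1(t) + random.gauss(0, 0.05) for t in range(100)]
proxy += [p1(100) - 0.02 * t + random.gauss(0, 0.05) for t in range(50)]

# Filter: keep only the points consistent with P1 (within some tolerance).
kept = [(t, x) for t, x in enumerate(proxy) if abs(x - p1(t)) < 0.15]

# The kept points agree with P1 *by construction*, so citing their
# agreement as further evidence for P1 is circular: the filter guarantees it.
agreement = all(abs(x - p1(t)) < 0.15 for t, x in kept)
print(agreement)  # True no matter what the full series looked like
```

The filtered series will pass the consistency check whether or not the underlying data actually supported P1, which is why it cannot count as an additional confirmation.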
Does this exposition differ from what you thought I was arguing before?
Then I guess I just disagree with you. Scientists’ belief about the temperature pattern (P1) from 1850 to the present isn’t based on proxies—it’s based on measurements of the temperature which are much more reliable than any proxy. The best Bayesian estimate of the temperature since 1850 gives almost all of the weight to the measurements and very little weight to any other source of evidence (that is especially true over the past 50 years when measurements have been more rigorous, and that is the time period when P1 and P2 differ).
The tree ring proxy was filtered based on its agreement with the temperature measurements, and then used to estimate temperatures prior to 1850, when we don’t have measurements. If you want to think of it as substantiating something, it helped confirm the estimates made with other proxy data sets (other tree rings, ice cores, etc.), and it was not filtered based on its agreement with those other proxies. So I don’t think that the research has the kind of obvious flaw that you’re describing here.
I do think that the divergence problem raises questions which I haven’t seen answered adequately, but I’ve assumed that those questions were dealt with in the climate literature. The biggest issue I have is with using the tree ring proxy to support the claim that the temperatures of the past few decades are unprecedented (in the context of the past 1500 years or so) when that proxy hasn’t tracked the high temperatures over the past few decades. I thought you might have been referring to that with your “further substantiation” comment, and that either you knew enough about the literature to correct my mistaken assumption that it dealt with this problem, or you were overclaiming in saying that nobody in the field was concerned about it, in which case we could at least get glimpses of the literature that dealt with it. (And I have gotten those glimpses over the past couple days—Wikipedia cites a paper that raises the possibility that tree rings don’t track temperatures above a certain threshold, and the paper I linked shows that they are trying to use proxies that don’t diverge.)
Are we agreed that the rapid rise in CO2 levels, to highs not seen in human history and owing to human intervention, is undisputed fact?
If so, it seems to me that the default extrapolation, from our everyday experience with systems we understand poorly, is that when you turn a dial all the way up without knowing what the heck you’re doing, you won’t like the results. Examples include: numerous cases of introducing animal species (bacteria, sheep, wasps) to populations not adapted to them, said populations then suffering upheaval; stock market crashes; losing two space shuttles; and so on.
The burden of proof seems to be on those who insist that yeah, CO2 levels are rising super fast, but don’t worry, it’ll be business as usual (except winters will be nicer and summers will need a few more ice cubes).
Wha...? Is that an argument by surface analogy? Does every increase in every value owing to human intervention lead to a catastrophe? How about internet connectivity? Land committed to agriculture? Air respired by humans? Shoes built? Radio waves transmitted?
How do you even measure the reference classes appropriately?
For some of these examples, yes, there are catastrophic scenarios on record.
Overgrazing in Iceland to name one I’ve seen first-hand. Beaches despoiled by lethal green algae in France as a result of intensive pig farming is another. Shoes—that’s perhaps an excessively restricted category, but the Pacific Trash Vortex is one consequence of turning the dial up on manufacturing capacity without adequate control of the consequences. Improved Internet connectivity is having demonstrated, large and undesired effects on industries such as entertainment and newspapers.
Radio waves… no, offhand I can’t think of an issue on record with those, unless EMF sensitivity counts—but I would be hugely surprised if that turned out to be real (i.e. not psychogenic; the discomfort could be real).
You mentioned “failed predictions”, but left those unspecified. OK, here is a list of empirical confirmations of positive feedback loops involving CO2. Arctic ice melt is the one I’d lose sleep over, since the methane sequestered in Arctic ice is a much more powerful greenhouse gas than CO2. Ice melt also has an effect on water salinity which indirectly affects thermohaline circulation.
The causal details of how some of these positive feedbacks could bring about deeply undesirable consequences seem to me to be better established than the details of how runaway AI could lead to the destruction of human values. But I may have more to learn about either.
This isn’t analogy, as in “build something that looks like a bird and it will fly”. More like abstracting away from examples in several categories, to “systems that remain stable tend to be characterized by feedback loops, including both negative feedback (such as the governor) for regulation and positive for growth or excitation”. The latter leads to predictions, e.g. if you observe only one type of feedback in a stable system a search for the other type will generally be fruitful.
For instance, we observe that successful community Web sites tend to become even more successful as enthusiast users take the good news outside. Yet very few sites become very big. We can look for regulatory feedback loops. A good one stems from the joke “Nobody goes to that restaurant anymore, it’s too crowded.” As the audience of a community site increases, its output may become difficult to handle, turning people away feeling overwhelmed. I would predict that LW will run out of new commenters before it runs out of readers, that a lowered influx of new commenters leads to staleness in the contributions of post authors, in turn leading post authors to look elsewhere for stimulation.
Now, perhaps CO2 levels rising through the roof aren’t going to do anything bad. But that’s as much an argument as saying “perhaps I will win the lottery”.
This raises the issue of what exactly people mean by ‘catastrophic’. None of the examples you give are ‘catastrophic’ on anything like the scale of what some prophesy for global warming. I personally think it is a misuse of the word catastrophe to apply it to the situations you describe. If global warming were only forecast to cause problems on that sort of scale then I don’t think anyone would be seriously contemplating the kinds of measures often advocated to mitigate the risk.
The effects of improved Internet connectivity are having large positive effects on the entertainment industries and newspapers from the perspective of most people who aren’t incumbents in those industries, just as technological progress generally benefits societies as a whole while sometimes reducing the income of groups who made their living from the supplanted technologies that preceded them.
That’s because you’re cherry-picking. Having the Gulf Stream stop, one of the possible consequences of Arctic Ice melt, would be very unpleasant.
In other cases the effects we’re seeing are only the start of a chain of effects. The Pacific Trash Vortex is basically us dumping tiny plastic particles into our own food chain, ultimately poisoning ourselves. It’s bad in itself, but the knock-on effects will be worse. Sure, it still pales in comparison to some predicted AGW effects: that’s why the latter has become the more pressing issue.
These examples were direct responses to Silas, who meant to ridicule the initial instances I gave of the class of bad things that happen, on various scales, when we push the parameters of systems we understand poorly. Many of his own suggestions turn out not to be ridiculous at all, but rather serious matters.
The Gulf Stream makes the difference between Europe and the west coast of North America, not east coasts. Maybe it would be unpleasant, but a catastrophe?
I’ve heard claims that the gulf stream switching off would cause Britain to undergo a climate change that would have consequences I would call ‘catastrophic’, at least in the short term. Some predictions talk about average temperatures dropping by 5-8 C in a matter of months which would have severe consequences for British agriculture and would likely have a noticeable impact on GDP. I’m not sure I put much faith in those predictions however.
This would also be a catastrophe on a different scale from the more alarmist AGW predictions. We’re talking about a major disruption to the British economy but not an existential threat to the human race.
I thought you were using that as an example of a potential catastrophic effect of global warming, whereas I was saying none of your examples of things that have actually happened are what I would call catastrophic. I have heard some predictions of what might happen to the climate in Britain if arctic ice melt caused the gulf stream to stop and if those predictions were to pan out then I think ‘catastrophic’ would be an appropriate word to use for the consequences for Britain.
I don’t disagree that some of the predictions for the consequences of AGW are situations for which the word ‘catastrophic’ is appropriate. My point is that some of these predictions are an entirely different scale of disaster from anything you’ve given as an example of actual consequences of human activity to date. The Pacific Trash Vortex cannot reasonably be described as ‘catastrophic’ in my opinion, though dire predictions may exist that if they transpire might justify such language.
Based on the voting patterns, I’m going astray somewhere. We don’t seem to disagree on the facts (high CO2 levels, past environmental damage) and I’m not seeing arguments directed at my reasoning, beyond the criticism of “surface analogy” that I’ve done my best to address. So I’ll let this be my final comment on the topic, and hope to find insight in others’ discussion.
We quite agree there hasn’t yet been a catastrophe on the scale predicted for AGW: we wouldn’t be having this conversation if there had been. If you read the original post all over again, you’ll find that was its entire point. Don’t demand that particular proof.
We don’t want to play dictionary games with the word “catastrophe”. One constructive proposal would be to consider the cost to our economies of cleaning up one or the other of these environmental impacts—including their knock-on effects—versus the costs of prevention. We haven’t incurred the costs of the Trash Vortex yet, it’s not making itself felt to you; but it’s nevertheless a fact not a prediction, and we can base estimates on it.
The typical cost of cleaning up an oil spill seems to be on the order of $10M per ton. The Pacific garbage patch may contain as much as 100 million tons of plastic debris. As an order of magnitude estimate, one Trash Vortex appears to be worth one subprime crisis, albeit spread out over a longer period.
We’re clearly in Black Swan territory, and yet this is just one example picked almost at random (in fact, picked from what Silas took to be counterexamples).
Ok, I’ll try and make it more explicit. Your reasoning seems to be that our experience with complex systems that we don’t fully understand is that disrupting them has bad unintended consequences and therefore the burden of proof is on those who suggest that we don’t need to take drastic action to reduce CO2 levels.
I don’t think your conclusion follows from your premise, because it seems to me that there are no examples of bad unintended consequences that we haven’t been able to deal with at a manageable cost, and few examples that even end up with a net negative overall economic cost. The only reasonable argument for adopting the kind of drastic and hugely expensive measures necessary to significantly reduce CO2 levels is that the potential effects are so catastrophic that we can’t afford to risk them. There are no examples of similar situations in the past, though as you rightly point out that is not strong evidence that such situations cannot happen since we might not be around to discuss the issue if they had. On the other hand there are lots of examples of dire/catastrophic predictions that have failed to pan out, although in some cases mitigating action has been taken that means we haven’t had the control experiment of doing nothing.
It seems to me that the burden of proof is still very much on those who argue we must take very economically costly actions now because unlike previous problems which have turned out to be relatively cheap to deal with this problem poses a significant risk of genuine catastrophe.
It’s also important to consider the cost of doing nothing and dealing with the consequences. The trash vortex is a problematic example to use here because there have not been any significant bad consequences yet. It may be a fact that it exists but I haven’t found any estimates of the economic cost it is imposing right now and only vague warnings of possible higher pollutant levels in future.
If the cost of doing nothing about CO2 levels were similar to the cost we appear to be paying for doing nothing about the Pacific Trash Vortex then it would be a no brainer to do nothing about CO2 levels.
Ah, that particular idea, that all human pleasures are harmful to the environment, is pretty much religious. It’s not at all what the actual impact looks like.
Computing is basically blameless in the direct sense for global warming. We should probably enjoy it as much as possible. Electricity is good. Trains are good. Holidaying is good.
Air conditioning is bad. Air travel is bad. Short product lifetime is bad.
The situation is far more positive than some make it out to be. Even the direst climate change predictions necessitate drastic changes in only some aspects of life.
AGW can’t take away modern medicine or virtual reality from you.
Why do you think “harmful for the environment” means “leading to global warming”? Lots of things are harmful for the environment. Drying swamps to make railroads harms it. Holidaying leads to decreased “old habitat” biodiversity. Building power plants on small mountain rivers leads to decreased biodiversity, too. Yes, these things are good for us. It just has no bearing on whether they are good for nature.
My favorite one: burning wood for heat. Better than fossil fuels for the GW problem, but really bad for local air quality.
Of course, “leading to global warming” is a subset of “harmful for the environment”. Agreed on all counts.
Computing can’t harm the environment in any way—it’s within a totally artificial human space.
The others (“good”) can harm the environment in general, but are much better for AGW.
Well...
You claim there are significant issues with the climate science process, but admit there are no journal articles criticizing the process. If you know enough to find faults with their science, why haven’t you yourself written an article on the matter?
Do you think there is something inherent in the culture of climatology that introduces these anti-Bayesian biases? Why is climate science subject to this when other sciences are not?
Are you saying the field is systemically politically driven from the top down?
Have you followed the climategate email leak story at all? One of the more damning themes in the leaked emails is the discussion of ways to keep dissenting views out of the peer reviewed journals. One of the stronger arguments used against AGW skeptics was that there were not more papers supporting their claims in peer reviewed journals. Given the prevalence of this argument, clear evidence of efforts to keep ‘dissenting’ opinions out of the main peer reviewed journals is a big problem for the credibility of climate science. For example:
And this comment is also rather damning:
What, specifically, is “damning” about those quotes?
Suppose creationists took over a formerly respected biology journal. Wouldn’t you expect to find quotes like the above (with climate sceptics replaced by creationists) from the private correspondence of biologists?
AGW skeptics have often been challenged on the lack of peer reviewed papers in credible climate science journals supporting their arguments. Now it is quite possible that this is the case because skeptical papers have been rejected purely due to being bad science (as is the case with the lack of papers supporting the effectiveness of homeopathy in medical journals). However, the absence of papers from the key journals cannot be treated as independent evidence of the badness of the science if there is a concerted effort by AGW believers to keep such papers out of the journals.
It is legitimate to attack the science the AGW skeptics are doing. It is not legitimate to dismiss the science purely on the basis that they have not been published in peer reviewed journals if there is a concerted effort to keep them out of peer reviewed journals based on their conclusions rather than on their methods. Now I’m sure the AGW believers feel that they are rejecting bad science rather than rejecting conclusions they don’t like but emails like the above certainly make it appear that it is the conclusions as much as the methods that they are actually objecting to.
In my opinion the CRU emails mean that it no longer appears justified to ignore claims by AGW skeptics purely because they have not appeared in a peer reviewed journal. They may still be wrong but there is sufficient evidence of biased selection by the journals to not trust that journal publication is an unbiased signal of scientific quality.
Agreed. “No peer-reviewed publications” is not an argument that I’ve ever used or would use, even in advance of the CRU emails, because of course that is how academia works in general.
For the most part, I don’t think you’re quite answering my question.
You present two explanations for the lack of peer-reviewed articles that are sceptical of the scientific consensus on global warming. The first is that there is unjust suppression of such views. The second is that such scepticism is based on bad science. You say that you think the leaked emails support the first explanation, and that there is sufficient evidence of biased (I’m guessing “biased” means “unmerited by the quality of the science” here) selection by journals. What is that sufficient evidence? More specifically, how does the information conveyed by the leaked emails distinguish between the first and second scenarios?
This addresses my questions, but I was asking for more specifics. Let A = “AGW sceptics are being suppressed from journals without proper evaluation of their science” and B = “AGW sceptics are being suppressed from journals because their science is unsound”. Let E be the information provided by the email leaks. How do you get to the conclusion that the likelihood ratio P(E|A)/P(E|B) is significantly above 1?
Personally I can’t see how the likelihood ratio would be anything but about 1, and it seems to me that those who act if the ratio is significantly greater than 1 are simply ignoring the estimation of P(E|B) because their prior for P(B) is small.
(EDIT: I originally wrote P(A|E) and P(B|E) instead P(E|A) and P(E|B). My text was still, apparently, clear enough that this wrong notation didn’t cause confusion. I’ve now fixed the notation.)
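The update being discussed can be sketched numerically; the prior odds and likelihood ratios below are invented purely to show the mechanics:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Hypothetical prior odds of A ("suppression without proper evaluation")
# over B ("suppression because the science is unsound").
prior_odds = 0.2  # i.e. P(A)/P(B) = 1:5 before the leak

# If the e-mails are roughly equally expected under both hypotheses,
# P(E|A)/P(E|B) is about 1 and the evidence barely moves the odds.
print(posterior_odds(prior_odds, 1.0))   # ~0.2: essentially no update

# Only a likelihood ratio well above 1 would shift the odds appreciably.
print(posterior_odds(prior_odds, 10.0))  # ~2.0: A now favoured
```

The point of the odds form is that disagreement about the leak reduces entirely to disagreement about the likelihood ratio, not about anyone’s prior.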
I do think the likelihood ratio is significantly above 1. This is based on reading some of the emails, documents and code comments in the leaks. Here’s a reasonable summary of the emails. It looks like dubious science to me. I find it hard to understand how anyone can claim otherwise unless they are ideologically motivated. If you genuinely can’t see it then I’m not really interested in arguing over minutiae so we’ll just have to leave it at that.
It seems to me that AGW skeptics made a variety of claims that AGW believers dismissed as paranoid: there was a conspiracy to keep skeptical papers out of the journals; there were efforts to damage the careers of climate scientists who didn’t ‘toe the party line’; there were dubious and possibly illegal efforts to keep the original data behind key papers out of the hands of skeptics despite FOI regulations. I did not see many AGW believers prior to the climategate emails saying “Yes, of course all of that happens, that’s just the way science operates in the real world”.
When the CRU leaks became public and substantiated all the ‘paranoid’ claims above, including proof of illegal destruction of emails and data to avoid FOI requests, I find it suspicious when people claim that it doesn’t change their opinions at all. The standard response seems to be “Oh yes, that’s just how science works in the real world. I already knew scientists routinely engage in this sort of behaviour and the degree of such behaviour revealed in the emails is exactly in line with my prior expectations so my probability estimates are unchanged”. That seems highly suspect to me and looks an awful lot like confirmation bias.
You’re still talking about how the e-mails fit into the scenario of fraudulent climate scientists, that is, P(E|A) by my notation. I specifically said that I feel P(E|B) is being ignored by those who claim the e-mails are evidence of misconduct. Your link, for example, mostly lists things like climatologists talking about discrediting journals that publish AGW-sceptical stuff, which is exactly what they would do if they, in good faith, thought that AGW-scepticism is based on quack science. Reading the e-mails and concluding that sceptical papers are being suppressed without merit seems like merely assuming the conclusion.
(Regarding the FOI requests, that might indeed be something that might reasonably set off alarms and significantly reduce P(E|B) - if you believe the sceptics’ commentaries accompanying the relevant quotes. But googling for “mcintyre foi harassment” and doing some reading gives a different story.)
(EDIT: Fixed notation, as in the parent.)
My impression from reading the emails is that different standards are being applied to the AGW skeptics because of their conclusions rather than because of their methods. At the same time there is evidence of data massaging and dubious practices around their own methods in order to match their pre-conceived conclusions. The whole process does not look like the disinterested search for truth that is the scientific ideal.
My P(B|E) would be higher if I read emails that seemed to focus on methodological errors first rather than proceeding from the fact that a journal has published unwelcome conclusions to the proposal that the journal must be boycotted.
I think there’s too much attention paid to the emails, and not enough to all of the publicly available information about the exact same events. Maybe it’s because private communications seem like secret information that contain the hidden truth, or maybe it’s just a cascade effect where everyone focuses on the emails because everyone is focusing on the emails.
The second email that you quoted is in response to the publication of a skeptical article by Soon & Baliunas (2003) in the journal Climate Research which generated a big public controversy among climate scientists. Reactions to that publication include several editors of the journal resigning in protest (and releasing statements about why they resigned), the publisher of the journal writing a letter admitting that the article contained claims that weren’t supported by the evidence (pdf), and a scientific rebuttal to the article being published later that same year. I think that you get a better sense of what happened (and whether climate scientists were reacting to the methods or just the conclusions) by reading accounts written at the time than from the snippets of emails. And of course there’s Wikipedia.
Would you expect to see evolutionary biologists discuss the methodological errors of creationist arguments in private correspondence?
(I don’t think this is the place for this, since I don’t think we’re getting anywhere.)
Upvoted for the parenthetical.
FOI requests? Which ones? Those for proprietary data sets that they weren’t allowed at that time to release, or the FOI requests for information available from a public FTP site?
Voted you up not for your particular assessment of P(E|A)/P(E|B) but for using this pattern of assessing evidence to guide the conversation.
For the same reason I haven’t personally solved every injustice: a) time constraints, and b) others are currently raising awareness of this problem.
Other sciences are affected by anti-Bayesian biases, and this will be a tendency in proportion to the difficulty of finding solid evidence that your theory is wrong. Which is why I claim e.g. sociology and literature are mostly a waste of time.
Generally speaking, science is in some ways too strict and some ways not strict enough. Eliezer_Yudkowsky has actually pointed out before the general failure to appropriately teach rationality in the classroom, and so scientists in general aren’t aware of this problem.
Politics, of course, does play a part. When it’s not just about “who’s right” but about “who gets to control resources”, then the biases go into hyperdrive. People aren’t just pointing out problems with your research, they’re fighting for the other team! The goal is then about proving them wrong, not stopping to check whether your theory is correct in the first place. (“Ask whether, not why.”)
I basically agree with SilasBarta. If you look carefully, what’s going on in climate science is absolutely appalling.
One can ask a simple probability question: Given that a climate simulation matches history, what is the probability that it will accurately predict the future?
Another question: What evidence is there that climate simulations are accurate besides the fact that they match history?
And another question: If you take 10 or 15 iffy climate simulations, average them, and then use a bootstrap or equivalent method to produce a 95% confidence interval, are you actually accomplishing anything?
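On the third question, a toy illustration (all numbers invented): if every simulation in the ensemble shares a common structural bias, a bootstrap confidence interval around their average quantifies only their mutual spread, not their collective error:

```python
import random
import statistics

random.seed(1)

true_value = 0.0    # the quantity the models are trying to predict
shared_bias = 1.0   # a structural error common to every model in the ensemble
models = [true_value + shared_bias + random.gauss(0, 0.2) for _ in range(15)]

# Bootstrap a 95% confidence interval for the mean of the model ensemble.
boot_means = []
for _ in range(10_000):
    sample = [random.choice(models) for _ in models]
    boot_means.append(statistics.mean(sample))
boot_means.sort()
lo, hi = boot_means[250], boot_means[9750]  # 2.5th and 97.5th percentiles

# The interval is tight around the biased ensemble mean and excludes the truth.
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")
print(lo <= true_value <= hi)  # the shared bias is invisible to the bootstrap
```

The resampling only ever sees the fifteen biased values, so the interval can be as narrow as you like while being centred in the wrong place, which is exactly the worry raised above.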