Carbon dioxide, climate sensitivity, feedbacks, and the historical record: a cursory examination of the Anthropogenic Global Warming (AGW) hypothesis
Note: In this blog post, I reference a number of blog posts and academic papers. Two caveats to these references: (a) I often reference them for a specific graph or calculation, and in many cases I’ve not even examined the rest of the post or paper, while in other cases I’ve examined the rest and might even consider it wrong, (b) even for the parts I do reference, I’m not claiming they are correct, just that they provide what seems like a reasonable example of an argument in that reference class.
Note 2: Please see this post of mine for more on the project, my sources, and potential sources for bias.
In a previous post, I attempted simple time series forecasting for temperature from the outside view, i.e., as a complete non-expert would. I introduced carbon dioxide concentrations as an explanatory variable near the end of the post, but did not consider in detail the mechanisms through which carbon dioxide concentrations affect temperature. In this post, I switch to what Eliezer Yudkowsky has called the weak inside view. That’s something like the inside view, but without knowledge of all the relevant details. Obviously, I’m somewhat constrained here: I can’t take the full inside view because I don’t know enough about the atmospheric system (partly because the state of human knowledge about the atmospheric system is incomplete, and partly because I know only a very minuscule fraction even of that small amount of human knowledge). But I think that the weak inside view also offers an alternate perspective to the inside view and is valuable in its own right.
One of the reasons I felt the need to switch from the outside view is that the issue is sufficiently complex, but at the same time, the component phenomena are sufficiently well-enumerated, that a weak inside view can help. My initial framing of the issue was in terms of separating the roles of theory and evidence in belief in anthropogenic global warming. But a better weak inside view led me to the conclusion that most of the debate didn’t center around the greenhouse effect or the level of direct radiative forcing at all; rather, it focused on the magnitude of positive feedback to greenhouse gas forcing (and therefore, the value of climate sensitivity) and the attribution of recent warming to greenhouse gas forcing versus other phenomena (such as the Pacific Decadal Oscillation and variation in solar activity). Although the theory-versus-evidence framing is still illuminating, I felt that a serious exploration of the issue would have to take at least a cursory look at the leading competing hypotheses.
My approximate takeaway
Overall, I feel that there is considerable uncertainty about the level of positive feedback and therefore about climate sensitivity (the magnitude of temperature increase that would result from a doubling of atmospheric carbon dioxide). The estimates supported by skeptics fall at the low end of the range of uncertainty, and the stories they tell are all quite plausible and consistent with the science. But the IPCC estimate range already includes (or just barely misses) the skeptic range (1.5-4.5 C for the IPCC versus 1.3-2 C for the main skeptic blogs). While the stories put forth by skeptics are consistent with the science, other stories, including stories of substantially larger warming than the median estimate put forth by the IPCC, seem consistent as well. I don’t see strong evidence that the median estimate of the IPCC (3 C) is wrong (or evidence that it’s right).
I do think that the IPCC consensus estimates underestimate the probability of lower warming, i.e., the models have too narrow a range, at least on the low-warming side. I don’t have sufficient knowledge of whether they are underestimating the probability of high climate sensitivities as well. That could well be the case.
The political, institutional and bureaucratic incentives and constraints of the players involved in the debate did inform my views, but since my overall conclusion is so fuzzy anyway, it probably didn’t affect my bottom line. Again, for simplicity, I avoid explicit discussion of these in the post. I might discuss them in a later post.
I think that temperature trends in the coming 10-15 years will allow us to improve our estimates of climate sensitivity considerably. If warming continues to be as slow over the next 15 years as it has been over the past 15 years, I would incline to the low climate sensitivity estimates put forth by skeptics. If the warming trend returns to the 1978-1998 rate, then I would incline to medium or high climate sensitivity estimates. It would be nice to operationalize this and come up with statements like “If we see less than this much warming over the next few years, then I’ll reduce my confidence in the model by this much” but I don’t think my mastery of the numbers is good enough to make that sort of statement.
Okay, now on to the stuff!
An overview of the AGW hypothesis
The Anthropogenic Global Warming (AGW) hypothesis can be broken down into three simple steps:
Human activity, specifically emissions of greenhouse gases (particularly carbon dioxide), is responsible for increasing the concentration of greenhouse gases, specifically carbon dioxide, in the atmosphere.
The increased concentration of carbon dioxide causes the earth to trap more outgoing thermal radiation, and thereby causes the earth to become warmer than it otherwise would be (aka the greenhouse effect).
Over the decadal to centennial timescale, temperature exhibits positive feedbacks, i.e., slight increases in temperature beget further increases in temperature. Two examples of such positive feedbacks are the water vapor feedback and the ice-albedo feedback (both described later in the post). As a result, the actual level of warming would be more than predicted directly from the increased trapping of radiation.
There are other aspects commonly associated with the AGW hypothesis, such as the view that at the current margin, more warming increases the frequency of extreme-weather events. For simplicity, I will not discuss these in the post. Of course, for an evaluation of the impact of global warming on the environment or on society, an examination of this aspect would matter considerably.
Preliminary question: Does it make sense to talk of global temperature?
How do we measure whether the earth system is warming? In my previous post, I considered global mean surface temperatures, as measured through many different proxies. At the time, I wasn’t concerned with the meaning of those temperatures, because the purpose was simply to use them in time series forecasting. Now that we are getting to the mechanisms involved, the significance of mean surface temperatures starts becoming more relevant.
A paper by Christopher Essex, Bjarne Andresen, and Ross McKitrick in the Journal of Non-equilibrium Thermodynamics takes issue with the very idea of a global mean surface temperature. My first reaction to the paper was one of skepticism. Surely, it’s not wrong to use the average to keep track of how temperatures are changing? It turned out that the paper covered most of my prima facie concerns. Specifically:
I already agreed with the point made in the paper that the global mean surface temperature has no intrinsic physical meaning, whereas the average energy in the system might. But I had thought that ceteris paribus, changes in either reflect changes in the other. The paper made some arguments against that view (specifically, noting that pressure changes also matter and were often of comparable magnitude to, or larger than, temperature changes). I don’t think I understand the science well enough to offer clear judgment on this point.
The paper argued that weather phenomena are driven by temperature gradients more than by absolute temperatures, and changes in the mean temperature are a poor way of tracking temperature gradients. This is something I hadn’t thought about explicitly, though I do expect that temperature levels have some effect on the type of temperature gradient phenomena we might see. But it’s a point worth noting that focusing on the mean temperature may be a poor way of thinking about the actual weather phenomena at hand.
The paper pointed out that there are many averaging choices other than the simple mean, each of which has a slightly different justification, and that the same data could show a warming or cooling trend based on the choice of averaging process used. I agreed with this, but I didn’t think it affected the real-world temperature record.
However, the authors used real-world temperature data and two actual choices of means to show how they could produce opposite trends. My main concern is that the two choices of means the authors chose may have been cherry-picked (one of them used negative exponents).
Overall, the paper raises some interesting points, but I’m not convinced. I’m still mulling over it, but in the meantime, I will operate within the paradigm where global mean surface temperatures are a meaningful indicator of how warm the world is. In doing so, I am deferring to both conventional wisdom and my own crude prior intuition that standardized averages carry some sort of value.
An overview of #1: Are carbon dioxide concentrations increasing, and is human activity responsible?
This thesis does not seem controversial. Measured carbon dioxide concentrations have been increasing according to a wide range of measurements, the most famous and reliable of which is the Keeling curve, based on continuous measurements since 1958 at Hawaii:
There are also ways of attempting to reconstruct historic carbon dioxide levels in the atmosphere using proxies, and the general view is that carbon dioxide levels started rising around the time of the Industrial Revolution, and the rate of change was unprecedented. In a blog post attempting to compute equilibrium climate sensitivity, Jeff L. finds that the 1832-1978 Law Dome dataset does a good job of matching atmospheric carbon dioxide concentration values with the Mauna Loa dataset for the period of overlap (1958-1978), so he splices the two datasets for his analysis (note: commenters on the post pointed out many problems with it, and while I don’t know enough to evaluate it myself, my limited knowledge suggests that the criticisms are spot on; however, I’m using the post just for the carbon dioxide graph):
Overall, the story checks out at every level:
Prior to the advent of fossil fuels, the main source of carbon dioxide emissions in the atmosphere was the oxidation of food. But this food had to be prepared through processes that absorbed carbon dioxide in the same amounts (i.e., photosynthesis). So the level of carbon dioxide was regulated in that fashion, and remained stable.
With the advent of fossil fuels, “food” that had been prepared long ago and over millions of years was being burned and released to the atmosphere in a short span of years. Thus, the release of carbon dioxide to the atmosphere was greater than the biosphere’s ability to absorb it back.
Accounting for the changes in carbon dioxide concentrations shows that carbon dioxide concentrations have risen by a level about half of what emissions are pumping into the atmosphere. This is consistent with the idea that natural sinks (such as plants and the ocean) are still siphoning away some of the excess carbon dioxide, but not all of it.
As far as I understand, these facts are not in much dispute, though there is some uncertainty regarding the timescale over which the excess carbon dioxide will eventually be relinquished by the atmosphere. Could it be centuries or millennia? Either way, it probably doesn’t affect decadal predictions.
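The “about half” accounting can be sketched as a back-of-the-envelope calculation. The conversion factor (roughly 2.13 gigatonnes of carbon per ppm of atmospheric CO2) is a standard figure, but the emission and concentration numbers below are round illustrative values, not data from any of the sources referenced in this post:

```python
# Back-of-the-envelope airborne-fraction check.
# ~2.13 gigatonnes of carbon (GtC) corresponds to 1 ppm of atmospheric CO2.
GTC_PER_PPM = 2.13

def airborne_fraction(emissions_gtc, observed_rise_ppm):
    """Fraction of emitted CO2 that stays in the atmosphere."""
    potential_rise_ppm = emissions_gtc / GTC_PER_PPM
    return observed_rise_ppm / potential_rise_ppm

# Illustrative round numbers: ~9 GtC/yr emitted, ~2.1 ppm/yr observed rise.
frac = airborne_fraction(emissions_gtc=9.0, observed_rise_ppm=2.1)
print(f"Airborne fraction: {frac:.2f}")  # roughly 0.5, i.e., about half
```

The point is just that the observed rise is about half of what the emissions alone would produce, consistent with natural sinks absorbing the rest.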
Here’s what Skeptical Science (which, despite its name, is a website devoted to criticizing global warming skepticism) says:
There are many lines of evidence which clearly show that the atmospheric CO2 increase is caused by humans. The clearest of these is simple accounting—humans are emitting CO2 at a rate twice as fast as the atmospheric increase (natural sinks are absorbing the other half). There is no question whatsoever that the CO2 increase is human-caused. This is settled science.
Global warming skeptic Dr. Roy Spencer describes his agreement with the general consensus as follows:
8 ) Is Atmospheric CO2 Increasing? Yes, and most strongly in the last 50 years…which is why “most” climate researchers think the CO2 rise is the cause of the warming. Our site measurements of CO2 increase from around the world are possibly the most accurate long-term, climate-related, measurements in existence.
9) Are Humans Responsible for the CO2 Rise? While there are short-term (year-to-year) fluctuations in the atmospheric CO2 concentration due to natural causes, especially El Nino and La Nina, I currently believe that most of the long-term increase is probably due to our use of fossil fuels. But from what I can tell, the supposed “proof” of humans being the source of increasing CO2 — a change in the atmospheric concentration of the carbon isotope C13 — would also be consistent with a natural, biological source. The current atmospheric CO2 level is about 390 parts per million by volume, up from a pre-industrial level estimated to be around 270 ppm…maybe less. CO2 levels can be much higher in cities, and in buildings with people in them.
10) But Aren’t Natural CO2 Emissions About 20 Times the Human Emissions? Yes, but nature is believed to absorb CO2 at about the same rate it is produced. You can think of the reservoir of atmospheric CO2 as being like a giant container of water, with nature pumping in a steady stream into the bottom of the container (atmosphere) in some places, sucking out about the same amount in other places, and then humans causing a steady drip-drip-drip into the container. Significantly, about 50% of what we produce is sucked out of the atmosphere by nature, mostly through photosynthesis.
Quantifying the responsiveness of temperature to carbon dioxide concentrations: equilibrium climate sensitivity and transient climate response
In a simple model involving the sun, the earth, and an atmosphere with some concentration of carbon dioxide, the equilibrium temperature attained (measured in the Kelvin scale) varies linearly with the logarithm of the concentration of carbon dioxide (note that there are other greenhouse gases for which the dependence has a more complicated functional form). Therefore, the additive change in equilibrium temperature is proportional to the logarithm of the multiplicative change in the concentration of carbon dioxide. For instance, here’s Wikipedia.
It is reasonable to extrapolate from this that, even in the presence of feedbacks, the relationship between carbon dioxide concentration and temperature remains logarithmic; the coefficient just gets scaled appropriately. This makes sense if the feedbacks respond roughly in proportion to the temperature change itself, rather than to the particular forcing that caused it.
Thus, the following question makes sense: if we double atmospheric carbon dioxide concentrations, what is the additive effect on temperature? The answer to that question is sometimes termed the equilibrium climate sensitivity (ECS).
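The logarithmic relationship can be written as change in temperature = S × log2(C_new/C_old), where S is the climate sensitivity. Here’s a minimal sketch; the 3.0 C default is just the IPCC median used as an example, not an endorsement of that value:

```python
import math

def equilibrium_warming(c_new_ppm, c_old_ppm, sensitivity_c=3.0):
    """Equilibrium temperature change (C) for a change in CO2 concentration,
    assuming warming scales with the log of the concentration ratio.
    sensitivity_c is the warming per doubling (ECS); 3.0 C is the IPCC median."""
    return sensitivity_c * math.log2(c_new_ppm / c_old_ppm)

# A doubling (280 -> 560 ppm) yields exactly the sensitivity:
print(equilibrium_warming(560, 280))  # 3.0
# The rise from pre-industrial ~280 ppm to ~400 ppm yields about half that:
print(round(equilibrium_warming(400, 280), 2))  # 1.54
```

Note how the logarithm makes each successive increment of CO2 matter less: going from 280 to 400 ppm already “uses up” about half the warming of a full doubling.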
Note that the warming to reach equilibrium climate sensitivity doesn’t happen immediately, so even after carbon dioxide concentrations double, it could take several decades for the temperature to warm to the new equilibrium. The term for the amount by which temperature has gone up at the time carbon dioxide doubles, if carbon dioxide is rising at 1% per year, is the transient climate response (TCR). This is usually over half the ECS but still well short of the full ECS (I’m currently too lazy to fish for more information on the relation between TCR and ECS).
In this post, when I talk of “climate sensitivity” I am by default referring to ECS. Note, however, that as a general rule, models that have higher ECS will also have higher TCR.
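As a side note, the 1% per year growth rate in the TCR definition pins down the timescale over which the transient response is measured, since it implies a fixed doubling time. This is just standard compound-growth arithmetic, not anything specific to the sources above:

```python
import math

# Years for CO2 concentration to double at 1% compound growth per year:
doubling_time = math.log(2) / math.log(1.01)
print(round(doubling_time, 1))  # about 70 years
```

So the TCR is, roughly, the warming realized after about 70 years of steadily rising CO2, before the slower parts of the system have equilibrated.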
An overview of #2: the greenhouse effect
As far as I understand, the basic mechanics of the greenhouse effect are not in dispute, nor are the numerical estimates of how much warming there would be without feedbacks. On the American Geophysical Union blog, Dan Satterfield writes:
Climate sensitivity is an important and often poorly understood concept. Put simply, it is usually defined as the amount of global surface warming that will occur when atmospheric CO2 concentrations double. These estimates have proven remarkably stable over time, generally falling in the range of 1.5 to 4.5 degrees C per doubling of CO2.* Using its established terminology, IPCC in its Fourth Assessment Report slightly narrowed this range, arguing that climate sensitivity was “likely” between 2 C to 4.5 C, and that it was “very likely” more than 1.5 C.
The wide range of estimates of climate sensitivity is attributable to uncertainties about the magnitude of climate feedbacks (e.g., water vapor, clouds, and albedo). Those estimates also reflect uncertainties involving changes in temperature and forcing in the distant past. But based on the radiative properties, there is broad agreement that, all things being equal, a doubling of CO2 will yield a temperature increase of a bit more than 1 C if feedbacks are ignored.
Skeptical Science says:
Climate sensitivity describes how sensitive the global climate is to a change in the amount of energy reaching the Earth’s surface and lower atmosphere (a.k.a. a radiative forcing). For example, we know that if the amount of carbon dioxide (CO2) in the Earth’s atmosphere doubles from the pre-industrial level of 280 parts per million by volume (ppmv) to 560 ppmv, this will cause an energy imbalance by trapping more outgoing thermal radiation in the atmosphere, enough to directly warm the surface approximately 1.2°C. However, this doesn’t account for feedbacks, for example ice melting and making the planet less reflective, and the warmer atmosphere holding more water vapor (another greenhouse gas).
An overview of #3: feedbacks
This is where things get most interesting. Both empirically and theoretically, there is good reason to believe that over the decadal to centennial time scale, the climate system exhibits positive feedback to temperature changes. So if carbon dioxide levels double and cause a direct increase of about 1 C, the actual increase, accounting for positive feedbacks, would be more.
Three feedback mechanisms often mentioned are:
Water vapor feedback (positive): When the temperature rises, this increases the amount of water vapor the atmosphere can hold, and therefore also increases the amount of water vapor the atmosphere does hold. Water vapor is a greenhouse gas, and therefore absorbs more heat, causing the temperature to rise further. This is a positive feedback loop because an increase in temperature facilitates a further increase in temperature.
Ice-albedo feedback (positive): Cooling causes more water to freeze, increasing the fraction of the surface covered with ice. Ice reflects more heat, therefore resulting in less absorption of heat by the earth, causing the temperature to drop further. This is a positive feedback loop because a decrease in temperature facilitates a further decrease in temperature.
Cloud feedback (uncertain, generally believed to be positive over the decadal/centennial time scale): Changes in the temperature and water vapor level can result in changes in the amount and composition of the cloud cover. The cloud cover plays an important role in how much sunlight enters the atmospheric system and gets absorbed.
The levels of all three feedback mechanisms are uncertain. For water vapor feedback and ice-albedo feedback, the sign is clear, but the magnitude is unclear. Cloud cover feedback is uncertain in both sign and magnitude. Put together, climate scientists generally believe that feedbacks are positive, but are uncertain as to their magnitude.
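A standard way to express how feedbacks amplify the no-feedback response is total warming = direct warming / (1 − f), where f is the net feedback fraction. This is the textbook linear-feedback formula, not something taken from the sources quoted above; the 1.2 C direct warming is the no-feedback figure from the Skeptical Science quote:

```python
def amplified_warming(direct_warming_c, feedback_fraction):
    """Linear feedback amplification: total warming = direct / (1 - f).
    f > 0 amplifies (positive feedback); f < 0 damps (negative feedback)."""
    assert feedback_fraction < 1, "f >= 1 would imply runaway warming"
    return direct_warming_c / (1 - feedback_fraction)

direct = 1.2  # no-feedback warming per CO2 doubling (C), per the quote above
for f in (-0.2, 0.0, 0.25, 0.5, 0.7):
    print(f"f = {f:+.2f} -> {amplified_warming(direct, f):.1f} C")
```

Modest differences in the assumed feedback fraction span the whole contested range: f around 0.1-0.25 gives the skeptic estimates of 1.3-1.6 C, while f around 0.6-0.7 gives 3-4 C. This is one way to see why the dispute over feedbacks, rather than over the greenhouse effect itself, drives the dispute over sensitivity.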
Where do skeptics stand in relation to the IPCC here? As quoted above, the IPCC estimates climate sensitivity of 1.5 to 4.5 C, with a median estimate of about 3 C. The interval offered by skeptics is narrower and on the lower end, but even among skeptics, the view that feedback is net negative seems a minority view. For instance, browsing the climate sensitivity category on Watts Up With That?, a top climate skeptic blog, I found references to papers estimating climate sensitivity values of 1.3 C, 1.8 C, and 2 C.
UPDATE: I found a blog post by Pat Michaels here that suggests that recent published estimates (since 2010) of climate sensitivity have been around the 2 C median mark. The infographic is below. I don’t know how reliable this data is (Michaels is a global warming skeptic who has received funding from fossil fuel industries, but this infographic doesn’t seem that easy to fudge). Alternative sources are welcome.
Where do climate sensitivity estimates come from?
On the climate system side, the main source of difference in opinion on the amount of global warming that will unfold seems to be due to difference in beliefs about climate sensitivity. (There’s another source of uncertainty, namely the level of emissions going into the future. We’ll ignore this aspect in the post, though again, from the “what can/should we do about global warming” angle, that becomes quite relevant).
Broadly, there does not seem to be a single compelling theoretical argument for a particular climate sensitivity estimate. So the case for a particular value or range of climate sensitivity generally rests on what my friend Jonah Sinick has called many weak arguments. In principle, many weak arguments should work better in demonstrating facts about the climate system than one relatively strong argument. Of course, the arguments shouldn’t be so weak that they basically collapse.
So what are the weak arguments in favor of a particular value or range of climate sensitivity, such as the middle of the IPCC range?
Here’s what Skeptical Science says:
Some global warming ‘skeptics’ argue that the Earth’s climate sensitivity is so low that a doubling of atmospheric CO2 will result in a surface temperature change on the order of 1°C or less, and that therefore global warming is nothing to worry about. However, values this low are inconsistent with numerous studies using a wide variety of methods, including (i) paleoclimate data, (ii) recent empirical data, and (iii) generally accepted climate models.
Data on sensitivity to greenhouse gas forcing in recent times is relatively limited (or more specifically, as I pointed out in my earlier post, the recent data alone paint a very inconclusive picture).
3(i): The use of data from alternate sources of radiative forcing
A lot of exercises that attempt to estimate climate sensitivity do so by looking at other sources of forcing, such as volcanic eruptions (that produce a cooling effect) and variations in solar activity. With the key assumption that the magnitude of the feedback does not depend on the source of forcing, estimates for the size of the feedback in these cases can be used to estimate the climate sensitivity. This assumption is the mainstream view. Skeptical Science says:
In other words, if you argue that the Earth has a low climate sensitivity to CO2, you are also arguing for a low climate sensitivity to other influences such as solar irradiance, orbital changes, and volcanic emissions. In fact, as shown in Figure 1, the climate is less sensitive to changes in solar activity than greenhouse gases. Thus when arguing for low climate sensitivity, it becomes difficult to explain past climate changes. For example, between glacial and interglacial periods, the planet’s average temperature changes on the order of 6°C (more like 8-10°C in the Antarctic). If the climate sensitivity is low, for example due to increasing low-lying cloud cover reflecting more sunlight as a response to global warming, then how can these large past climate changes be explained?
In particular, one of the lines of evidence for current consensus values of climate sensitivity is historical data on the level of warming or cooling in response to forcings due to volcanic eruptions or variations in solar activity.
The point about the nature of feedbacks being independent of whether the radiative forcing is due to solar activity or carbon dioxide concentrations or volcanic eruptions has been disputed by some. See, for instance, here and here. I’m not qualified to judge the validity of these objections.
3(ii): Direct estimation of greenhouse gas forcing from the recent temperature record, and alleged confounding by other factors
The simplest model would presume that the trend of rising temperatures 1975-1998 can be attributed primarily to greenhouse gas forcing. If we attribute it all to greenhouse gas forcing, we get fairly high estimates for climate sensitivity. If, on the other hand, we attribute it to a mix of greenhouse gas forcing and other factors (discussed below) we get climate sensitivities at the low end of the scale. In neither case is there a dispute over the existence of the greenhouse effect, or even over the existence of feedbacks. But there is dispute over how much of the already observed temperature rise can be attributed to greenhouse gas forcing.
The lack of a single compelling explanation for the recent pause (or slowdown) in global warming (i.e., the very slow rate of warming since about 1998) is the main Achilles heel of this theory. Note that 1998 was itself an unusually warm year due to El Nino, so the lack of warming for a few years after that was not surprising, but the lack of warming after this many years is a puzzle. Climate scientists often call it the problem of the “missing heat” (the global mean surface temperature being taken as a proxy index for heat, though the paper questioning global mean surface temperature raises questions about that connection). Fabius Maximus lists a number of possible reasons here. For most of these reasons, it seems to be the case that if the temperature fails to grow for another 10-15 years, climate sensitivity estimates would need serious downward revision.
3(ii) alternate theory (a): The oceans: deep oceans as sinks for the missing heat, the Pacific Decadal Oscillation (PDO), and Atlantic Multidecadal Oscillation (AMO)
The Pacific Decadal Oscillation has a positive phase, which leads to warming, and a negative phase, which leads to cooling. Historical data on the PDO isn’t too great, but each phase (positive and negative) is believed to last about 20-30 years. Don Easterbrook identified the phase dates as follows: 1915-1945 and 1979-1998 for positive phases, and 1880-1915, 1945-1977, and 1999-2014 for negative phases. He also showed that the phases he had identified were consistent with the temperature trends: warming occurred during the positive phases and cooling occurred during the negative phases. Easterbrook doesn’t seem to give much weight to the overall secular trend arising from greenhouse gas forcing, but it’s easy to modify his account to incorporate a stronger role for greenhouse gas forcing, as follows.
According to this theory, the observed temperature increase during the PDO’s positive phase is a combination of a secular trend of rising temperature due to greenhouse gas forcing, and the increase in temperature due to the PDO being in positive phase. When the PDO entered negative phase, the greenhouse gas forcing continued, but the PDO negative phase was now acting in the opposite direction, resulting in relatively stable temperatures. If we use a time period where the PDO was in positive phase and do not control for the PDO, then we’ll overestimate climate sensitivity. If we use a time period where the PDO was in negative phase and do not control for the PDO, then we’ll underestimate climate sensitivity (or may even ignore the secular trend of warming completely because it is successfully masked by the phase of the PDO).
One of the big arguments in favor of the PDO hypothesis is that it does a better job of explaining the pause (or slowdown) in global warming. Models based purely on greenhouse gas forcing didn’t predict the pause, but models based on the PDO did (though, of course, such models would need to make accurate predictions of the starting and ending years of the phases of the PDO, and I haven’t been able to track down explicit predictions made when the PDO was in positive phase about when it would switch to negative phase).
The Atlantic Multidecadal Oscillation (AMO) is somewhat similar to the PDO in ways relevant to the above discussion, though probably also different. The upshot is that the phase of the PDO/AMO may be controlling the rate of growth of global mean surface temperatures.
One of the common problems pointed out with the PDO/AMO theory is that ocean currents only move heat around. They can’t change the total heat in the system. So, how could they affect the global mean surface temperature? For instance, here is Skeptical Science’s take on the PDO.
Kevin Trenberth (who, many years ago, wondered in emails later leaked during Climategate about where the missing heat was going) has postulated that what the PDO/AMO do is to move heat down into the deep oceans, where it doesn’t show up in mean surface temperature measurements. The idea that the missing heat goes into the deep oceans was pointed out in a LessWrong comment as well as by an atmospheric science student in private correspondence. This is listed as (6) in Fabius Maximus’ list. It has been elaborated on in a paper titled An apparent hiatus in global warming by Kevin Trenberth and John T. Fasullo. Maximus also links to Trenberth’s article Has Global Warming Stalled? in The Conversation. Judith Curry blogged about another related paper co-authored by Trenberth here. [Note: My understanding of the papers co-authored by Trenberth may be quite inaccurate].
If heat is being transferred to the deep oceans due to the PDO/AMO, global warming will probably be back, with a vengeance, once the phase of the PDO/AMO changes. If the heat transfer is for reasons that aren’t governed by these oscillations, then heat may keep sinking into the oceans for a very long time. The oceans certainly have the thermal capacity to absorb all the excess heat, but whether they will actually do so is unclear.
A somewhat different view of the PDO/AMO is described in a paper by Marcia Wyatt and Judith Curry. They call their view the stadium wave hypothesis. From what I can understand, the PDO and AMO are both manifestations of a stadium wave that takes a long time to propagate. I am not clear on the differences between the stadium wave hypothesis and Trenberth’s deep ocean sink hypothesis as far as forecasts of future global mean surface temperatures are concerned.
Final note: Over the centennial time scale, the PDO-based model would predict that the temperature trend would be an additive superimposition of a PDO-based sinusoidal trend and a greenhouse gas forcing-based secular linear trend. At any given time, the bulk of the year-on-year change would be driven by the PDO phase, but over the centennial time scale, the role of carbon dioxide would dominate.
3(ii) alternate theory (b): variation in solar activity
Another theory, offered both in some mainstream quarters to explain the recent slowdown/pause in global warming and by some skeptics as an alternative to greenhouse gases to explain global warming, is that variation in solar activity is driving some of the year-to-year variation in temperature. As we can see from NASA’s sunspot cycle page, solar activity can be described as a combination of many cycles with different periods, the most notable of which is the 11-year cycle. But the heights of the peaks aren’t the same for all cycles, and the most recent peaks have been lower than earlier ones (see also here and here). The reduced recent activity after high activity in the recent past has been attributed to one of the hypothesized longer solar cycles, namely the 210-year Suess cycle (aka the de Vries cycle). Note that the magnitude and period of these longer solar cycles remain speculative because we don’t have enough data to be sure.
Overall, the sun might offer a weak explanation for the recent slowdown in warming (namely, the fact that the peak in 2014 was smaller than the peak around 2003) but otherwise, it does not fit temperature patterns well. Here is Skeptical Science on the sun:
Over the last 35 years the sun has shown a slight cooling trend. However global temperatures have been increasing. Since the sun and climate are going in opposite directions scientists conclude the sun cannot be the cause of recent global warming.
The only way to blame the sun for the current rise in temperatures is by cherry picking the data. This is done by showing only past periods when sun and climate move together and ignoring the last few decades when the two are moving in opposite directions.
3(iii): Climate models
The last of the three reasons Skeptical Science offers for taking the median IPCC climate sensitivity estimates seriously is that climate models predict that sort of sensitivity. I give very little weight to this reason, because the climate models have not done a good job of forecasting (see here). To the very limited extent that they have been able to forecast anything at all during the period of fast warming (1975-1998), a simple theory that “it’s going to warm” could have done about as well.
I’m not claiming that climate models are of no potential use, just that they are not strong enough to provide additional evidence in favor of a hypothesis that is, in some sense, built into the assumptions of those models in the first place. If the models are validated against empirical data (through measurement of their forecast skill relative to simple persistence-based models or random walks with drift), then I’ll accept them as additional evidence. At present, they are neither here nor there.
Piecing together the evidence
Overall, the case for the median IPCC estimate of 3 C seems reasonably strong as a median estimate, but the range of uncertainty is high. I believe that the IPCC confidence interval is too narrow, particularly at the low end (i.e., I would put a higher probability on zero feedback than the IPCC model does). I haven’t investigated the arguments for estimates at the high end, so I’m not sure whether the probability of high sensitivities has been overestimated or underestimated.
The main reason is the combined evidence from 3(i) and 3(ii). Though both are individually weak (because of the problems mentioned), in concert, they provide a reasonably compelling case.
While some of the evidence for 3(ii) will be collected naturally over the next few years, the case for 3(i) is less clear. How compelling are the arguments against the view that the level of feedback is independent of the source of forcing? And how reliable is the historical data that is used to estimate the level of feedback? If the evidence of 3(ii) weakens further, a closer examination of 3(i) would be warranted.
Finally, climate models aren’t good enough right now, but they could well become better (I discussed the challenges of decadal forecasting in this post). If a climate model, with appropriate initialization, is able to make skilled forecasts for the next few years, I’d give a lot more weight to what it has to say about the next few decades. However, it’s worth noting that the autocorrelation in climate makes the forecasting challenges for the near future different from those for the far future. So successful climate models aren’t in my view a necessary condition for demonstrating a particular climate sensitivity, but they would be a powerful source of evidence if they did work.
Looking for feedback
Since I’m quite new to climate science and (largely, though not completely) new to statistical analysis, it’s quite possible that I made some elementary errors above. Corrections would be appreciated.
It should be noted that when I say a particular work has problems, it is not a definitive statement that that work is false. Rather, it’s simply a statement of my impression, based on a cursory analysis, that describes the amount of credibility I associate with that work. In many cases, I’m not qualified enough to offer a critique with high confidence.
Basically the IPCC has a wider uncertainty interval in their estimate than the skeptics. Shouldn’t this lead one to conclude that a lot of skeptics who argue that the climate is hard to predict aren’t really serious about what they are saying?
To add to my previous comment: I do agree that some skeptics express very high confidence in low climate sensitivity values, more than is arguably warranted from the evidence (even if it is consistent with the evidence). Overconfidence in such estimates is a sign against being taken seriously. At the same time, conducting an exercise that comes up with a specific estimate, possibly just as an illustration of the sort of thing that seems to be true, doesn’t seem problematic to me.
Interesting point.
First off, very few individual papers give huge uncertainty estimates for climate sensitivity. The IPCC, in its aggregation process, gets an estimate of 1.5-4.5 C. But most of the papers it references give specific values or narrower ranges.
This is not necessarily a bad thing, because the purpose of any individual paper is not to settle the question definitively, but to provide a plausible approach and explore the answer that one might get through it, rather than categorically claim that to be the correct value. The purpose of a summary report (such as that offered by the IPCC) is to look at the totality of such work. Therefore, an individual paper (or blog post) that comes up with a narrow range is not a problem unless it claims to be the authoritative source for the climate sensitivity estimate.
Second, note that climate sensitivity is not the only source of unpredictability in climate. There are many others (ocean currents (somewhat predictable), solar activity (somewhat predictable), volcanic eruptions (inherently unpredictable)). It’s quite possible to have the view that climate sensitivity is well-understood, but climate is still very hard to forecast.
Third, I don’t know if there is a gap between skeptics and mainstream scientists in their view of how predictable climate is. Some people who are classified as skeptical have come up with relatively specific predictions about climate based on ocean currents, while others have called it a hopeless task. And some mainstream scientists express high confidence in particular estimates, while others have highlighted uncertainties.
I should add that there is a lot of garden-variety skepticism out there, of the form “Climate’s changed before, so it’s obviously no big deal if it changes now” or “We obviously can’t say anything about the climate!” or “obviously, humans can’t have an effect on the climate” with extremely high confidence, even though these statements are (quite likely) wrong. (Again, the statements may be right at any given time or in a particular circumstance, but they cannot be put forward as general principles with high confidence). I certainly don’t give weight to unsubstantiated views of this sort when I refer to AGW skepticism.
A summary of evidence from multiple sources should have a narrower confidence interval than the sources it summarizes, provided those sources accurately reflect the evidence they have. If it’s the other way around, that means those sources have made mistakes.
If I ask 3 people for a number and one tells me it’s between 11-12, one tells me 14-15 and one tells me 17-19 my conclusion would be that as a group they don’t really know what they are talking about.
Yes, maybe the IPCC should have concluded that we have no idea about climate sensitivity. But they needed to put some sort of estimate range that could be fed into their scenario analyses.
Anyway, I found an infographic of different climate sensitivity estimates here:
http://www.cato.org/blog/still-another-low-climate-sensitivity-estimate-0
Direct link to image:
http://object.cato.org/sites/cato.org/files/wp-content/uploads/gsr_042513_fig1.jpg
PS: I have no idea if the infographic accurately reflects all recent studies. The author is a global warming skeptic who has received money from oil and coal industries, so that should be cause for skepticism. But I think such an infographic would be hard to fudge. If anybody has a better source, I’d be happy to hear.
UPDATE: Added it to post at end of discussion of climate sensitivity estimates.
Why does the year 1998 keep showing up? Well, I know an answer, but it’s not pretty—it’s because it was an unusually warm year for the ocean surface (less cold water coming up from below), and thus is a common target for cherrypicking. Every time you pick an outlier as the end of your range, you insert bias one way or another—by getting an abnormally high warming rate from “1978-1998” or an abnormally low warming rate from “1998-present” (both used in this post).
This same problem shows up (if your presentation is right) with Easterbrook’s claim to have found a sinusoidal cycle in the ocean—a periodic cycle should not end on an unusually warm year for the ocean, it should end on an average year! But 1998’s high temperature means you can draw nice straight lines through it as an “elbow” in the graph, so of course it’s 1998.
That would be more shocking if the OP hadn’t specifically mentioned this:
To allay my concerns I would have also liked to see a discussion about what kind of bias is introduced by the hand-picked intervals that start/end at 1998, or even better using a presentation method like running means that doesn’t rely on hand-picked intervals.
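A running mean of the kind mentioned above can be sketched quickly. This uses synthetic data (a made-up trend, noise level, and outlier), not actual temperature records, purely to illustrate how smoothing reduces the leverage of a single warm year like 1998:

```python
import numpy as np

def running_mean(series, window):
    """Centered running mean; the result is shorter by window - 1 points."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode='valid')

# Synthetic annual anomalies: a linear trend plus noise, with one outlier year
rng = np.random.default_rng(0)
years = np.arange(1970, 2015)
anomalies = 0.02 * (years - 1970) + rng.normal(0, 0.1, years.size)
anomalies[years == 1998] += 0.4  # a single unusually warm year

# An 11-year centered window: the 1998 spike now moves any one smoothed
# value by at most 0.4/11, so no hand-picked endpoint can lean on it.
smoothed = running_mean(anomalies, 11)
```

The design point is that the smoothing window, not the analyst, decides how much weight any single year gets, which removes the endpoint-selection degree of freedom entirely.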
When it comes to this debate, I would really like to see more predictions from climate scientists that are about the future and not about the past. Having a government-funded prediction market where every qualified PhD student gets a certain amount of investment capital would be great.
Great point. I had come to a similar conclusion a while back, and Googled around to see if others had come up with the idea.
I came across this post by Robin Hanson:
http://www.overcomingbias.com/2009/11/its-news-on-academia-not-climate.html
(see the last two paras, and note he links to some people critiquing his idea in the last sentence).
I might discuss the possibilities for prediction markets in a later post.
I would have to disagree with this. I’m sure you would agree that you need to be careful before concluding that a bunch of weak evidence, put together, adds up to strong evidence. The classic example is psychic phenomena. There is lots and lots of weak evidence of psychic phenomena, including (allegedly) controlled experiments.
In the case of climate research, there is a potential problem of systemic bias. As climategate revealed, many climate scientists are more than disinterested observers; they are advocates for a position.
By analogy, imagine if Uri Geller wanted to convince the world that psychic phenomena are real. If he had 1 or 2 pieces of really strong evidence, it might be convincing. But if he presented 100 pieces of weak evidence, you would correctly dismiss his argument.