Time series forecasting for global temperature: an outside view of climate forecasting

Note: In this blog post, I reference a number of blog posts and academic papers. Two caveats to these references: (a) I often reference them for a specific graph or calculation, and in many cases I’ve not even examined the rest of the post or paper, while in other cases I’ve examined the rest and might even consider it wrong, (b) even for the parts I do reference, I’m not claiming they are correct, just that they provide what seems like a reasonable example of an argument in that reference class.

Note 2: Please see this post of mine for more on the project, my sources, and potential sources for bias.

As part of a review of forecasting, I’ve been looking at weather and climate forecasting. I wrote one post on weather forecasting and another on the different time horizons for weather and climate forecasting. Now, I want to turn to long-range climate forecasting, for motivations described in this post of mine.

Climate forecasting is turning out to be a fairly tricky topic to look into, partly because of the inherent complexity of the task, and partly because of the politicization surrounding Anthropogenic Global Warming (AGW).

I decided to begin with a somewhat “outside view” approach: if you were simply given a time series of global temperatures, what sort of patterns would you see? What forecasts would you make for the next 100 years? The forecast can be judged against a no-change forecast, or against the forecasts put out by the widely used climate models.

Below is a chart of four global surface temperature records since 1880, courtesy NASA:

Global Surface Temperature

The Hadley Centre dataset goes back to 1850. Here it is (note that the centering of the temperature anomaly axis is slightly different, because the anomalies are means over slightly different sets of numbers, but since we are interested only in the trend, that does not matter) (source):

HadCRUT4

Eyeballing, there does seem to be a secular upward trend in the temperature data. Perhaps the naivest way of estimating the rate of change is (final temperature - initial temperature)/(time interval). Using that method, we get a temperature increase of about 0.54 degrees Celsius per century.
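As a concrete illustration, here is a minimal sketch of that endpoint calculation in Python. The anomaly values are placeholders chosen to roughly reproduce the 0.54 degrees Celsius per century figure, not the actual dataset values.

```python
# Naive endpoint estimate of the warming rate: (final - initial) / (time span).
# The anomaly values below are made-up placeholders; substitute real HadCRUT/GISS values.
years = [1880, 2013]          # first and last year of the series
anomalies = [-0.20, 0.52]     # temperature anomalies (deg C) in those years, placeholder values

rate_per_year = (anomalies[-1] - anomalies[0]) / (years[-1] - years[0])
print(f"Endpoint estimate: {100 * rate_per_year:.2f} deg C per century")
```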

But using only the final and initial temperatures overweights those two values and ignores all the other temperature readings. A somewhat more sophisticated (though still fairly crude) approach is a linear regression model. I considered downloading the data and running the regression myself, but I found a picture of the regression online (source):

Linear regression for temperatures

Note that the regression line starts off a little lower than the actual temperature in 1850, and also ends a little lower than the actual temperature in the 2000s. The rate of growth is even lower here (about 0.4 degrees Celsius per century). The regression gives a lower rate than the endpoint calculation because temperature growth since the 1970s has been well above trend, and those well-above-trend temperatures carry more weight when we use only the final temperature than when we fit a regression line to the whole series.
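For those who would rather run the regression themselves, here is a minimal sketch, assuming the annual series has been saved as a two-column CSV of year and temperature anomaly; the file name is hypothetical.

```python
import numpy as np

# Ordinary least-squares fit of temperature anomaly against year.
# "hadcrut_annual.csv" is a hypothetical two-column file (no header): year, anomaly (deg C).
data = np.loadtxt("hadcrut_annual.csv", delimiter=",")
years, anomalies = data[:, 0], data[:, 1]

slope, intercept = np.polyfit(years, anomalies, 1)   # degree-1 polynomial = straight line
print(f"Trend: {100 * slope:.2f} deg C per century")

# Fitted values, e.g. to compare the regression line with the endpoints.
fitted = intercept + slope * years
```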

Linear plus periodic?

Another plausible story that seems to emerge from eyeballing the data is that the temperature trend is the sum of an approximately linear trend and a periodic trend, given by something like a sine wave. I found one analysis of this sort by DocMartyn on Judith Curry’s blog, and another in a paper by Syun Akasofu (note: there seem to be some problems with both analyses; I am linking to them mainly as simple examples of the rough nature of this sort of analysis, not as something to be taken very seriously). Note that both do more complicated things than look purely at temperature trends. DocMartyn explicitly introduces carbon dioxide as the source of the linear-ish trend, while Akasofu identifies “recovery from the Little Ice Age” as the source of the linear-ish trend and the Pacific Decadal Oscillation as the source of the sinusoidal trend (though as far as I can make out, one could use the same graph and argue that the linear trend is driven by carbon dioxide).

Here’s DocMartyn’s forecast:

DocMartyn's forecast

Here’s Akasofu’s picture:

Akasofu's figure
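A crude fit of this "linear plus sinusoid" form can be attempted directly. The sketch below uses scipy's curve_fit on a hypothetical annual anomaly file and a single sine term; it mirrors these analyses only in spirit and does not reproduce either author's actual procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit anomaly(t) = a + b*t + amp*sin(2*pi*t/period + phase): a linear trend plus one sinusoid.
def linear_plus_sine(t, a, b, amp, period, phase):
    return a + b * t + amp * np.sin(2 * np.pi * t / period + phase)

data = np.loadtxt("hadcrut_annual.csv", delimiter=",")  # hypothetical (year, anomaly) file
t, y = data[:, 0] - data[0, 0], data[:, 1]              # time measured from the first year

# Initial guesses matter a lot for the period; ~60 years is roughly what such analyses claim.
p0 = [y.mean(), 0.005, 0.1, 60.0, 0.0]
params, _ = curve_fit(linear_plus_sine, t, y, p0=p0)
print("linear trend per century:", 100 * params[1])
print("fitted period (years):", params[3])
```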

Autocorrelation and random walks

Simple linear regression is unsuitable for forecasting time series that exhibit autocorrelation: the value in any given year is correlated with the value in the previous year, independently of any long-term trend. As Judith Curry explains here, autocorrelation can create an illusion of trends even when there aren’t any. (This may seem a bit counterintuitive: if only temperature levels, and not temperature trends, exhibit autocorrelation, i.e., if temperature is basically a random walk, why should we see spurious trends? Read the whole post.) Not only can spurious linear-looking trends appear this way, so can spurious cyclical trends (see here).
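The point about spurious trends is easy to demonstrate by simulation. Here is a minimal sketch that generates trendless random walks and checks how often ordinary least squares nevertheless finds a sizable "trend"; the shock size is a made-up value, not calibrated to the actual temperature data.

```python
import numpy as np

# Simulate pure random walks (no underlying trend) and see how often an OLS fit
# nonetheless finds a "trend" comparable to the observed ~0.5 deg C/century.
rng = np.random.default_rng(0)
n_years, n_sims = 160, 10_000
sigma = 0.1                      # assumed year-to-year shock size (deg C), a made-up value

steps = rng.normal(0.0, sigma, size=(n_sims, n_years))
walks = steps.cumsum(axis=1)     # each row: one simulated temperature-like random walk

years = np.arange(n_years)
# Least-squares slope (deg C per year) for each simulated walk.
slopes = np.array([np.polyfit(years, w, 1)[0] for w in walks])
print("fraction of trendless walks with |trend| > 0.4 deg C/century:",
      np.mean(np.abs(slopes) * 100 > 0.4))
```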

Unfortunately, I don’t have a good understanding of the statistical tools (such as ARIMA models and unit root tests) that one would use to resolve such questions. I am aware of a few papers arguing that, despite the appearance of a linear trend above, the temperature series is more consistent with a random walk model. See, for instance, this paper by Terence Mills and the literature it references, much of which seems to come to conclusions against a clear linear trend. Mills also published an ungated paper covering similar ground in the Journal of Cosmology here, but the Journal of Cosmology is not a high-status journal, so publication there should not be treated as giving the paper more authority than a blog post.
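For completeness, here is what one standard check looks like in practice: an augmented Dickey-Fuller test for a unit root, via statsmodels. This is only one of several tests used in this literature (and not necessarily the one Mills uses), and the data file is hypothetical.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Augmented Dickey-Fuller test: the null hypothesis is a unit root (random-walk-like
# behaviour); a small p-value argues against it.
data = np.loadtxt("hadcrut_annual.csv", delimiter=",")   # hypothetical (year, anomaly) file
anomalies = data[:, 1]

stat, pvalue, *_ = adfuller(anomalies, regression="ct")  # "ct": allow constant + trend
print("ADF statistic:", stat, "p-value:", pvalue)
```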

Linear increase is consistent with very simple assumptions about carbon dioxide concentrations and the anthropogenic global warming hypothesis

Here’s a simple model that would lead to temperature increases being linear over time:

  • The only secular trend in temperature occurs from radiative forcing due to a change in carbon dioxide concentration.

  • The additive increase in temperature is proportional to the logarithm of the multiplicative increase in atmospheric carbon dioxide concentration (Wikipedia).

  • About 50% of carbon dioxide emissions from burning fossil fuels are retained by the atmosphere. The magnitude of annual carbon dioxide emissions is roughly proportional to world GDP, which is growing exponentially, so emissions are growing exponentially, and therefore the cumulative retained emissions, and hence the atmospheric carbon dioxide concentration (at least the part above the pre-industrial baseline), are also growing roughly exponentially.

Apply a logarithm to an exponential, and you get a linear trend line in temperature.

(As we’ll see, while this looks nice on paper, actual carbon dioxide growth hasn’t been exponential, and actual temperature growth has been pretty far from linear. But at least it offers some prima facie plausibility to the idea of fitting a straight line).
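To make the arithmetic of the simple model concrete, here is a minimal sketch; the sensitivity per doubling and the growth rate are illustrative placeholders, not estimates.

```python
import numpy as np

# If CO2 grows exponentially, C(t) = C0 * (1 + g)**t, and the temperature response is
# S * log2(C(t) / C0) (S = warming per doubling of CO2), then T(t) = S * t * log2(1 + g):
# a straight line in t. S and g below are illustrative placeholders, not estimates.
S, g, C0 = 3.0, 0.005, 280.0
t = np.arange(0, 200)
C = C0 * (1 + g) ** t
T = S * np.log2(C / C0)

# The year-to-year increments are constant, confirming the trend is exactly linear.
print(np.allclose(np.diff(T), S * np.log2(1 + g)))
```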

Turning on the heat: the time series of carbon dioxide concentrations

So how have carbon dioxide concentrations been growing? Since 1958, the Mauna Loa observatory in Hawaii has been tracking atmospheric carbon dioxide concentrations. The plot of the concentrations is termed the Keeling curve. Here’s what it looks like (source: Wikipedia):

Keeling curve

The growth is sufficiently slow that the distinction between linear, quadratic, and exponential isn’t visible to the naked eye, but if you look carefully, you’ll see that growth from 1960 to 1990 was about 1 ppm/year, whereas growth from 1990 to 2010 was about 2 ppm/year. Unfortunately, the Mauna Loa data go back only to 1958. But there are other data sources. In a blog post attempting to compute equilibrium climate sensitivity, Jeff L. finds that the 1832-1978 Law Dome dataset does a good job of matching the Mauna Loa dataset’s atmospheric carbon dioxide concentration values over the period of overlap (1958-1978), so he splices the two datasets for his analysis (note: commenters on the post pointed out many problems with it, and while I don’t know enough to evaluate it myself, my limited knowledge suggests the criticisms are spot on; however, I’m using the post just for the carbon dioxide graph):

Law Dome carbon dioxide record

Note that it’s fairly well established that carbon dioxide concentrations in the 18th century, and probably for a few centuries before that, were about 280 ppm. So even if the specifics of the Law Dome dataset aren’t reliable, the broad shape of the curve should be similar. Notice that the growth from 1832 to around 1950 was fairly slow. In fact, even from 1900 to 1940, the fastest-growing part of that period, carbon dioxide concentrations grew by only 15 ppm in 40 years. From what I can judge, there seems to have been an abrupt shift around 1950 to a rate of about 1 ppm/year. Neither a linear nor an exponential curve explains such a shift. And as noted earlier, the rate of growth seems to have gone up a lot again around 1990, to about 2 ppm/year. The shift around 1950 is probably explained by post-World War II global economic growth, including industrialization in the newly independent colonies, and the shift around 1990 by the rapid take-off of economic growth in India combined with the acceleration of economic growth in China.
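Here is a rough sketch of the splicing idea and of computing decadal growth rates to see those shifts; both file names and formats are hypothetical, and this is not Jeff L.'s actual procedure.

```python
import numpy as np

# Sketch of the splicing idea: use ice-core values before the instrumental record starts
# and Mauna Loa values afterwards. Both file names and formats are hypothetical.
law_dome = np.loadtxt("law_dome_co2.csv", delimiter=",")    # columns: year, ppm (1832-1978)
mauna_loa = np.loadtxt("mauna_loa_co2.csv", delimiter=",")  # columns: year, ppm (1958-present)

spliced = np.vstack([law_dome[law_dome[:, 0] < 1958], mauna_loa])

# Decadal growth rates (ppm per year), to see the shifts around 1950 and 1990.
for start in range(1840, 2010, 10):
    decade = spliced[(spliced[:, 0] >= start) & (spliced[:, 0] < start + 10)]
    if len(decade) > 1:
        rate = (decade[-1, 1] - decade[0, 1]) / (decade[-1, 0] - decade[0, 0])
        print(f"{start}s: {rate:.2f} ppm/year")
```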

To the extent that the AGW hypothesis is true, i.e., the main source of long-term temperature trends is radiative forcing from changes in carbon dioxide concentration, perhaps looking for a linear trend isn’t advisable, given the significant changes in the rate of carbon dioxide growth over time (specifically, the fact that carbon dioxide concentrations haven’t grown exponentially, but have historically exhibited a piecewise pattern with abrupt changes in growth rate). So perhaps it makes more sense to regress temperature directly against the logarithm of carbon dioxide concentration? Two such exercises were linked above: DocMartyn’s post on Judith Curry’s blog, and Jeff L.’s blog post attempting to compute equilibrium climate sensitivity. Both seem like decent first passes but are also problematic in many ways.

One of the main problems is that the temperature response to changes in carbon dioxide concentration doesn’t all occur immediately. So the memoryless regression approach used by Jeff L., which basically just asks how correlated the temperature in a given year is with the carbon dioxide concentration in that year, fails to account for the fact that the temperature in a given year may be influenced by carbon dioxide concentrations over the preceding years. In other words, there could be a lag between an increase in carbon dioxide concentrations and the full corresponding increase in temperature.
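As an illustration of the difference, here is a minimal sketch that runs both a memoryless regression of temperature on log(CO2) and a crude lagged version using an exponential moving average of past concentrations. The file names, the assumption that the two series cover the same years, the smoothing constant, and the functional form are all assumptions for illustration, not anyone's published method.

```python
import numpy as np

# Regress temperature anomaly on log(CO2): once contemporaneously ("memoryless") and once
# against an exponentially weighted average of past concentrations, as a crude stand-in
# for a lagged response.
temp = np.loadtxt("hadcrut_annual.csv", delimiter=",")   # year, anomaly (hypothetical file)
co2 = np.loadtxt("co2_annual.csv", delimiter=",")        # year, ppm (same years assumed)
y, logc = temp[:, 1], np.log(co2[:, 1])

# Memoryless version: T_t ~ a + b * log(C_t).
b, a = np.polyfit(logc, y, 1)
print("memoryless sensitivity per CO2 doubling:", b * np.log(2))

# Lagged version: replace log(C_t) with an exponential moving average of past values.
alpha = 0.1                      # assumed adjustment rate per year, a made-up value
smoothed = np.empty_like(logc)
smoothed[0] = logc[0]
for i in range(1, len(logc)):
    smoothed[i] = alpha * logc[i] + (1 - alpha) * smoothed[i - 1]
b_lag, a_lag = np.polyfit(smoothed, y, 1)
print("lagged sensitivity per CO2 doubling:", b_lag * np.log(2))
```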

Still, the prima facie story doesn’t seem to bode well for the AGW hypothesis:

  • Carbon dioxide concentrations have not only been rising, they’ve been rising at an increasing rate, with notable changes in the rate of increase around 1950 and then again around 1990.

  • Temperature exhibits a fairly different pattern. It was roughly flat from 1945 to 1978, rose quickly from about 1978 to 1998, and has been roughly flat (with a very minor warming trend) from 1998 to the present.

So even a story of carbon dioxide acting with a lag doesn’t provide a good fit to the observed temperature trend.

There are a few different ways of resolving this. One is to return to the point made earlier about how the actual temperature is a sum of the linear trend (driven by greenhouse gas forcing) plus a bunch of periodic trends, such as those driven by the PDO, AMO, and solar cycles. This sort of story was described by DocMartyn on Judith Curry’s blog and in the paper by Syun Akasofu referenced above.

Another common explanation is that the 1945-1978 lack of warming (and, according to some datasets, moderate cooling) is explained by an increased concentration of aerosols, which blocked sunlight and therefore canceled out the warming effect of carbon dioxide. Indeed, in the early 1970s there were concerns about global cooling due to aerosols, but there were also a few voices noting that over the somewhat longer run, as aerosol concentrations were brought under control, the greenhouse effect would dominate and we’d see rapid temperature increases. Given the way temperatures unfolded in the 1980s and 1990s, the people calling for global warming in the 1970s seemed unusually prescient. But the pause (or at any rate, significant slowdown) in warming after 1998, despite the accelerating rate of carbon dioxide emissions, suggests that there’s more to the story than just aerosols and carbon dioxide.

UPDATE: Some people have questioned whether there was a pause or slowdown at all, and whether using 1998 as a start year is misguided because it was an unusually hot year due to a strong El Niño. 1998 was indeed unusually hot, and the lack of warming relative to 1998 over the next few years was explainable in terms of 1998 being an anomaly. But the period since then is long enough that the slowness of warming can’t be explained solely by 1998 being very warm. For a list of the range of explanations offered for the pause in warming, see here.

Should we start using actual climate science now?

The discussion above was very light on both climate science theory and highbrow statistical theory. We just looked at global temperature and carbon dioxide trends, eyeballed the graphs, and tried to reason about what sort of growth patterns were present. We didn’t talk about what the theory says, what independent lines of evidence there are for it, what other indicators (such as regional temperatures) might be used to test the theory, or what historical (pre-1800) data can tell us.

A more serious analysis would consider all of these. But here is what I believe: if a more complicated model cannot consistently beat simple benchmarks such as persistence, a random walk, a random walk with drift, or simple linear regression, then the model is not ready for prime time as a forecasting tool. There may still be insights to be gleaned from the model, but its ability to forecast the future is not one of its selling points.
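To make the benchmark-beating criterion concrete, here is a minimal sketch of how one might score simple baselines on a holdout period; the data file and the 30-year split are arbitrary choices for illustration.

```python
import numpy as np

# Compare naive baselines on a holdout: persistence (last value), drift (last value plus
# the historical mean change), and a linear-trend extrapolation. Any fancier model should
# at least beat these out of sample. The data file is hypothetical.
data = np.loadtxt("hadcrut_annual.csv", delimiter=",")
years, y = data[:, 0], data[:, 1]

split = len(y) - 30                      # hold out the last 30 years
train_y, test_y = y[:split], y[split:]
train_t, test_t = years[:split], years[split:]
h = np.arange(1, len(test_y) + 1)        # forecast horizons in years

persistence = np.full_like(test_y, train_y[-1])
drift = train_y[-1] + h * np.diff(train_y).mean()
slope, intercept = np.polyfit(train_t, train_y, 1)
linear = intercept + slope * test_t

for name, pred in [("persistence", persistence), ("drift", drift), ("linear", linear)]:
    print(name, "RMSE:", np.sqrt(np.mean((pred - test_y) ** 2)))
```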

The history of climate modeling so far suggests that such success has been elusive (see this draft paper by Kesten C. Green, for instance). In hindsight from a 1990s vantage point, those in the 1970s who bucked the “global cooling” trend and argued that the greenhouse effect would dominate seemed very prescient. But the considerable slowdown of warming starting around 1998, even as carbon dioxide concentrations grew rapidly, took them (and many others) by surprise. We should keep in mind that financial markets offer many stories of trading strategies that appear to succeed for long periods of time, far exceeding what chance alone would suggest, and then suddenly stop working. Financial markets differ from the climate (humans compete in them and eat away at each other’s strategies), but the problem remains that something (like “the earth is warming”) may have been true over some decades for reasons quite different from those posited by the people who successfully predicted it.

Note that even without the ability to make accurate or useful climate forecasts, many tenets of the AGW hypothesis may hold, and may usefully inform our understanding of the future. For instance, it could be that the cyclic trends and sources of random variation are bigger than we thought, but the part of the increase in temperatures due to increasing carbon dioxide concentrations (measured using the transient climate response or the equilibrium climate sensitivity) is still quite large. That basically means we will see (large increase) + (large variation). In that case, the large increase still matters a lot, but it would be hard to detect using climate forecasting, and hard to use to make better climate forecasts. But if that’s the case, then it’s important to be all the more sure of the other lines of evidence being used to obtain the equilibrium climate sensitivity estimate. More on this later.

Critique of insularity

I want to briefly mention a critique offered by forecasting experts J. Scott Armstrong and Kesten Green (I mentioned both of them in my post on general-purpose forecasting and the associated community). Their Global Warming Audit (PDF summary, website with many resources) looks at many climate forecasting exercises from the outside view and finds that climate forecasters pay little attention to general forecasting principles. One might detect a bit of a self-serving element here: Armstrong isn’t happy that climate forecasters are engaging in such a big and monumental exercise without consulting him or referring to his work, and an uncharitable reading is that he feels slighted at being ignored. On the other hand, if you believe that the forecasting community has come up with valuable insights, the critique that climate forecasters didn’t even consider those insights in their work is a fairly powerful one. (Things may have changed somewhat since Armstrong and Green originally published their critique.) Broadly, I agree with some of Armstrong and Green’s main points, but I think their critique goes overboard in some ways (to quite an extent, I agree with Nate Silver’s treatment of their critique in Chapter 12 of The Signal and the Noise). But more on that later. Also, I don’t know how representative Armstrong and Green are of the forecasting community in their view on the state of climate forecasting.

I have also heard anecdotal evidence of similar critiques of insularity from statisticians, geologists, and weather forecasters. In each case, the claim has been that the work in climate science relied on methods and insights better developed in the other disciplines, but the climate scientists did not adequately consult experts in those domains, and as a result, made elementary errors (even though these errors may not have affected their final conclusions). I currently don’t have a clear picture of just how widespread this criticism is, and how well-justified it is. I’ll be discussing it more in future posts, not so much because it is directly important but because it gives us some idea of how authoritative to consider the statements of climate scientists in domains where direct verification or object-level engagement is difficult.

Looking for feedback

Since I’m quite new to climate science and (largely, though not completely) new to statistical analysis, it’s quite possible that I made some elementary errors above. Corrections would be appreciated.

It should be noted that when I say a particular work has problems, it is not a definitive statement that that work is false. Rather, it’s simply a statement of my impression, based on a cursory analysis, that describes the amount of credibility I associate with that work. In many cases, I’m not qualified enough to offer a critique with high confidence.