Is the AMO the explanation for the 1940-1970 temperature standstill?

During the past week we have had interesting discussions on this blog about a new paper that tries to filter the AMO signal out of the global temperature series. Several commenters thought that the paper contained serious flaws. Among them was Guido van der Werf, a scientist working at the Free University in Amsterdam. His research mainly focuses on forest fires and their effects on the carbon cycle. Guido is a friend and I greatly appreciate his efforts to replicate the results of the Zhou/Tung paper. He presents his findings in this guest post.

Guest post by Guido van der Werf

In a recent post Marcel Crok highlighted a new paper by Jiansong Zhou and Ka-Kit Tung in the Journal of the Atmospheric Sciences which halves recent anthropogenic warming and indicates no acceleration of anthropogenic warming over the past 100 years. If true, this is obviously a very important finding. The paper follows on two other papers, by Lean and Rind (2008) and Foster and Rahmstorf (2011), aiming to disentangle the anthropogenic and natural influences seen in the instrumental temperature record. Three different temperature datasets are shown below.

To me, studies using the temperature record to assess the relative importance of natural and anthropogenic factors are very interesting, partly because they can easily be repeated and are based on the best data we have. The disadvantage is that they describe what happened in the past; although this yields information about what may happen in the future, it is not a substitute for climate models.

Just a little bit of background information: climate is obviously impacted by a large number of forcings occurring on different time scales, both anthropogenic and natural. For example, after the eruption of Mt. Pinatubo in the Philippines in 1991 the global temperature dropped for 2-3 years. And the year 1998 was much warmer than the previous and following years because of a strong El Niño episode in 1997-1998. These two events are pointed out in the temperature graph above.

Over decadal time scales things become more complicated. There are at least two ocean oscillations that may impact climate on these time scales: the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO), plotted below with positive phases (leading to warming) shown in red.

Now, what Zhou and Tung tried to do is subtract the influence of natural variations (AMO, sun, volcanoes, and the El Niño–Southern Oscillation, or ENSO). In theory, what is left behind is the anthropogenic signal. The tool to do this is multiple linear regression (MLR), which basically gives each variable the weight that best matches the temperature record. Thanks to the well-written paper I was able to reconstruct the results shown in their Figure 1a:
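To make the regression step concrete, here is a minimal sketch in Python (the language I used for the analysis). The forcing series, weights, and noise below are invented for illustration; they are not the datasets used by Zhou and Tung.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150  # years of data, roughly 1856-2005

# Invented stand-ins for the natural predictors: solar, volcanic, ENSO, AMO
natural = rng.standard_normal((n, 4))
true_weights = np.array([0.05, -0.15, 0.10, 0.20])

# Predefined shape of the anthropogenic influence (here: a linear trend)
anthro_shape = 0.005 * np.arange(n)

# Synthetic "observed" temperature: natural terms + anthropogenic + noise
temp = natural @ true_weights + anthro_shape + 0.01 * rng.standard_normal(n)

# MLR: give each predictor the weight that best matches the temperature
design = np.column_stack([natural, anthro_shape, np.ones(n)])
coef, *_ = np.linalg.lstsq(design, temp, rcond=None)

# Subtracting the fitted natural terms leaves the anthropogenic signal
adjusted = temp - natural @ coef[:4]
```

With clean synthetic data the recovered weights match the true ones almost exactly; with real data the weights depend heavily on which predictors you include, which is the point of the discussion below.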

Original and replicated Figure 1a of Zhou and Tung, showing the temperature record after the effects of the sun, volcanoes, and ENSO have been subtracted, which yields a smoother curve than the “raw” temperature. A cycle is still present that looks very similar to the AMO. The novelty of the Zhou and Tung study over earlier ones is that they tried to compensate for this: basically, you add the AMO to the statistical method to filter out the natural signals. I was not able to fully replicate their data but came close:

Original and replicated Figure 1b of Zhou and Tung: the temperature record after the effects of the sun, volcanoes, ENSO, and AMO have been taken out. Suddenly the oscillation seen in Figure 1a is gone, and the trend is smaller. In my data I do find a small acceleration that was not seen in Zhou and Tung; the differences are small and may be due to their smoothing of the AMO data, but I am not sure.

To make these graphs they made one crucial assumption. Since none of the natural forcings or oscillations has a trend that can explain the temperature record, one has to predefine the shape of the anthropogenic influence over time to be included in the MLR. Lean and Rind (2008) used the net effect of greenhouse gases (warming), changes in albedo (cooling), and aerosols (cooling). Foster and Rahmstorf (2011) used a linear trend, but their study covered only the period from 1979 onwards because they included the satellite temperature records, which are only available since 1979. For that period the net effect may be rather linear, at least according to the data used by Lean and Rind (2008).

Zhou and Tung used a linear trend as well, but over a much longer time period. So basically they assumed that the rate of anthropogenic warming in 1856 was the same as in 2011. Below I repeat their analysis, but instead of a linear trend I use the time evolution of the CO2 concentration, neglecting other greenhouse gases and aerosols which to some degree level out.
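The difference between the two assumed shapes can be illustrated with a small sketch. The CO2 curve below is an idealized exponential of my own making, not the actual concentration record used in the regression:

```python
import numpy as np

years = np.arange(1856, 2012)

# Linear anthropogenic shape (the Zhou and Tung assumption), scaled 0..1
linear_shape = (years - years[0]) / (years[-1] - years[0])

# Idealized CO2 concentration in ppm: slow early growth, fast recently
co2 = 285.0 + 5.0 * np.exp((years - 1856) / 45.0)
co2_shape = (co2 - co2[0]) / (co2[-1] - co2[0])

# Fraction of the total 1856-2011 rise that falls in the last 30 years
recent_rise_linear = np.ptp(linear_shape[-30:])
recent_rise_co2 = np.ptp(co2_shape[-30:])
```

Because the CO2-based shape puts most of its rise in recent decades, the implied anthropogenic warming accelerates instead of staying constant over the whole record.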

Original and replicated Figure 1b of Zhou and Tung, with the CO2 concentration as the predefined shape of the anthropogenic influence in the replicated data, instead of a linear trend. The correlation coefficient between modeled and measured temperature is 0.95.

The difference with Zhou and Tung is clear: the acceleration of anthropogenic warming is stronger. Interestingly, both approaches indicate no halting of anthropogenic warming after 2000; the prevalence of La Niñas cancels out the anthropogenic forcing.

1940-1970 standstill
What is even more interesting is that the 32- and 50-year trends are still substantially lower than reported earlier. In other words, according to this simple analysis, part of the 1970-2000 warming was due to the upswing of the AMO. Similarly, the flat 1940-1970 temperatures may have been the result of the downswing of the AMO canceling the greenhouse gas warming. In my opinion this provides a more elegant explanation than the aerosol explanation most commonly used, but more research is needed to pinpoint whether this is true.

If we were to convert the CO2 concentration to radiative forcing (RF) using the well-established formula RF = 5.35 * ln(CO2 / CO2pre-industrial) and put this in the MLR, the slope for RF is 0.42 degrees C of warming per W/m2. A doubling of CO2 corresponds to 3.7 W/m2, so combining these numbers yields a transient climate sensitivity of 0.42 * 3.7 ≈ 1.5 degrees C. Clearly, these are little more than back-of-the-envelope calculations, but still valuable.
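That arithmetic can be checked directly. The formula and the 0.42 slope are as given above; the 280 ppm pre-industrial concentration is a commonly used value, not a number from the paper:

```python
import math

def radiative_forcing(co2_ppm, co2_preindustrial_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m2 (Myhre et al. 1998 formula)."""
    return 5.35 * math.log(co2_ppm / co2_preindustrial_ppm)

slope = 0.42                                       # deg C per W/m2, from the MLR
forcing_doubling = radiative_forcing(2 * 280.0)    # ~3.7 W/m2 for a CO2 doubling
transient_sensitivity = slope * forcing_doubling   # ~1.5 deg C per doubling
```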



Conclusions:

  • It is important to include the AMO in any regression analysis used to deduce the warming trend, as shown by Zhou and Tung
  • The outcome of this exercise is highly dependent on the assumed shape of the anthropogenic signal. If we assume that the anthropogenic influence is not linear, then anthropogenic warming is accelerating
  • The climate sensitivity of the system seems lower than the usually assumed values, although this also depends on the time period covered in the MLR
  • Anthropogenic warming has not stopped around 2000; this result is independent of the assumed shape of the anthropogenic influence in the MLR


Major uncertainties in this approach are related to:

  • The temperature record, which in the 19th and early 20th century is based on a limited number of stations
  • The anthropogenic influence on the AMO; we simply used the detrended values, but accounting more precisely for the anthropogenic impact may give different results
  • And of course the shape of the anthropogenic influence. A linear influence seems unjustified, but the CO2 concentration as a proxy, as I used it, is not perfect either
  • For the calculation of the transient climate sensitivity, the relative role and time evolution of greenhouse gases other than CO2 and of aerosols is crucial.
  • And anything else I overlooked, please respond!

32 comments to Is the AMO the explanation for the 1940-1970 temperature standstill?

  • Jos Hagelaars

    Hello Guido,

    Very nice investigation of the Zhou-Tung article. I didn’t do the mathematics or read their paper (paywall), so I take your word for it that they used a linear anthropogenic trend from 1856. That would be rather silly.

    I think you jump to conclusions in stating that the AMO is a better explanation for the 1945-1970 standstill than aerosols are. Humanity has emitted a lot of sulfur dioxide over the last century, and emissions increased quickly after WW2. See for instance figure 3a in
    Physics tells us that aerosols do influence incoming solar light (Mie scattering) and also have indirect effects, like on cloud formation. Therefore they can’t be excluded in an analysis over the past century.

    I did some multiple regression in the past on GISTEMP (monthly values) starting in 1900, using TSI, stratospheric aerosols, Nino 3.4 index and the sum of GHG’s and tropospheric aerosol forcings of Giss. If I include the AMO index from KNMI (based on HadSST2 minus the global warming signal), the R2 of the regression only shows a very minor improvement.

    Maybe it is interesting to include the aerosols in your multiple regression (e.g. sulfur dioxide emissions) and then have a look at the influence of every component.
    Figure 3a of Smith et al 2011 shows that the sulfur dioxide emissions started to rise again after 2000, so, besides El Nino’s/La Nina’s, the aerosols must have played a part in the development of global temperature in the last decade.
    As you state here, one should be careful with the temperature record in the first part of the 20th century and in the 19th century. Indeed, there were fewer stations, and relatively many of them in the Europe – America region, so relatively more stations around the Atlantic Ocean (the A in AMO). So, could there be a higher correlation with the AMO in the first part of the 20th century than in the latter part?


    PS, the first graph you present here is incorrect, did you forget to take the difference in reference periods into account?
    See for a quick look here:
    Also the trend data in the graph of GISTEMP are incorrect; over the period January 1979 to December 2011 the trends (based on monthly data) are, in °C/decade:
    GISTEMP = 0.16
    HadCRUT4 = 0.16
    UAH = 0.14

  • Guido

    Thanks Jos – good suggestions, I will try to include them in a follow-up or report on the findings here. The SO2 emissions are interesting, and I can understand that they played a role in the flat temperature. On the other hand, emissions decreased over 1975-2000, so this may also explain part of the temperature rise over that period.

    Thanks also for finding the mistake in the temperature graph, I made the mistake of using temperature over land only. The graph is corrected now.

  • Hans Erren

    Guido can you show the residual after subtracting the co2 forcing using a sensitivity of 1.5 degrees C per doubling?

  • Arjan

    Marcel, it is quite well known that aerosols are the predominant cause of the mid 20th century cooling (nb: I’m not saying it is the ONLY reason). The statement that the AMO natural variability is a more elegant explanation is quite outrageous, and though you personally might like it, it is certainly not backed up by the current state of knowledge of the climate system!

    [MC: Arjan, this is a guest post written by Guido van der Werf.]

  • Hans Erren

    Arjan it is quite well known that aerosols are the fudge factor in the climatemodels. High CO2 sensitivity models have strong aerosol cooling and low CO2 sensitivity models have weak aerosol cooling.

  • Guido

    Hans – will do tomorrow, good suggestion, thanks.

    Arjan – you may be right, but I know of at least a few climate scientists who are not fully comfortable with the aerosol explanation. I agree with you that it has contributed, but how much is not resolved, I would say. And again, if aerosols have contributed to the standstill, then the aerosol decrease has also contributed to the 1970-2000 warming. Probably a more detailed look into the different hemispheres and land versus ocean may help disentangle the effects. I will dig up some literature, but if you have good suggestions to back up your claim please let me know.

  • Jos Hagelaars

    Guido, have a look at this picture regarding the period 1998-2011:
    The cooling due to El Nino/La Nina since 1998 is quite clear, as is the cooling around Antarctica and the seas in the eastern part of Asia. The Arctic has warmed a lot in this period, which is an understatement.
    Full text at:

    And of course the decrease in aerosols has contributed to the 1970-2000 warming, e.g. the US Clean Air Act started in 1963 and was expanded in 1967:

  • Guido

    Hans, please see here for a plot with data and MLR model as well as residuals. Based on the CO2 concentration as a proxy for anthropogenic influence instead of a linear trend.

  • Rob Dekker


    Thank you very much for this re-analysis of the Zhou and Tung paper !
    That’s great work, and I admire you for spending the time to replicate their results.

    I argued that their resulting trend is artificially created, because of the AMO index they used. They regressed the AMO index out of the global temperature record, but since the AMO index mostly consists of the global temperature signal itself, they ended up with the linear trend that is subtracted from the AMO index by definition.

    Since Zhou and Tung’s trend is allegedly a result of their own method, I don’t think it makes a lot of sense to actually use their results as a basis for further analysis.

    You choose to still start with their results, and build on that.

    So, my first question is : which assumptions are you scientifically making by using the paper in dispute as the foundation for your analysis ?

    Further :

    Zhou and Tung argued that if you filter out the AMO, and other natural forcings, that you end up with a 0.08 C/decade over the past century.

    Your conclusion is now that at least for the past 30-50 years, if you remove the AMO and other natural forcings, that you end up with a 0.113 C/decade warming. On this, you state

    What is even more interesting is that the 32 and 50 year trend is still substantially lower than reported earlier.

    which is indeed still smaller than the 0.17 C/decade that Lean and Rind (2008) and Foster and Rahmstorf (2011) concluded.

    Why did you obtain a different trend than Lean and Rind and Foster and Rahmstorf ?

    To explore that question, and potentially find an answer, I first have a few remarks and questions :

    How exactly did you obtain your result that recent (past 30-50 years) trend is increased from Zhou and Tung’s 0.08 C/decade to your 0.113 C/decade ?

    Is it correct that you obtained that new trend theoretically from the non-linear CO2 concentration measurements ?
    If so, which CO2 concentration data did you use ? And did you use the RF = 5.35 * ln(CO2 / CO2pre-industrial) formula to derive the temperature effect of that concentration change ?
    If so, are you aware that this formula is a temperature EQUILIBRIUM formula, which essentially projects the temperature change hundreds or thousands of years into the future ? And that thus this formula has very little to do with a dynamic process of non-linearly increasing CO2 concentrations, which of course will be much smaller than the formula suggests ?

    If indeed you based your analysis on this (“well established”) equilibrium formula, are you really that surprised that you are obtaining a much lower trend than the actual observations that Lean and Rind and Foster and Rahmstorf report ?

  • Rob Dekker

    Sorry, Guido, I take my speculations about the equilibrium formula back.
    In fact, I don’t know where you obtained that formula. Can you tell us ? and also why you think it applies for the increase in CO2 over the past couple of decades ?

  • Arjan

    @Hans Erren: There is a large uncertainty in the aerosol forcing and its coupling with the hydrological cycle. It is terribly frustrating (and maybe also quite bad for our future) that though we have tremendously improved our knowledge of both in the last 30 years, we do not seem to be able to significantly reduce the uncertainties in the climate effects of both. Just adding more details does not seem to work (yet..).

    The last 150 years of surface measurements are not a good constraint for climate sensitivity to greenhouse gas forcing. In general, on the extreme sides of the spectrum, climate models with high climate sensitivity have large negative aerosol forcing to be able to simulate the observed temperature series. However, there are other ways to estimate both climate sensitivity and aerosol forcing, so those extreme cases are unlikely.

    In the AMO observations both the anthropogenic GHG warming and the aerosol forcing are included (both lead to decadal variations in sea surface temperatures, especially aerosols), so you have to be extremely careful about attributing decadal variations in global surface temperatures to decadal variations in a subset of surface temperatures (the AMO; you do not accurately know the physical attribution of the contributions to the observed AMO patterns, e.g. how much of the observed sea surface pattern was caused by aerosol forcing, how much by anthropogenic greenhouse gas forcing, and how much by ocean internal variability?). The same is true for the PDO, which used to be the “skeptics” toy to explain the observed climate variability.

  • Cees de Valk

    I would have no problem with this type of regression analysis if it were presented as a “what if” exercise (as check of consistency of certain assumptions with the data) but it proves nothing and you can’t attach much meaning to it. It’s a kind of game.

    In particular the interpretation of the regression on a linear trend or CO2 concentration (the log of which is almost linear) as “anthropogenic” is pure speculation (Would there be no long term trend without human effects? That is a strong assumption. All we know is that climate fluctuates on a wide range of time scales. And many other possibly relevant variables have also shown similar trends).

    These regression analyses constitute (implicitly or explicitly) fitted causal models but the agreement to such a simple signal as the century-scale global near-surface temperature proves nothing: a similarly close match can be obtained with other regression models based on widely different hypotheses and/or input signals. Even comparing such predictions from the past against recent data will not help much in practice, because any “inconvenient” outcome can and will be “explained away” with hindsight. There is just not enough information in the signal to differentiate between hypotheses.

    We need to look at measurements of many more variables, their spatial variability, etc. to be able to falsify hypotheses, and even then, the danger of spurious agreement caused by model fitting remains huge, provided that there are enough parameters to adjust (and indeed there are), or we are “tolerant enough” to mismatch (when should a model be considered fit for forecasting the climate? No one can tell).

  • Jos Hagelaars


    I tried to reproduce your results using multiple regression on HadCRUT4 year-average temperatures against TSI, Stratospheric Aerosols, Nino3.4, LN(CO2) and AMO/PDO/non-ghg-human forcings(Giss), but ended up with quite different numbers.
    I got the following trends for the human part of global warming signal from 1970 to 2010, in °C/decade:
    without AMO: 0.166
    with AMO: 0.141
    with Aerosol-Forcings: 0.162
    with AMO and Aerosol-Forcings: 0.134
    Some of the difference could be due to the fact that I started with the regression in 1900, to leave out the period with the largest uncertainty in the data.

    Did you use the standard AMO index?
    The AMO is based on sea surface temperatures and of course there is a global warming signal present in the SST’s. Using such an index in a multiple regression analysis with global temperatures will most certainly lead to a correlation that is much too large. It is imperative that an AMO index is used where the global warming signal is carefully removed, e.g. like:

    There is always a possibility that even in the KNMI de-regressed AMO index a part of the global warming signal is still present. If that is the case, the reduction of the trend will probably be too large and therefore the remaining trend would be larger than the 0.13/0.14 °C/decade.
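    The de-regression described here can be sketched as follows. The series are synthetic (a linear global warming signal plus a ~65-year oscillation); the actual KNMI index is built from observed SST fields and their procedure is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 160  # years

# Synthetic ingredients: a global warming trend plus a ~65-year oscillation
global_signal = 0.006 * np.arange(n)
oscillation = 0.2 * np.sin(2 * np.pi * np.arange(n) / 65)
natl_sst = global_signal + oscillation + 0.02 * rng.standard_normal(n)

# Regress the global warming signal (plus a constant) out of the North
# Atlantic series; the residual is the de-regressed AMO index
design = np.column_stack([global_signal, np.ones(n)])
beta, *_ = np.linalg.lstsq(design, natl_sst, rcond=None)
amo_index = natl_sst - design @ beta
```

    The residual keeps the oscillation but has no trend left, so using it as a regressor no longer soaks up part of the global warming signal.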

    There is a possibility, as I indicated before, that the correlation with AMO is also too high because of the relatively large amount of temperature stations in the Atlantic region before WW2 and certainly before 1900. Another reason why I started in 1900 with my HadCRUT4 regression.
    Some proof that the correlation between the AMO and global T before WW2 is too high can be found with the Foster & Rahmstorf regression method, which uses monthly data. I repeated this but now included the KNMI AMO index. The result is that the influence of the extra parameter AMO is not significant and its influence is zero, which could be an indication that the AMO does not have any influence at all on global temperature.

    The same applies for PDO, the influence of this oscillation index is also not significant and therefore zero in the HadCRUT4 year-data regression as well as in the F&R regression.

    My conclusion would be that PDO has no influence on the average world temperatures and the AMO maybe a little bit. These two oscillations are often used by ‘so-called skeptics’ to explain a large part of the global warming, but as far as I can tell, this claim is not backed-up by solid science at all.

    @Cees de Valk

    “regression on a linear trend or CO2 concentration (the log of which is almost linear) as “anthropogenic” is pure speculation”

    This is nonsense; physics tells us that CO2 has a warming effect, without feedbacks about 1.1 °C per doubling of the atmospheric concentration. Doing a regression with a lot of variables that have no physical meaning would be pure speculation.

  • Hans Erren

    Be aware that global population correlates 99% with CO2, and that e.g. UHI is proportional to log(pop). Statistically it is therefore impossible to distinguish between effects related to log(pop) and effects related to log(CO2).
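    This collinearity problem can be demonstrated numerically. The series below are invented stand-ins for log(CO2) and log(population); with two nearly identical regressors, the sum of their coefficients is well determined but the split between them is not:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100)
log_co2 = np.log(300.0 + t)                            # slowly rising regressor
log_pop = log_co2 + 0.0005 * rng.standard_normal(100)  # near-perfect twin

# Temperature driven entirely by log_co2, plus measurement noise
temp = 2.0 * log_co2 + 0.05 * rng.standard_normal(100)

A = np.column_stack([log_co2, log_pop, np.ones(100)])
coef, *_ = np.linalg.lstsq(A, temp, rcond=None)

corr = np.corrcoef(log_co2, log_pop)[0, 1]
combined = coef[0] + coef[1]  # the sum is well determined; the split is not
```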

  • Rob Dekker


    You mention :

    Below I will repeat their analysis but instead of a linear trend I use the time-evolution of the CO2 concentration,

    The more I look at that result (last 30 year trend 0.113 C/decade) the more confused I get as to how you obtained that number.

    All of the prior studies (Lean and Rind (2008), Forster and Rahmstorf (2011) and even the study you are commenting on Zhou and Tung (2012)) start from the global temperature record, and they try to eliminate natural forcings (volcanoes, ENSO, solar activity etc). Basically, they try to obtain the temperature trends that CANNOT be explained by natural forcings.

    None of these studies “use the time-evolution of the CO2 number” in their analysis, nor “replicated Figure 1b of Zhou and Tung with a pre-defined shape of CO2 concentration for the anthropogenic influence”.

    Of course, if you would use some sort of CO2 forcing as a ‘predictor’ in your regression analysis (trying to eliminate man-made forcings out of the global temp record, after you already eliminated all natural forcings), you should have ended up with a flat line.

    So what exactly have you done to produce your last figure (which shows a 0.113 C/decade trend over the past 30 years)?

  • Guido

    Going through the discussion from top to bottom:

    @Rob Dekker: for the conversion from CO2 concentration to radiative forcing please see for example the IPCC which is partly based on the work of Myhre et al. (1998)

    @Cees de Valk: in essence it is a what-if study. It is a poor man’s version of a more complicated physical model, I would say. What this blog post and also Jos’ results below show is that one has to be very careful with the assumptions that go into the regression. I disagree with the linear trend as used by Zhou and Tung, but I am also aware that the CO2 proxy I used isn’t perfect either. By the way, I did not fit a linear trend through the CO2 concentration (which would give the same result as Zhou and Tung) but used the CO2 concentration itself, which grows roughly exponentially.

    With regard to your comment “Would there be no long term trend without human effects?”: according to this study the answer is “maybe, but without an anthropogenic trend the fit does not compare well to the observations”. Of course you can get a long-term trend with these kinds of approaches, and here you can see what it looks like. The MLR simply boosts all input data that show a trend, which for example leaves you with volcanoes (more prevalent in the 2nd half of the 20th century) that warm, and a very high sensitivity to the sun. Partly it doesn’t make sense (volcanoes) and it for sure doesn’t agree with observations. With an anthropogenic trend the variability of the natural forcings is in the right direction (i.e., volcanoes cool the climate) and more in line with observations.

    @Hans – not sure what you are after. Humans combust fossil fuel emitting CO2 so of course there is a relation between humans and CO2 emissions. I can repeat the analysis with population density and get the same results. But humans are not a greenhouse gas, CO2 is. Of course it is just a statistical approach and one has to be careful with the results.

    @Rob Dekker: it looks like we are not that far off with the results actually. I used the same datasets as Zhou and Tung to best match their results, which includes this one for the AMO. I fully agree that the shape of the AMO is crucial. That is (besides the potential importance of the AMO) the take-home message from this exercise: it makes a big difference what you use as input data, so one should never base his/her opinion on one single article.

    @Rob Dekker: what software do you use? I did the analysis in Python and will post the code and the array of input data tomorrow on my website so you can have a look at it (I can’t access it from home). If I include the PDO (small effect) and limit myself to 1900-2011, then the trend is higher as well: 0.14 degrees C per decade for the anthropogenic impact. Basically the MLR gives less weight to the AMO because that is one of the only options to match the (uncertain) temperature record before 1900. But I have to do more work and talk to a few people who understand the AMO to figure out what makes most sense.

    In the end the values will be somewhere between Zhou & Tung on one hand and Foster & Rahmstorf on the other hand, depending on 1) the true impact of AMO, 2) the shape of the anthropogenic influence pre-defined in the MLR, and 3) the time period investigated and details about smoothing, monthly / annual data, lags, etc. Nothing is simple 🙂

  • Hans Erren

    @ Guido
    I was thinking of the repeated warnings by Roger Pielke Sr. that land use change is underexposed in climate models, and land use change is proportional to population growth. So if you find, purely by correlation, that CO2 has a sensitivity of 1.5 without taking land use into the equation, then you overestimate climate sensitivity. By how much is extremely difficult to establish, given the 99% correlation between CO2 and population.

  • Guido

    Hans – I see, thanks for explaining. I have spent quite a bit of time looking at albedo changes due to land use change, and it is a factor that can be more important for regional climate than higher GHG concentrations. On a global scale I don’t think there is a lot of evidence that it is a major player though.

    My thinking in choosing the CO2 concentration is that the other positive forcings (CH4, N2O, O3, halocarbons) would cancel out some of the negative forcings (aerosols, albedo). According to the IPCC they do on a global scale. But as you and Marcel have pointed out many times, the science is not settled on the aerosol effect, especially not the indirect effect.

  • Cees de Valk

    @Guido, thanks very much for your response.

    Just to clear up a few things:
    You wrote “I disagree with the linear trend as used by Zhou and Tung, but I am also aware that the CO2 proxy I used isn’t perfect either”. Now, CO2 may be a better representation of something proportional to the anthropogenic contribution to the radiative imbalance (its log would be better though, and if sensitivity were high you would expect a significant lag due to ocean warming etc.), but the OUTCOME of your regression does not (and cannot claim to) distinguish a CO2 effect from e.g. a long-term natural cycle giving rise to a positive trend over the last century. Because if that caused the trend in temperature, your regression on CO2 would also look like what has been measured. I.e., you can use the CO2 signal to match the observed increase, regardless of the true cause.

    Leaving CO2 (or any other clearly trending signal) out of your set of regressors, as in your figure, the temperature signal is fitted poorly. But that is trivial: it only shows that you can’t describe the clearly trending temperature as a linear combination of signals which do not (or hardly) exhibit such a trend. If you can really exclude a priori any other process possibly causing a century-long upward trend in temperature (natural internal variability on long time scales, effects of long-term solar variations through whatever mechanism, effects of land use and vegetation change, and of nearby heat sources and sinks on thermometers on land, bias in seawater temperature measurement corrections, and not to forget the unknown unknowns) then you would have a case for CO2 as an explanation of the data. It is not as easy as it seems.

    @Jos Hagelaars thank you too. About your comment
    “This is nonsense, physics tells us that CO2 has a warming effect, without feedbacks it is about 1.1 °C per doubling of the atmospheric concentration. Doing a regression with a lot a variables that have no physical meaning would be pure speculation.”

    I called it speculation not because CO2 would not be a sensible choice from a physical point of view, but because interpreting the results as a measure of the CO2 effect is unjustifiable from a statistical point of view. See my comments above. Of course I am not claiming you should throw in all kinds of arbitrary signals at the same time. The point I am trying to make is that even with the CO2 signal as input, such a regression does not prove much; it can only demonstrate that “the data do not falsify such a hypothesis”, but cannot be considered positive evidence.

  • Jos Hagelaars


    I repeated my analysis with the NOAA AMO data you used in your analysis and I get the same result as you do: 0.112 °C/decade for 1970-2010. This drops a little to 0.109 °C/decade when I add the Giss non-natural forcings without ghg’s (= aerosol effect + land-use change + black carbon + tropospheric ozone).

    Looking at your results and mine, your assumption that CO2 and the other human induced forcings (e.g. aerosols) almost cancel each other out is quite reasonable. I would not calculate any climate sensitivity from such a regression analysis, because it is unclear if the regression overestimates one or the other. Better build a climate model and base a determination of the climate sensitivity on the physics of the model, but that’s been done before.

    I do not agree that you can draw conclusions about the period 1940-1970 and about the remaining warming trend after 1970 when you use the NOAA AMO data. It is clear that these contain a part of the global warming signal, which is the same as the Y variable you want to regress. As I said before, even when the global warming signal is removed with de-regression, you can’t be sure that it can be done for 100%.

    If Zhou-Tung also used the NOAA AMO in their analysis together with a simple linear trend as the anthropogenic signal, I think their paper is flawed science, as Rob Dekker told us before. I agree with his last statement on the previous blog:
    The title of the previous post is wrong, and the question mark in the title of this post should be emphasized a lot.

    See van Oldenborgh et al. (2009) for a description of the KNMI AMO index.
    Trenberth et al. (2006) also described a method to create an AMO index without the global warming signal, because as they state: “To deal with purely Atlantic variability, it is highly desirable to remove the larger-scale global signal that is associated with global processes, and is thus related to global warming in recent decades.”
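    For what it is worth, the Trenberth-style index is simple to sketch in Python. This is a toy illustration with made-up series; the function and variable names (`amo_trenberth`, `natl_sst`, `global_sst`) are my own, not from any of the papers:

```python
import numpy as np

def amo_trenberth(natl_sst, global_sst):
    # AMO index in the spirit of Trenberth et al. (2006):
    # subtract the global mean SST anomaly from the North
    # Atlantic mean SST anomaly, so the global warming signal
    # is removed rather than just a linear trend.
    return np.asarray(natl_sst) - np.asarray(global_sst)

# Toy series: both share a warming trend; the North Atlantic
# additionally carries a multidecadal oscillation.
t = np.arange(120)
warming = 0.01 * t
oscillation = 0.2 * np.sin(2 * np.pi * t / 60.0)
index = amo_trenberth(warming + oscillation, warming)
# 'index' now equals the oscillation alone: the shared
# warming signal has cancelled out.
```

    With real data one would of course use observed North Atlantic and near-global mean SST anomalies instead of these synthetic series.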

    @Cees de Valk

    “natural internal variability on long time scales, effects of long-term solar variations through whatever mechanism, effects of land use and vegetation change, and of nearby heat sources and sinks on thermometers on land, bias in seawater temperature measurement corrections, and not to forget the unknown unknowns”

    Which long-term natural variability? As you probably know, the oceans have soaked up enormous amounts of energy since the 1950s, so if CO2 had played only a minor part in the recent global warming, the oceans would have had to lose energy to warm the atmosphere. This clearly has not happened.
    Which long-term solar variations? Since at least the 1940s, solar output has shown no trend at all.
    As for your heat sources et cetera: unsubstantiated claims. The planet has warmed, see the top figure on this page; the satellites are in quite good agreement with the surface temperature measurements.
    I would not resort to unknown unknowns when normal physics can explain a very large part of what is happening with Earth's climate.

  • Rob Dekker

    Jos Hagelaars said

    your assumption that CO2 and the other human induced forcings (e.g. aerosols) almost cancel each other out is quite reasonable

    That’s not really what Guido is saying. He states :

    neglecting other greenhouse gases and aerosols which to some degree level out.

    I think that Guido is correct on this.
    Other (non-CO2) greenhouse gases (methane, halocarbons, ozone, etc.) may indeed level out with aerosols to some extent.
    But it’s very clear that CO2 is the dominant greenhouse gas which does not “cancel out” other human induced forcings.
    I assume your statement was a typo…

  • Rob Dekker

    Guido said

    I did the analysis in Python and will post the code and the array of input data tomorrow on my website so you can have a look at it

    Looking forward to that Python code, Guido. But before we start debugging your code, it would be nice to understand what it is supposed to do. Since you are a scientist, could you please explain how you introduced the “pre-defined shape of CO2 concentration for the anthropogenic influence” into the multiple linear regression?

    In other words: what exactly have you done to produce your last figure (which shows a 0.113 °C/decade trend over the past 30 years)?

  • Jos Hagelaars

    @Rob Dekker,
    You are absolutely right, I certainly did not mean that the effects of CO2 and aerosols cancel out.
    I noticed in these regression analyses that when you add the forcings of the other GHGs + [all the other non-natural/non-GHG] forcings (e.g. aerosols), the result is about the same as when you use only LN(CO2). This agrees with the IPCC picture and with what Guido said.
    Sorry about the confusion, my English needs to be improved a lot.

  • Guido

    @Rob Dekker – please see here for the Python code I used to get Figures 1a and 1b. To get the last graph just toggle lines 129 and 130. I found another mistake which led to an underestimate of the transient climate sensitivity; dt/dRF changed from 0.41 to 0.50 (or 1.8 degrees per doubling, transient). I will start exploring other AMO datasets when I find some time, thanks for the suggestions.
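    For readers checking the numbers: the step from dt/dRF = 0.50 to roughly 1.8 degrees per doubling follows from the standard logarithmic approximation of CO2 forcing. A sketch of the arithmetic, not the actual analysis code:

```python
import math

# Myhre et al. (1998) approximation: dF = 5.35 * ln(C/C0) W/m2,
# so a doubling of CO2 gives a forcing of:
forcing_per_doubling = 5.35 * math.log(2.0)  # ~3.71 W/m2

# Regression coefficient dt/dRF from the comment: ~0.50 degrees per W/m2
dt_per_forcing = 0.50

# Transient response per CO2 doubling:
tcr = dt_per_forcing * forcing_per_doubling  # ~1.85 degrees per doubling
```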

    @Cees de Valk – you are totally right: if one were to build a lot of delays and non-linearities into the system, one could probably match the temperature graph as well as the simple approach I took. I'd rather put my money on the simple explanation though 🙂

  • Rob Dekker

    Thanks for your Python code. It clarifies a lot.

    In fact, it confirms my concerns with the method you use, and explains why you are getting a lower ‘anthropogenic’ trend than Lean and Rind (2008) and Foster and Rahmstorf (2011) for the past 30 years.
    The problem, as it was in Zhou and Tung, is again (extreme) ‘contamination’ of the variables in MLR.

    Here is the problem :

    You did NOT change the AMO definition. You still use Zhou and Tung's flawed definition, which is heavily contaminated by (correlated with) the global temperature record. So the entire global temperature signal (including the non-linear anthropogenic signal) is still present in the AMO index you used.

    Instead, you added a second variable (predictor), which is your estimate of the influence on temperature based on CO2 concentration.

    So, now you are faced with TWO variables, which BOTH contain the non-linear anthropogenic effect.

    When faced with two variables, which share a common term, MLR will attempt to balance between the two variables based on the differences between them. This means that the common term (non-linear CO2 sensitivity) will be split between both variables.

    In your case, variable one (linearly de-trended North Atlantic SSTs) and variable two (your model of the transient temperature response to CO2) show more or less equal variance, and thus MLR will more or less equally distribute the non-linear part of the CO2 signal over AMO and CO2.

    Thus, the trend line you extracted (not just the last 30 years but all across the record) in your last figure is more or less half of the actual influence of CO2.
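    The splitting described above is easy to reproduce with synthetic data. This is a toy illustration of the statistical effect only, not Guido's actual regression; all series here are made up:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# A shared non-linear 'anthropogenic' signal present in BOTH predictors
common = np.linspace(0.0, 1.0, n) ** 2
co2_pred = common + 0.05 * rng.standard_normal(n)  # 'CO2' predictor
amo_pred = common + 0.05 * rng.standard_normal(n)  # 'contaminated AMO' predictor
temp = common + 0.01 * rng.standard_normal(n)      # 'global temperature'

# Ordinary least squares with an intercept
X = np.column_stack([np.ones(n), co2_pred, amo_pred])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
# beta[1] and beta[2] each come out near 0.5: the shared signal is
# split between the two predictors, halving the apparent 'CO2' trend.
```

    With an uncontaminated AMO predictor (the shared signal removed before regression) the CO2 coefficient would come out near 1 instead.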

    What you HAVE TO do to avoid ‘contamination’ and flawed results from MLR is to remove the global temperature signal from the AMO definition BEFORE you enter AMO as an ‘independent’ variable into MLR.

    Jos Hagelaars recognized that when he stated “It is imperative that an AMO index is used where the global warming signal is carefully removed”, and I pointed out before that many scientists recognized this issue long ago and developed methods to deal with it.

    In conclusion, you did a little better than Zhou and Tung (2012) (who obtained the very trend line they introduced through their own AMO definition), but your results still suffer from severe ‘contamination’, which is why you are getting unrealistically low trend lines, inconsistent with the results of Lean and Rind (2008) and Foster and Rahmstorf (2011).

    Finally, it is important to note that neither PDO nor aerosol forcing nor anything else but the AMO definition is needed to show the flaws. Your method (and Zhou and Tung's method) is fundamentally flawed simply because of contamination of your AMO definition by the global temperature signal.

  • Cees de Valk

    @Guido: So you want to bet! I like betting, but only on verifiable future outcomes, not on an explanation for something. For explaining global average temperature, I prefer solid statistical analysis with a carefully motivated choice of explanatory variables. More evidence: the number of pirates seems to be somewhat on the increase again, and guess what happened recently?

  • Guido

    @Rob Dekker, let’s recap:

    – The 1979-2010 trend in the temperature datasets: 0.16 degrees per decade
    – Foster and Rahmstorf say this is an underestimate because natural fluctuations lower the trend in the latter part of the record; the anthropogenic trend should be 0.18 degrees per decade
    – Zhou and Tung used the AMO as an additional natural fluctuation, focused on a much longer timeframe, and found 0.07 degrees per decade
    – I think that was due to at least one flawed assumption and repeated the analysis with a more sensible (but not perfect) assumption, boosting the trend again to 0.11 degrees per decade. Thanks to Marcel I could post this here. I also mention some of the uncertainties you bring up.

    Now, the difference between 0.11 and 0.18 is for a large part due to the time period considered and the AMO. If you subtract the global SST signal from the AMO, as you or Jos suggested earlier, you may get a trend close to 0.18. But I don't think that makes sense, and if I am not mistaken the folks at KNMI think the same. Again, simple detrending is not perfect either, but taking the oscillation out of the AMO is strange at best. Also keep in mind that my number for transient climate sensitivity (1.8) is very much in line with the average climate model outcome.

    @Cees de Valk:
    Temperature over 2011-2020 minus 2001-2010 above 0.10: you give me 100 euro to be donated to charity (my choice)
    In between: we go drink a beer and discuss the current state of climate science

    Temperature based on average of most recent versions of GISS, UAH, HADCRUT.

    How about that?

    Now, the Bounty unfortunately sank so keep in mind we lost a few pirates!

  • Guido

    @Cees de Valk: apologies, that was not an honest bet, here we go again without the special characters:

    Temperature over 2011-2020 minus 2001-2010 above 0.10 degrees C: you give me 100 euro to be donated to charity (my choice)
    Temperature over 2011-2020 minus 2001-2010 below 0.05 degrees C: I give you 100 euro to be donated to charity (your choice)

    In between: we go drink a beer and discuss the current state of climate science

    Temperature based on average of most recent versions of GISS, UAH, HADCRUT.

    How about that?

    Now, the Bounty unfortunately sank so keep in mind we lost a few pirates!
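    Just to make the terms of Guido's bet explicit in code, here is how it would be settled. The helper `settle_bet` is hypothetical; its input is the GISS/UAH/HadCRUT average of annual mean temperatures that the bet specifies:

```python
def settle_bet(annual_means):
    # annual_means: dict mapping year -> annual mean temperature
    # anomaly (average of GISS, UAH and HadCRUT, per the bet terms)
    first = sum(annual_means[y] for y in range(2001, 2011)) / 10.0
    second = sum(annual_means[y] for y in range(2011, 2021)) / 10.0
    diff = second - first
    if diff > 0.10:
        return "Cees donates 100 euro to Guido's charity"
    if diff < 0.05:
        return "Guido donates 100 euro to Cees' charity"
    return "beer and a discussion of climate science"
```

    For example, decadal means of 0.40 and 0.55 give a difference of 0.15 and a donation to Guido's charity; a difference of 0.07 means beer.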

  • […] That's quite a prediction — I admire scientists willing to go out on a limb — and wouldn't it be fortunate if that prediction proved true. But before you get too carried away, you should keep a few things in mind. In the first place, both Tung and Zhou's and Li et al.'s works are largely based on statistical manipulation of temperature records to show correlations — there's not a whole lot of explanation by way of physical mechanisms. And don't forget that correlations establish a potential connection between phenomena, not a cause-and-effect relationship. Moreover, there are a number of scientists who find potential flaws in Tung and Zhou's analysis — see here and here. […]

  • Guido van der Werf

    A bit late, but I think this is a nice example of how a blog discussion can help in writing a scientific paper: the comments of everyone above have helped me improve a paper that is now published in Earth System Dynamics. Thanks!

  • Marcel Crok

    Congratulations Guido, and welcome back from Indonesia!

  • The big upsurge in industrial production after World War Two was almost entirely in the older industrial countries: North America, Western Europe and (to a lesser extent) the Communist countries. Industrial production outside these areas began to take off in the late 1950s (the four small Asian “tigers”) and really got under way during the 1970s and 1980s in India, China and many others. Aerosols (sulfur dioxide and particulates), visible in clouds and smog, certainly increased after 1945, until air pollution policies began to be implemented in the late 1950s (Britain 1956). If these were the cause of falling temperatures, offsetting, so it is claimed, the simultaneous rise in CO2 from anthropogenic sources, should not the cooling have been largely confined to the industrial areas, especially cities, in the Northern Hemisphere? In fact it was most marked in the Arctic and also took place, I believe, in the Southern Hemisphere. Should we not, as a matter of routine, be given long-term temperature trends by the main regions, distinguishing industrial areas from others and the Northern from the Southern Hemisphere, and not just global averages? See my blog “the uncommitted social scientist” on “statistical fallacies in the global warming debate”.
    I think we are back to looking for “natural” causes of the 1940-1970 cooling.
