Credit where credit is due; more on homogenisation

One of the basic principles of blogging is to give credit when credit is due. I always try to mention the source of my information, and most blogs do this using an abbreviation like h/t (hat tip). This week one of my own blog posts was used, without attribution, as the source for an article on The Hockey Schtick. That blog then alerted WUWT?, which broadcast the same news quite loudly, after which Steve McIntyre and Bishop Hill picked it up as well.

Now this is all fine; being given credit for my blog post is not the most important thing in the world. However, a lot of noise in the blogosphere could have been saved if WUWT? had seen my original post. The Hockey Schtick was writing about a “paper” presented at the EGU, which then became a “peer reviewed paper” on WUWT? (later corrected), which in turn irritated fellow Dutchman Victor Venema, who happens to work on the very topic of my blog post: the homogenisation of temperature data.

A short history. In April I attended the EGU conference for the first time. I had lunch with Demetris Koutsoyiannis, with whom I spoke extensively in 2008 during the research phase of my Dutch book De Staat van het Klimaat. His work on long-term persistence is very interesting, as are his analyses of climate models. I later also interviewed him for the Dutch magazine De Ingenieur. He invited me to attend the session in which his student Eva Steirou gave the now well-known presentation. He sent me a link to the presentation shortly after the EGU, but only this week did I get around to writing something about it.

It’s a pity that the posts on WUWT? and Climate Audit generated so much irritation, part of it caused by the misrepresentation of the status of the work (a presentation during a conference and not yet a peer reviewed paper). The topic itself is pretty important as the recent work of Venema shows.

Venema reacted on his blog. Some excerpts:

I have never seen an abstract that was rejected at EGU; rejection rates are in the order of a few percent and these are typically empty or double abstracts, due to technical problems during submission. It would have been better if this abstract had been sent to the homogenisation session at EGU. This would have fitted the topic much better and would have allowed for a more objective appraisal of this work. Had I been EGU convener of the homogenization session, I would probably have accepted the abstract, but given it a poster because the errors signal inexperience with the topic, and I would have talked to them at the poster.

Now this reaction is a bit arrogant. I agree, though, that the homogenisation session would have been the better place for the Steirou presentation; Steirou presented in the session that was organised by Koutsoyiannis himself. I will ask Koutsoyiannis to comment on this.

Now what are the major errors that Venema is talking about?

The first statement cited by Anthony Watts is from the slides:

of 67% of the weather stations examined, questionable adjustments were made to raw data that resulted in: “Increased positive trends, decreased negative trends, or changed negative trends to positive,” whereas “the expected proportions would be 1/2 (50%).”

Venema responds:

This is plainly wrong. You would not expect the proportions to be 1/2; inhomogeneities can have a typical sign, e.g. when an entire network changes from North wall measurements (typical in the 19th century) to fully closed double-Louvre Stevenson screens in the gardens, or from a screen that is open to the North or the bottom (Wild, Pagoda, Montsouri) to a Stevenson screen, or from a Stevenson screen to an automatic weather station, as currently happens to save labor. The UHI produces a bias in the series; thus if you remove the UHI, the homogenization adjustments will have a bias. There was a move from stations in cities to typically cooler airports, which produces a bias, and again this means you would not expect the proportions to be 1/2. Etc. See e.g. the papers by Böhm et al. (2001), Menne et al. (2010), Brunetti et al. (2006), Begert et al. (2005), or my recent posts on homogenization. Also the change from roof precipitation measurements to near-ground precipitation measurements causes a bias (Auer et al., 2005).

These are some interesting arguments. Anthony Watts has frequently suggested that part of the warming could be due to the growth of airports. Venema turns this argument around: the relocation of stations from cities to airports would actually lead to a cooling bias. I would be interested to see some examples.
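To see why a one-sided network change breaks the 50% expectation, here is a toy simulation (a sketch of my own; the assumed cooling step of about −0.4 °C for a city-to-airport move is purely illustrative, not a measured value):

    import numpy as np

    rng = np.random.default_rng(42)
    n_stations, n_years = 100, 100
    trend_raising = 0

    for _ in range(n_stations):
        series = rng.normal(0.0, 0.5, n_years)   # interannual noise (degC)
        move = rng.integers(20, 80)              # year of the city-to-airport move
        series[move:] += rng.normal(-0.4, 0.1)   # assumed cooling step at the new site
        # A correct adjustment must warm the post-move segment to undo the
        # artificial cooling, so it raises the station trend.
        trend_raising += series[move:].mean() < series[:move].mean()

    print(f"adjustments that would raise the trend: {trend_raising / n_stations:.0%}")
    # prints ~100%, not the 50% the slide assumes for unbiased adjustments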

The second problem that Venema has with Steirou's talk is in this paragraph:

“homogenization practices used until today are mainly statistical, not well justified by experiments, and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic times series are regarded as errors and are adjusted.”

He defends homogenisation by writing:

The WMO recommendation is to first homogenize climate data using parallel measurements, but also to perform statistical homogenization as one is never sure that all inhomogeneities are recorded in the meta data of the station.

I was involved in the COST Action HOME, which has just finished a blind numerical experiment that justified statistical homogenization and clearly showed that statistical homogenization improves the quality of temperature data (Venema et al., 2012). Many validation studies of homogenization algorithms have been published before (see references in Venema et al., 2012).

In a different approach, the statistical homogenization methods were also validated using breaks known from metadata in Switzerland (Kuglitsch, 2012). The size of the biased inhomogeneities is also in accordance with numerous experiments with parallel measurements; see Böhm et al. (2010) and Brunet et al. (2010) and references therein.

Definitely, it would be good to be able to homogenize data using parallel measurements more often. Unfortunately, it is often simply not possible to perform parallel measurements, because the need for the change is not known several years in advance. Thus statistical homogenization will always be needed as well and, as the validation studies show, it produces good results and makes the trends in temperature series more reliable.

Based on my conversations with Koutsoyiannis, I think he and Venema more or less agree on this. As I said, Koutsoyiannis is very interested in long-term persistence and has found this kind of ‘behaviour’ in most if not all climatic time series. In the presentation they show that SNHT (the homogenisation procedure they investigate) has a tendency to correct time series with long-term persistence when no correction is needed. That is, the method detects an inhomogeneity that isn’t there but is simply part of the natural behaviour of a series with long-term persistence.
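The effect they describe can be reproduced with a short Monte Carlo experiment (my own sketch, not code from the presentation): generate homogeneous fractional Gaussian noise with Hurst coefficient H = 0.8 and apply the classical single-break SNHT statistic, using a critical value calibrated on white noise. The false-alarm rate then comes out well above the nominal 5%:

    import numpy as np

    def fgn(n, hurst, rng):
        """Homogeneous fractional Gaussian noise via circulant embedding (Davies-Harte)."""
        k = np.arange(n + 1)
        g = 0.5 * ((k + 1.0)**(2*hurst) - 2.0*k**(2*hurst) + np.abs(k - 1.0)**(2*hurst))
        c = np.concatenate([g, g[-2:0:-1]])            # first row of the circulant
        lam = np.clip(np.fft.fft(c).real, 0.0, None)   # clip round-off negatives
        m = len(c)
        z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
        return np.real(np.fft.fft(np.sqrt(lam) * z))[:n] / np.sqrt(m)

    def snht_max(x):
        """Maximum SNHT statistic T0 over all candidate break positions."""
        n = len(x)
        z = (x - x.mean()) / x.std()
        k = np.arange(1, n)
        s = np.cumsum(z)[:-1]                # sum of z[0:k] for k = 1..n-1
        z1, z2 = s / k, -s / (n - k)         # left/right means (z sums to ~0)
        return np.max(k * z1**2 + (n - k) * z2**2)

    rng = np.random.default_rng(0)
    n, trials = 100, 2000

    # 5% critical value of T0, calibrated on the white-noise null
    crit = np.quantile([snht_max(rng.standard_normal(n)) for _ in range(trials)], 0.95)

    # The same test applied to *homogeneous* series with long-term persistence
    alarms = np.mean([snht_max(fgn(n, 0.8, rng)) > crit for _ in range(trials)])
    print(f"false-alarm rate for homogeneous fGn (H=0.8): {alarms:.0%} (nominal 5%)")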

This is maybe the most interesting observation in the whole presentation, and one that Venema should find very interesting as well. Much more work needs to be done, as Koutsoyiannis already wrote me in an email this week.

Koutsoyiannis is always in favor of transparency. Steven Mosher complained on one of the blogs that there is no station list available yet. I am convinced that Koutsoyiannis is willing to make this available and would have done so if a peer reviewed paper had been published.

The input of Venema, who is very active in the homogenisation community, is of course very welcome. Hopefully this can lead to a more constructive exchange than has so far taken place on the various blogs.

 


27 comments to Credit where credit is due; more on homogenisation

  • Taken out of context, the citation of my post may sound arrogant. All I wanted to argue was that there is no real review for EGU abstracts, and certainly none for the slides presented at the conference. Anthony Watts giving this analysis the status of a “peer reviewed paper” misleads his readers, most of whom will not look at the slides and many of whom do not have the background to judge the quality of the work. Even a single peer reviewed paper should be taken with a grain of salt, and it is a pity that the media mainly report on single articles, but that is another discussion.

    I am interested in long range dependence (LRD); the first paper I wrote about homogenisation was on this topic. Together with Henning Rust and Olivier Mestre (Rust et al., 2008), I have written a paper on homogenisation and long range dependence, and we found that homogenisation did not remove LRD; more precisely, the correction method did not reduce the Hurst coefficient. Homogenisation does remove LRD in the difference time series, but not in the station data itself. We did not test the detection part of the homogenisation algorithms, as we did not expect any problems there and it would have been much more work.

    The influence of homogenisation on the Hurst coefficient can be tested on the HOME benchmark dataset, which provides true data, inhomogeneous data and data homogenised with various homogenisation algorithms. In the current analysis of this dataset (Venema et al., 2012), we looked at the reproduction of secular trends and decadal variability, and the homogenisation algorithms generally improved these estimates. As the Hurst coefficient is related to the secular trends and long-term variability, it would be very surprising if these measures improved while the Hurst coefficient were as wrong as the presentation suggests.
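    For readers who want to check this themselves, here is a minimal sketch of the classic aggregated-variance estimator of the Hurst coefficient (a textbook method chosen for brevity, not the estimator used in Rust et al., 2008); applied to the true, inhomogeneous and homogenised versions of a HOME benchmark series, it would show directly whether an algorithm distorts H:

        import numpy as np

        def hurst_aggvar(x, scales=(2, 4, 8, 16, 32)):
            """Aggregated-variance estimate of the Hurst coefficient H.

            For long-range dependent data the variance of m-term block means
            decays like m**(2H - 2), so H follows from a log-log regression.
            """
            logm, logv = [], []
            for m in scales:
                nb = len(x) // m
                block_means = x[:nb * m].reshape(nb, m).mean(axis=1)
                logm.append(np.log(m))
                logv.append(np.log(block_means.var(ddof=1)))
            slope, _ = np.polyfit(logm, logv, 1)
            return 1.0 + slope / 2.0

        # Sanity check: white noise has H = 0.5
        print(hurst_aggvar(np.random.default_rng(3).standard_normal(5000)))  # ~0.5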

    Unfortunately, the slides of Koutsoyiannis provide too little information to understand the differences.

    Rust, H.W., O. Mestre, and V.K.C. Venema. Less jumps, less memory: homogenized temperature records and long memory. JGR-Atmospheres, 113, D19110, doi:10.1029/2008JD009919, 2008.

    Venema, V., O. Mestre, E. Aguilar, I. Auer, J.A. Guijarro, P. Domonkos, G. Vertacnik, T. Szentimrey, P. Stepanek, P. Zahradnicek, J. Viarre, G. Müller-Westermeier, M. Lakatos, C.N. Williams, M.J. Menne, R. Lindau, D. Rasol, E. Rustemeier, K. Kolokythas, T. Marinova, L. Andresen, F. Acquaotta, S. Fratianni, S. Cheval, M. Klancar, M. Brunetti, Ch. Gruber, M. Prohom Duran, T. Likso, P. Esteban, Th. Brandsma. Benchmarking homogenization algorithms for monthly data. Climate of the Past, 8, pp. 89-115, doi:10.5194/cp-8-89-2012, 2012.

  • Marcel Crok

    Victor, thanks for commenting here. Koutsoyiannis is planning to write a guest post in a few days.
    Marcel

  • The relocation of stations from cities to airports would actually lead to a cooling bias. I would be interested to see some examples.

    Didn’t Menne et al. show that poorly sited stations had a cool bias (see for example this article)?

  • Oh, and BTW, what exactly was arrogant in that first quote by Venema? Which part?

  • Marcel Crok

    Neven, yes, Fall et al. :) showed a cool bias in Tmax at the poorly sited stations and a warm bias in Tmin at the same sites. The biases cancelled out and therefore the effect on Tmean was minimal. As far as I know, neither the Fall et al. nor the Menne et al. paper explains why there is this cool bias.

  • Marcel Crok

    Neven, the presentation was given by a student, but you can be sure that Koutsoyiannis has checked the results. He is a very senior and experienced scientist. Of course you can disagree with the conclusions, but to say based on the abstract that this is not interesting enough for a 15-minute talk is a bit arrogant. I think Victor was so annoyed by all the claims at WUWT? that it was “peer reviewed” that he was a bit too harsh on Steirou/Koutsoyiannis. Their approach of selecting 163 long and relatively complete series is interesting and deserves some serious discussion, as hopefully will take place here and elsewhere in the coming weeks.
    Marcel

  • He is a very senior and experienced scientist.

    Experienced when it comes to homogenisation of temperature data? I thought he was an oceanographer or something.

    Their approach of selecting 163 long and relatively complete series is interesting and deserves some serious discussion, as hopefully will take place here and elsewhere in the coming weeks.

    I’m not sure what this is about exactly, but my first impression is that we have some people who imply something is wrong with temperature data, but chances are that their lack of experience/knowledge is at the root of the perceived problem. This happens a lot in the land of skeptics (real and fake ones). But of course, the debate must not move on until every single one of them has had as much attention as he/she clamours for.

  • Steven Mosher

    You can see an example of a cool bias from a station relocation to an airport on my site.

    http://stevemosher.wordpress.com/2012/03/22/metadata-dubuque-and-uhi/

    It’s part of a longer study I’ve been doing on cooling stations in the US. Using unadjusted data from Berkeley Earth, I’m isolating those stations with long, complete records that have statistically meaningful negative trends. Many are in the US. Looking further at them, a good number look to have discontinuities that correspond exactly with a move from a city to an airport.

    In the example I present you can see why the airport is cooler. I look at the “built area”, the proximity of water, the land surface temperature and cloudiness. Oh ya, and albedo.

    I suppose when time permits I could finish this, but in general, if Dr. K is looking at long stations in the US and comparing unadjusted versus adjusted data, he has to be very careful about the large number of sites where there is a discontinuity due to the station moving BUT keeping the same number in GHCN.

  • Steven Mosher

    Marcel.

    Menne has explained the bias in the diurnal trend found by Fall et al. Simply listen to the presentation given at AMS (I think Christy or Spencer gave it); Menne is in the audience and explains. As I recall it has to do with how the series are adjusted for the MMTS discontinuity. I could ask Matt, but I don’t think he will be writing anything up. Hmm, sorry, I don’t have a link. In any case, the original work done by Leroy on microsite bias only suggested a small bias (0.1 °C) in the mean. The principal effect is an increase in variance: basically, microsite bias gives you higher highs and lower lows, but the bias to Tave is small and positive, ~0.1 °C.

  • The bias in the US diurnal range arises mainly from the transition to MMTS from Stevenson screens. This occurred over approximately 5 years in the late 1980s. It was confirmed by several side-by-side field studies. Maximum temperatures recorded by MMTS sensors are systematically cooler and minimum temperatures systematically warmer. It is also discussed in a recent paper (available outside a paywall) linked from http://surfacetemperatures.blogspot.com/2012/01/benchmarking-and-assessment-applied-to.html. This goes into some detail about the known and suspected issues with the US record and undertakes a network-wide benchmarking of the pairwise homogenization algorithm NCDC uses.

    I find the discussion that has accrued across WUWT, ClimateAudit etc. depressing, because the charges are by and large specious and not related to what is actually happening in the real world. Until that is bridged, it is hard to see how further progress can be made. That we should do side-by-side comparisons – to the extent possible without a Back to the Future DeLorean, these have been done. That we should involve statisticians and measurement scientists – well, look at the constitution of the International Surface Temperature Initiative or the various efforts that have been made to give talks at statistical and metrological meetings over the last few years. That we should do benchmarking – look no further than the COST action that Victor was involved in or the paper linked above. That we should do a better job on the raw data holdings – take a look at all the work going on to create a new international databank holding, documented in the databank area of http://www.surfacetemperatures.org.

    There are multiple ways to engage – productively – in these various activities. We’d like to see more people create their own products tackling the issue of homogenization starting from the same raw data and running against consistent benchmarks. That way we may start to lift the fog of uncertainty. We’d like leads to data we have not yet recovered, or for people to help digitize the millions of images never digitized and therefore inaccessible to scientific enquiry. But it really does need the discussion and engagement to start from a position of knowledge of what the state of the art is and what the real rather than imagined problems are. Otherwise the discussion ends up being pointless and self-defeating.

  • That’s one cool website and effort, Peter! I didn’t even know it existed.

    BTW, I’ve been thinking about it some more. It is not arrogant at all for an expert to state that someone is making rookie errors. What is arrogant is for a rookie to barge into an area in which he/she has no expertise and come up with all kinds of far-fetched conclusions, which then get propagated by people who should know better – for instance, because they tried to do that themselves and got burned, such as Watts and McKitrick.

    Acknowledge the errors due to inexperience, make more efforts to learn and study before concluding, and stop playing the victim card.

  • Spence_UK

    Marcel, thanks for posting this and I look forward to reading Dr. Koutsoyiannis’ response.

    I’m slightly taken aback by Peter Thorne’s comment, though.

    We’d like to see more people create their own products tackling the issue of homogenization starting from the same raw data and running against consistent benchmarks. That way we may start to lift the fog of uncertainty.

    Really? The best way to lift the fog of uncertainty is to have lots of people using different ad hoc, sui generis statistical methods which are poorly understood instead of having a small number of people using well understood methods?

    I guess this is the “never mind the quality, feel the width” school of statistical analysis.

    If you were really interested in lifting the fog of uncertainty you would:
    – Try to have a small number of well understood statistical methods
    – Publish turnkey code and data for these methods – and criticise Menne for refusing to provide McIntyre with homogenization code
    – Publish methods in the statistical literature as well as climatology literature along with published code to allow statisticians to weigh in with criticisms

    The third step here is important. Chatting to statisticians at conferences is not going to provide the thorough testing of your assumptions and methods that you need.

    However, I’d like to thank Dr Venema for his contributions from which I have learned a few new things, plus his explicit support for openness.

    Neven: Dr Koutsoyiannis is not an oceanographer, but a hydrologist who has a lot of experience working with hydroclimate statistical methods and data sets – that does include both instrumental and proxy temperature data, although I am unaware of any prior publications specifically on homogenization. Dr Koutsoyiannis’ work tends to focus on the impact of long-term persistence in hydroclimatological indicators on applied statistical methods – which I suspect was intended to be the main thrust of the homogenization paper (which was missed by most in the various discussions).

  • TINSTAAFL

    “Experienced when it comes to homogenisation of temperature data? I thought he was an oceanographer or something.”

    And your credentials in the homogenisation of temperature data are exactly what?

    “I’m not sure what this is about exactly, but my first impression is that we have some people who imply something is wrong with temperature data”

    “Some people imply something” is hardly scientific, so what’s your expertise on this, really?

  • Spence_UK,

    1. Define small number. My definition of a small number is at least ten distinct methods, so we can separate the wheat from the chaff; run on the same raw data and consistent benchmarks. And why do expert statisticians have a monopoly on this? Why is your insight any less valuable a priori? I don’t get this argument. If we provide the robust testing environment of a single raw source and a set of consistent benchmarks, you can post facto sort wheat from chaff, and by doing it twenty or thirty different ways you’d learn something about the strengths and weaknesses and might get substantially closer to the one true climate trajectory as a result. The fundamental problem set is not some nice, neat, well posed, well constrained problem with an a priori knowable and verifiable right answer. Or do you just not wish a priori to know the answer? That just seems silly from whatever side of the aisle you approach it. And how come this approach and the Initiative are endorsed by WMO (meteorologists), BIPM (metrologists) and ISI (statisticians) if it’s such a duffer?

    2. You mean the code that is publicly available from NCDC and has been for several years? The criticism is not warranted and is a zombie argument. Period. See for example the efforts of the Climate Code Foundation last summer to convert this to Python. Could they have done this if the code were not available?

    3. Chatting to statisticians at conferences? That’ll be why we have an article in press in Significance? The initiative sponsored by ISI? Why we have engaged e.g. SAMSI and NIST on the issue?

    This semi-uninformed criticism is precisely why it is so hard to engage productively with certain segments of the blogosphere. All that ends up happening is that it is seen as a needless resource sink and everyone ends up pissed with each other. That gets us precisely nowhere. Which is a pity, because numerous citizen scientists such as Steve Mosher, Nick Stokes, Zeke Hausfather, Tamino, Nick Barnes et al. have contributed substantially, and such contributions are key to the advances in our understanding. But if the majority of what we get is these kinds of zombie arguments, where, precisely, is the incentive to engage?

  • Dear Mr Spence,

    Spence: “I’m slightly taken aback by Peter Thorne’s comment, though.” … “Really? The best way to lift the fog of uncertainty is to have lots of people using different ad hoc, sui generis statistical methods which are poorly understood instead of having a small number of people using well understood methods?”

    Did you read anywhere that the methods were required to be ad hoc?

    Currently there is a limited number of people computing global mean temperature series. This is often wrongly used as a sign of a conspiracy by people who do not realize that the trends found in these global databases for specific regions need to fit other studies for those regions. Thus at least all climatologists at all weather services would have to be in on the conspiracy, because each and every one of them could show that the trend in the CRU/GISS/GHCN data for their country does not fit.

    Furthermore, there are currently not many homogenization algorithms that are able to homogenize a huge global dataset. Having more, and comparing their strengths and weaknesses, is an asset. There is currently no need to worry that the number of algorithms will be too large.

    surfacetemperatures.org will benchmark all homogenization algorithms and provide multiple datasets. Users can pick the data homogenized with the method they trust most, based on first principles and on its performance on the benchmark dataset.

    Spence: “- Publish turnkey code and data for these methods – and criticise Menne for refusing to provide McIntyre with homogenization code”

    surfacetemperatures.org will only work with open codes. The pairwise homogenisation algorithm of Menne can be downloaded from the NOAA ftp site. You can find the link to this and many other codes on http://www.homogenisation.org (click on software in the menu to the left).

    Spence: “- Publish methods in the statistical literature as well as climatology literature along with published code to allow statisticians to weigh in with criticisms”

    Statisticians are mainly interested in the problem of one break in white noise. That is a problem you can prove theorems about. There is little interest in the problem of multiple breaks, or in the relative homogenization approach with an inhomogeneous reference that is needed for climatological applications. During the 11th International Meeting on Statistical Climatology in Edinburgh, Scotland, there was a session on homogenization. As far as I could tell there were no statisticians in the room, though there were plenty at the meeting. This is a pity; homogenization is a beautiful statistical problem. The interest of statisticians may be increasing: Philippe Naveau just wrote an article, and Robert Lund is working on the multiple-break-point problem and is a member of the Surface Temperature Initiative. Let’s hope this gets better. Cooperation between climatologists and statisticians is a good thing, but expecting climatologists to publish in statistical journals is about as productive as physicists publishing in sociological journals. Every group has its own language and its own problems of interest.
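    For readers unfamiliar with the relative approach, a toy example (invented numbers of my own, not any published algorithm) shows why it matters: subtracting a well-correlated neighbour removes the shared regional signal, including its natural long-term variability, so only the station-specific break remains in the difference series:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 120
        regional = np.cumsum(rng.normal(0.0, 0.15, n))  # shared, slowly varying climate
        candidate = regional + rng.normal(0.0, 0.3, n)
        candidate[70:] += 0.8                           # break in the candidate only
        reference = regional + rng.normal(0.0, 0.3, n)  # homogeneous neighbour

        def jump(x, t=70):
            """Difference of means after/before the candidate break point."""
            return x[t:].mean() - x[:t].mean()

        # The regional signal cancels in the difference series, isolating the break
        print(f"apparent jump, raw candidate : {jump(candidate):+.2f}")
        print(f"apparent jump, difference    : {jump(candidate - reference):+.2f}  (true: +0.80)")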

  • Spence_UK

    Peter, your response is absurd and completely departs from your original point.

    You claimed that you wanted to lift the “fog of uncertainty”. And you wanted many different approaches. You now say you want to sort the “wheat from the chaff”.

    Lots of different people doing different things is just more chaff. Period. If I thought the instrumental record were important, I would want to look at the wheat. In detail. I’m surprised you have so much difficulty in understanding that.

    Sorting the wheat from the chaff is where you engage statisticians.

    And no, the code was not online when McIntyre asked for it, and your re-framing of the question like a true politician is noted. Climategate changed a number of things, and the increased openness since then is a good thing. There is still much to achieve in this regard.

    And I’m glad you have one paper in press in a statistics journal. If you think that is anywhere near adequate, try again. At this rate, in ten years’ time, perhaps we will be back where we should have been ten years ago, had activist scientists not been at the helm of the good ship climate science.

    The best contributions from “citizen scientists” – like Mosher, Zeke, McIntyre, et al. – lie in looking at the supposed wheat in the literature. And often finding it not to be wheat at all. I’m not belittling their contributions, just noting you don’t understand where the real value in those contributions lies.

  • Mr Spence, thank you for your nice words on my openness, but not even I am going to waste any more of my precious lifetime answering you. Clean up your act.

  • And your credentials in the homogenisation of temperature data are exactly what?

    None, which is why I don’t do presentations at EGU, claiming that adjustments are responsible for half of the temperature rise of the past century or so, to be then happily quoted by Marcel, Hockey Schtick and Anthony Watts.

    And even if I did, I wouldn’t immediately play the victim card if people told me I made some basic errors due to my lack of expertise.

  • Anthony Watts

    Peter Thorne writes

    “The bias in the US diurnal range arises mainly from the transition to MMTS from Stevenson screens. This occurred over approximately 5 years in the late 1980s.”

    This is ridiculous, and indicates Mr. Thorne has no clue about what is actually going on with the US COOP Network. The conversion to MMTS is ongoing, well through the ’90s and into the 2000s, and has actually accelerated after 2007 due to my work showing shoddy siting issues with Stevenson screens, to which NOAA/NCDC reacted. In some cases, stations were closed because the MMTS placement would have been no better, or even worse.

    In Fall et al. 2011, we showed that the diurnal range has no century-scale trend.

    Marcel, I’ve made a note on WUWT and left an admonishing comment on the Hockey Schtick blog asking him to post a correction giving you credit. I always give credit where it is due, and had I known, you would have been cited. I agree, knowing of your original blog post would have saved much trouble and speculation.


  • Steven Mosher

    Folks,

    Matt Menne’s code was online from around the day he first published his paper, as I recall. It’s been a while since I downloaded it. I suppose if you look around you will find me pointing other folks to it.

    For folks who haven’t been following Dr. Thorne’s project, I highly recommend it.

    Many of us raised a stink (and said some nasty things) about the state of climate data and its accessibility.

    Guess what? Some folks saw past the nastiness and are actually working to improve the situation. A thank you would be in order. For those of us who actually asked for the data because we wanted to do something with it, the change has been great.

  • climate sceptic

    Menne’s code online? Where?

  • “Menne’s code”, the pair-wise homogenization algorithm, can be found under:

    ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/software/

    Many more homogenization algorithms can be found under:

    http://www.meteobal.com/climatol/DARE/#Homogenization_packages

  • climate sceptic

    Thank you. But that is not the currently used version, is it?
    Also that is USHCN, which is different (I think?) from the algorithm for GHCN.
    There is also a folder /pub/data/ghcn/v2/source/inhomog/
    with GHCN v2 codes. Again this is not the version they currently use but it might be useful.

  • I had a look at the article describing GHCNv3. According to Lawrimore et al. (2011), the homogenization algorithm for GHCNv3 is the same as for USHCNv2: “The quality control algorithms are a combination of algorithms applied in version 2 with others adapted from those used to QC the GHCN-Daily data set [Durre et al., 2010] and to produce the USHCN-Monthly version 2 data [Menne et al., 2009].”

    Thus the link in my previous comment is the homogenization code you were looking for, which was also tested in the HOME blind benchmarking study. Sorry for the confusion.

    Jay H. Lawrimore, Matthew J. Menne, Byron E. Gleason, Claude N. Williams, David B. Wuertz, Russell S. Vose, and Jared Rennie. An overview of the Global Historical Climatology Network monthly mean temperature data set, version 3. Journal of Geophysical Research, 116, D19121, 18 pp., doi:10.1029/2011JD016187, 2011. http://www.agu.org/journals/jd/jd1119/2011JD016187/

  • climate sceptic

    The way I interpret that sentence is that GHCNv3 is a combination of some things from GHCNv2, some things from GHCN-Daily, and some things from USHCNv2.

  • What is different is the quality control (QC) (outliers). If you want to be sure just send Menne a polite e-mail.
