“Waterboarding the Data”

That’s how Charlie Martin puts it.  He reviews, in detail, the notes an East Anglia programmer kept in HARRY_READ_ME.txt while trying to make sense of the CRU’s climate data and code.  The upshot: complete confusion, and the programmer can’t replicate the research team’s own published results!

I think there’s a good reason the CRU didn’t want to give their data to people trying to replicate their work.

It’s in such a mess that they can’t replicate their own results.

Now, it’s worth looking at this in some detail.  It’s not unusual that, in dealing with very large databases, one has to make corrections to the data.  Stations stop transmitting data for a time, and then resume; mechanical problems lead to ridiculous outliers; things break and have to be replaced; and so on.  Sometimes, problems are severe enough that stations have to be thrown out of the sample altogether.  But often problems seem relatively minor, and missing or bizarre data is extrapolated.  There are ways of doing this that are entirely respectable, and improve the quality of the results. (You really don’t want to work from data sets that show that the temperature in Norwich was zero for two weeks in August, or that the temperature in St. Andrews was 938° C one day in March.) Of course, such changes should be documented, and the method for extrapolating should be specified.
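To make the respectable version concrete, here is a minimal sketch of that kind of cleaning, with an invented station series and invented thresholds (my illustration, not CRU’s method): impossible readings are flagged, short gaps are filled by interpolation, and every change is logged.

# Illustrative sketch of defensible station-data cleaning; the series,
# thresholds, and method are invented for this example, not taken from CRU.
daily_temps = [14.2, 15.1, None, 13.8, 938.0, 14.9, 15.3]  # degrees C

PLAUSIBLE = (-60.0, 60.0)   # anything outside this range is a sensor fault
changelog = []              # every correction gets documented

# Step 1: flag physically impossible values as missing, and record why.
cleaned = []
for i, t in enumerate(daily_temps):
    if t is not None and not (PLAUSIBLE[0] <= t <= PLAUSIBLE[1]):
        changelog.append(f"day {i}: {t} C outside plausible range, set to missing")
        cleaned.append(None)
    else:
        cleaned.append(t)

# Step 2: fill single-day gaps by linear interpolation between neighbours,
# and document that too; longer gaps stay missing.
for i in range(1, len(cleaned) - 1):
    if cleaned[i] is None and cleaned[i - 1] is not None and cleaned[i + 1] is not None:
        cleaned[i] = (cleaned[i - 1] + cleaned[i + 1]) / 2.0
        changelog.append(f"day {i}: missing value interpolated from neighbours")

print(cleaned)
print("\n".join(changelog))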

The manipulation and confusion in the East Anglia data, however, go far beyond anything like this.  There are significant errors in the program; a sum of squares ends up negative!  A commenter has a nice joke:

[why does the sum-of-squares parameter OpTotSq go negative?!!]

Because, obviously, the climate data are imaginary.
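For what it’s worth, a negative sum of squares is also a classic floating-point failure: the one-pass “shortcut” formula, the sum of the squares minus n times the squared mean, subtracts two huge, nearly equal numbers and can come out negative when the data are large compared with their spread.  Whether that is what bit OpTotSq can’t be told from the notes; the sketch below (invented numbers, not the CRU code) just shows the mechanism.

# Illustrative only -- not the CRU code.  The one-pass "shortcut" formula for a
# sum of squared deviations can go negative when the values are large compared
# with their spread, because two nearly equal ~4e16 numbers get subtracted.
xs = [1e8 + d for d in (0.1, 0.2, 0.3, 0.4)]   # large values, tiny spread

n = len(xs)
mean = sum(xs) / n

# Two-pass formula: a sum of non-negative terms, so it can never go negative.
two_pass = sum((x - mean) ** 2 for x in xs)    # ~0.05, the right answer

# One-pass shortcut: algebraically identical, numerically disastrous here.
one_pass = sum(x * x for x in xs) - n * mean * mean

print(two_pass)   # a small positive number
print(one_pass)   # all significant digits lost: zero, garbage, or even negative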

In the end, the programmer simply plugs in the published 1901-1995 results, because he can’t recreate any of them from the data!  Near the end of his notes he comments:

I am seriously close to giving up, again. The history of this is so complex that I can’t get far enough into it before my head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions that I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more.

WMO is the World Meteorological Organization, whose data might reasonably be thought to be trustworthy.  But no; it was “corrected” for unspecified reasons in unspecified ways.

Here are some other excerpts (from Declan McCullagh):

I am seriously worried that our flagship gridded data product is produced by Delaunay triangulation – apparently linear as well. As far as I can see, this renders the station counts totally meaningless.

There are hundreds if not thousands of pairs of dummy stations, one with no WMO and one with, usually overlapping and with the same station name and very similar coordinates. I know it could be old and new stations, but why such large overlaps if that’s the case? Aarrggghhh! There truly is no end in sight… So, we can have a proper result, but only by including a load of garbage!

Knowing how long it takes to debug this suite – the experiment endeth here. The option (like all the anomdtb options) is totally undocumented so we’ll never know what we lost.
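On the Delaunay excerpt above: gridding by linear triangulation means each grid cell takes its value from just the three stations at the corners of the triangle that contains it, however many stations sit nearby, which is presumably why the per-cell station counts stop meaning much.  A minimal sketch of that kind of gridding, using scipy and invented station locations (this is not CRU’s anomdtb):

# Minimal sketch of gridding anomalies by linear interpolation on a Delaunay
# triangulation.  Stations and values are invented; this is not CRU's code.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

stations  = np.array([[0.0, 50.0], [2.0, 52.0], [-1.5, 53.5], [1.0, 55.0]])  # (lon, lat)
anomalies = np.array([0.3, 0.5, 0.1, 0.7])                                   # degrees C

# LinearNDInterpolator triangulates the station locations and interpolates
# linearly inside each triangle.
grid_value = LinearNDInterpolator(stations, anomalies)

# This grid-cell centre is valued from only the three stations at the corners
# of its enclosing triangle, no matter how dense the network is elsewhere.
print(grid_value(0.5, 52.0))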

There are other problems:

Programmer-written comments inserted into CRU’s Fortran code have drawn fire as well. The file briffa_sep98_d.pro says: “Apply a VERY ARTIFICAL correction for decline!!” and “APPLY ARTIFICIAL CORRECTION.” Another, quantify_tsdcal.pro, says: “Low pass filtering at century and longer time scales never gets rid of the trend – so eventually I start to scale down the 120-yr low pass time series to mimic the effect of removing/adding longer time scales!”
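The low-pass remark, at least, is easy to check in general terms: a long moving average smooths away short-period wiggles but preserves any long-term trend, so filtering alone never “gets rid of the trend.”  A toy demonstration with made-up numbers (nothing here is from quantify_tsdcal.pro):

# Toy demonstration that a crude low-pass filter (a long centered moving
# average) removes short-period wiggles but not a long-term trend.
# Purely illustrative; no CRU data or code is used.
import math
import random

random.seed(0)
years = range(300)
series = [0.002 * y + 0.2 * math.sin(2 * math.pi * y / 30) + random.gauss(0, 0.1)
          for y in years]   # slow trend + 30-year wiggles + noise

def moving_average(xs, window):
    # Centered moving average; windows shrink near the ends of the record.
    half = window // 2
    return [sum(xs[max(0, i - half):min(len(xs), i + half + 1)])
            / (min(len(xs), i + half + 1) - max(0, i - half))
            for i in range(len(xs))]

lowpass = moving_average(series, 121)   # roughly a 120-year low-pass

# The wiggles are gone, but the filtered series still rises across the record.
print(lowpass[20], lowpass[-20])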

I like Bob Bleck’s summary: “The emails prove that Mann-made global warming is real…we’ve just been spelling it wrong.”

3 thoughts on “Waterboarding the Data”

  1. On one hand, these climatologists claim they can measure the temperature from 1,000 years ago to within 0.1 °C by measuring tree rings; on the other hand, their own computer code cannot replicate actual temperature measurements from the last 100 years.

    I think we should demand a public display of how they come up with and publish their temperature measurements. It’s like a used-car salesman making his pitch while the tires leak air, the oil runs out of the engine, the car won’t start, and the bumpers are falling off.

    “Robust” just does not describe what our confidence level should be regarding these temperature measurements.

    Expose the code and bust the Anti-trust Cabal of Climatology.
    Shiny
    Ed
