Hiding the Decline: Part 1 – The Adventure Begins

From the CRU code file osborn-tree6/briffa_sep98_d.pro, used to prepare a graph purported to be of Northern Hemisphere temperatures and reconstructions.

;
; Apply a VERY ARTIFICAL correction for decline!!
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
;
yearlyadj=interpol(valadj,yrloc,timey)

This, people, is blatant data-cooking, with no pretense otherwise. It flattens a period of warm temperatures in the 1930s — see those negative coefficients? Then, later on, it applies a positive adjustment so you get a nice dramatic hockey stick at the end of the century.
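
For anyone who wants to see the numbers rather than eyeball the IDL, here is a minimal sketch in Python/NumPy (standing in for IDL, which most readers won't have) that reproduces the adjustment series. The 1881-1994 range for timey is my assumption; the real array comes from data files not included here.

import numpy as np

# The year grid and the scaled "fudge factor", as in the quoted code
yrloc = np.concatenate(([1400], np.arange(19) * 5.0 + 1904))   # 1400, 1904, 1909, ..., 1994
valadj = np.array([0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
                   0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75

assert len(yrloc) == len(valadj)             # the code's "Oooops!" sanity check

timey = np.arange(1881, 1995)                # assumed year axis
yearlyadj = np.interp(timey, yrloc, valadj)  # equivalent of interpol(valadj, yrloc, timey)

print(yearlyadj[(timey >= 1929) & (timey <= 1944)].round(3))  # the dip that flattens the 1930s
print(yearlyadj[timey >= 1974])                               # flat at 2.6 * 0.75 = 1.95 from 1974 on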

All you apologists weakly protesting that this is research business as usual and there are plausible explanations for everything in the emails? Sackcloth and ashes time for you. This isn’t just a smoking gun, it’s a siege cannon with the barrel still hot.

UPDATE2: The data is now scaled by 0.75. I think I interpreted the yrloc entry incorrectly last time, introducing an off-by-one. The 1400 point (which has the same value as the 1904 point) is omitted, as it confuses gnuplot. These are details; the basic hockey-stick shape is unaltered.

UPDATE3: Graphic is temporarily unavailable due to a server glitch. I’m contacting the site admins about this.

246 comments

  1. From the “I hate to be that guy” dept., a minor typo correction: Sackloth →
    Sackcloth

  2. Are you referring to the black curve on this temperature reconstructions graph? The primary sources are on the image description page. The black curve (CRU’s data) does seem to be cooler in the 40’s and hotter in the 2000’s than the other data sets…

  3. oops… the black curve is the instrument record, not a reconstruction… The blue curve might be the culprit, because it’s the one by Briffa published in 1998, but of course I’m just speculating.

  4. >How did you find this? Just grepped for ‘fudge factor’?

    There was a brief note about it in a comment on someone else’s blog, enough to clue me that I should grep -r for ARTIFICAL. I dusted off my Fortran and read the file. Whoever wrote the note had caught the significance of the negative coefficients but, oddly, didn’t notice (or didn’t mention) the much more blatant J-shaping near the end of the series.

  5. You know, I always knew all this was bs. Not because I had any data: Sometimes I can just smell the bs in the air.

    Good heuristic: any time you hear these kind of hysterical claims that make it sound like the world is coming to an end, they are lying.

    This whole thing is a scam to control our lives.

  6. Yeah, “Nothing unethical about a ‘trick'”, they say. How they consider themselves even remotely ethical (or credible) after circumventing FOIA requests and this sort of fudgery is beyond me.

    Thankfully, this is finally getting some press.

  7. Oh, yeah, account for the fact that some of them can actually spell. Grep for ARTIFICIAL also :)

  8. New post up on Data & Demagogues that covers a different aspect of this fracas. The skeptic’s side has its own instances of idiocy going on.

  9. I’ve been trying to puzzle out for myself what this code is actually trying to do. My IDL-fu is effectively non-existent, however…

    If you expand ESR’s original quote a little bit you get…


    plot,timey,comptemp(*,3),/nodata,$
    /xstyle,xrange=[1881,1994],xtitle='Year',$
    /ystyle,yrange=[-3,3],ytitle='Normalised anomalies',$
    ; title='Northern Hemisphere temperatures, MXD and corrected MXD'
    title='Northern Hemisphere temperatures and MXD reconstruction'
    ;
    yyy=reform(comptemp(*,2))
    ;mknormal,yyy,timey,refperiod=[1881,1940]
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=22
    yyy=reform(compmxd(*,2,1))
    ;mknormal,yyy,timey,refperiod=[1881,1940]
    ;
    ; Apply a VERY ARTIFICAL correction for decline!!
    ;
    yrloc=[1400,findgen(19)*5.+1904]
    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
    2.6,2.6,2.6]*0.75 ; fudge factor
    if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
    ;
    yearlyadj=interpol(valadj,yrloc,timey)
    ;
    ;filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
    ;oplot,timey,tslow,thick=5,color=20
    ;
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=21

    I read this as being responsible for plotting the ‘Northern Hemisphere temperatures and MXD reconstruction’.

    Note, however, the commented-out code. The way I’m reading this, any graph titled ‘Northern Hemisphere temperatures, MXD and corrected MXD’ with a thick red line released before the brown matter hit the whirly thing probably has cooked data. Likewise, if you see a thick blue line you might be OK.

    Where do I get the thick red line from?

    According to the IDL Reference Guide for IDL v5.4, thick=5 means 5 times normal thickness, and color=21 links to (I believe)


    def_1color,20,color='red'
    def_1color,21,color='blue'
    def_1color,22,color='black'

    from just above the code segment itself.

    However, there’s one thing that I’m not sure about without either being able to play with IDL or seeing the end graph. It does actually plot the uncooked data in black (oplot,timey,tslow,thick=5,color=22), so what’s the point of showing a cooked data line along with the uncooked data line? Not to mention that the uncommented version apparently plots the same data series twice.

  10. Bah… and if I’d just looked again at the lower plot call, what it was doing might’ve made sense.

    The red (cooked) line is labelled as ‘Northern Hemisphere MXD corrected for decline’. So if the line labelling matches the comment, I’d wonder how much of a smoking gun it really is.

    Decline in instrument accuracy perhaps?

  11. >Decline in instrument accuracy perhaps?

    Reminder: Here’s Phil Jones writing to Ray Bradley and friends: “I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd from 1961 for Keith’s to hide the decline”

    The output of this program may have been their check to see if a visualization of the cooked data wouldn’t look obviously bogus before they shopped it to the politicians and funding sources. That’s the only way I can think of to explain plotting both crocked and uncrocked datasets in the same visualization.

  12. I wonder what the odds are that Ian Harris gets thrown under the CRU bus: “We had NO idea the data was so awful…he should have consulted with us…” etc.

  13. >I wonder what the odds are that Ian Harris gets thrown under the CRU bus:

    Dangerous move. If he got backed into a corner, he might decide to rat out the cabal.

    I had already had the thought that, if this was an inside job, Harris is the most plausible candidate for being the leaker.

  14. @Darrencardinal: my personal heuristic is that if the solution to a proposed problem is the rollback of western civilization, the problem probably doesn’t exist.

    1. >if the solution to a proposed problem is the rollback of western civilization, the problem probably doesn’t exist.

      I have a closely related heuristic: any eco-related scare for which the prescription would result in a massive transfer of power to the political class is bogus.

  15. >Dangerous move. If he got backed into a corner, he might decide to rat out the cabal.

    Sure, but at that point it can be painted as a he said/she said situation, which would provide some cover that the MSM could utilize. Also, if he’s a true believer, which seems possible, he might just take the bullet.

    > I had already had the thought that, if this was an inside job, Harris is the most plausible candidate for being the leaker.

    What do we know about him, besides what little the CRU site says? According to http://www.cru.uea.ac.uk/cru/about/history/people.htm he’s been at the CRU since 1996. He does have his name on 3 papers from the CRU: (http://www.cru.uea.ac.uk/cru/pubs/byauthor/harris_ic.htm) concentrating in tree-ring interpretation, which matches his job role descriptions.

  16. > I have a closely related heuristic: any eco-related scare for which the
    > prescription would result in a massive transfer of power to the political
    > class is bogus.

    I would have thought that the ‘eco-related’ qualifier is a bit unnecessary there. You could apply that heuristic to almost every topic on your blog (admittedly the forge-scraper/data jailing link is a bit of a stretch).

  17. I’ve always preferred the term “finagle factor”, but to each his own. I had an instructor in college who often employed the “eraser factor”.

  18. Aaron Davies: my personal heuristic is that if the solution to a proposed problem is the rollback of western civilization, the problem probably doesn’t exist.

    Isn’t that just wishful thinking, in that you’re begging the question of whether or not critical and extraordinarily costly threats to civilization can exist in the first place?

  19. Wait just a second. Explain this to me like I’m 12. They didn’t even bother to fudge the data? They hard-coded a hockey stick carrier right into the program?!!

    ESR says: Yes. Yes, that’s exactly what they did.

  20. There’s way more – the archive is a target rich environment. It’s clear from a short reading that this has never been QA’ed at all – no design or code reviews, and no testing. It’s a hack, in the worst sense of the word.

    With trillions of dollars riding on it. No wonder they resisted the FOIA request.

    There’s good reason to suspect the data, as well as the code. Even ignoring urban heat island effects, it looks like “adjustments” made to the raw data may account for most (or possibly even all) of the 20th century’s warming. CRU conveniently “lost” the raw data – seems they didn’t have enough disk space, and nobody knew how to spin a backup tape. Or something.

  21. Edmund Burke—
    If an idiot were to tell the same story every day for a year, we would in the end believe him. Then we will defend our error as if it were our inheritance.

  22. Wouldn’t we have to know that this code was actually used for something, say a published paper, before it could be considered a smoking gun? Without that, it seems that the most you can say is that someone was possibly contemplating releasing distorted results.

    My question is how do you tie the code to actual published results?

  23. You may know that realclimate.org has comments on what appears to be this specific issue: you’re talking about a “VERY ARTIFICIAL correction” in “osborn-tree6/briffa_sep98_d.pro”; they mention at http://www.realclimate.org/?comments_popup=2019#comment-144081 a “briffa_sep98_e APPLIES A VERY ARTIFICIAL CORRECTION FOR DECLINE osborn-tree4” with realclimate.org’s usual bracketed response:

    [Response: All this is related to a single kind of proxy – maximum
    latewood density (MXD) whose problems have been discussed in the literature
    since 1998. If you want more variety in proxies, go to the NOAA Paleoclimate
    pages and start playing around. They have just set up a homogenous
    set of proxies that anyone can use to do reconstrucitons in any way
    they like. Knock yourself out. – gavin]

    If I understand correctly, he’s saying that what you found is indeed a VERY ARTIFICIAL CORRECTION on one of the ways that some trees can be used as “proxy thermometers”, one which has been openly discussed for a decade, where nobody understands why some tree data doesn’t match other tree data or the actual thermometer data where that’s available. This could be a “siege cannon with the barrel still hot” fact about conspiracies, or it could be an utterly unimportant fact about certain kinds of trees; it depends on how the correction was used. “Gavin” (that’s http://en.wikipedia.org/wiki/Gavin_Schmidt) says that all his code and data are open; as a programmer who made fun of climate models back in the 80s (I was an asst prof of computer glop at UDel at the time) but hasn’t done so lately, I’d be most interested to see interactions between the two of you.

  24. This isn’t just a smoking gun, it’s a siege cannon with the barrel still hot.

    They didn’t just cook the data; they marinated it for a week, put on a rub, laid it in the smoker for a day and a half, sliced it up, wrapped it in bacon, dipped it in batter, rolled it around in flour, and deep fried it.

  25. They didn’t just cook the data; they marinated it for a week, put on a rub, laid it in the smoker for a day and a half, sliced it up, wrapped it in bacon, dipped it in batter, rolled it around in flour, and deep fried it.

    They’re Brits. That still doesn’t compare with American barbecue!

  26. This is not some random crack from the outside. This is almost certainly an inside leak. 61 Mb is nothing. The probability that any 61 Mb of data, pulled off a file or email server, containing this much salient and inculpatory information is virtually nil. This data was selected by someone who knew what they were doing, what to look for and where to find it.

  27. “The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.” – H.L. Mencken

  28. What do we know about him, besides what little the CRU site says? According to http://www.cru.uea.ac.uk/cru/about/history/people.htm he’s been at the CRU since 1996. He does have his name on 3 papers from the CRU: (http://www.cru.uea.ac.uk/cru/pubs/byauthor/harris_ic.htm) concentrating in tree-ring interpretation, which matches his job role descriptions.

    It also matches the graph labels and the comments. MXD refers to maximum latewood density, which is a particular factor in studying tree growth rings. It seems this particular program is plotting a comparison between MXD-reconstructed temperatures and actual recorded temperatures.

    Also, @esr

    I dusted off my Fortran and read the file.

    That code isn’t any Fortran dialect I’m familiar with. I think JonB said something about IDL above, and it does, in fact, look vaguely IDLish.

    1. >That code isn’t any Fortran dialect I’m familiar with. I think JonB said something about IDL above, and it does, in fact, look vaguely IDLish.

      You’re right, but at first glance it looked just enough like Fortran 77 that I assumed it must be some odd variant of same. Doesn’t matter; whatever it is, it’s quite readable.

      And, in fact, it appears I wasn’t even entirely wrong. Reading the IDL docs, it looks like the language was designed to be least surprising to scientific Fortran programmers.

  29. One of my metrics:

    If a paper is submitted by a scientist who’s listed as the “Chief Scientist” of a Washington DC advocacy group, advocacy comes before science in their priority list. This applies to both sides of the divide.

  30. Would it be possible for you to rerun this using constant data as an input and graph it? That is, input a straight line and see what the output is? I think that this would provide a good way for people to get a handle on what’s going on. If the variance is small, then perhaps it is a legitimate correction for some abnormality (ignoring that we don’t know what it is and that it’s undocumented). If, however, it is largely responsible for the blade of the hockey stick, we can be reasonably certain that the whole thing has been cooked.

    1. >Would it be possible for you to rerun this using constant data as an input and graph it?

      Yes, actually. I’ll see what I can do with gnuplot. Just plotting valadj against date ought to be interesting. 5 year intervals starting from 1904, I think; that would end the graph in 1999 and the code is dated Sep 98.

      Here’s the data. Plot when I can free some time from preparing for GPSD release.

      Hm, actually this turned out to be easy. Grab the file, put it in (say) “artifical.plot” and just type plot “artifical.plot” at the gnuplot command line.
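
      And for anyone without IDL or gnuplot handy, here is a rough Python sketch of the constant-input experiment suggested above: feed a flat series through the commented-out yyy+yearlyadj path and see what shape comes out. The flat baseline and the 1881-1994 year range are my assumptions.

      import numpy as np
      import matplotlib.pyplot as plt

      yrloc = np.concatenate(([1400], np.arange(19) * 5.0 + 1904))
      valadj = np.array([0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
                         0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75

      timey = np.arange(1881, 1995)
      yearlyadj = np.interp(timey, yrloc, valadj)     # per-year adjustment

      flat = np.zeros(len(timey))                     # "constant data as an input"
      plt.plot(timey, flat, label="flat input")
      plt.plot(timey, flat + yearlyadj, label="flat input + yearlyadj")
      plt.xlabel("Year")
      plt.ylabel("Adjustment")
      plt.legend()
      plt.savefig("flat_plus_adjustment.png")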

  31. Borepatch:
    > It’s clear from a short reading that this has never been QA’ed at all – no design or code reviews, and no testing. It’s a hack, in the worst sense of the word.

    > With trillions of dollars riding on it. No wonder they resisted the FOIA request.

    That kind of quality is not unusual for code written by scientists with no training in computer science or software engineering. Pretty much the only thing that matters is getting the calculation right (hmm… or not, as the case may be). You don’t get style points for usable/flexible/elegant code when you publish the paper. A lot of the stuff is terrible, even the packages with expensive licenses. I’ve seen a C program that used ‘YES’ and ‘NO’ strings for booleans and strcmp for comparing the values, in time-critical code (the code was written in an academic group in the molecular modeling / drug discovery field).

  32. Pardon my ignorance, but what’s that “*0.75” doing in the valadj definition? Does that imply that that multiplier gets applied to each element in the array?

    ESR says: Yes, I think so. The effect would be to decrease the extrema of the graph while preserving its shape.
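
    A quick illustration of that element-wise scaling (NumPy here, purely as a stand-in for IDL):

    import numpy as np
    # multiplying an array by a scalar scales each entry; the shape is preserved, the extrema shrink
    print(np.array([-0.3, 1.7, 2.6]) * 0.75)   # -> roughly [-0.225  1.275  1.95]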

  33. esr… another tip from another blog:

    > From the file pl_decline.pro: “Now apply a completely artificial adjustment for the decline (only where coefficient is positive!)”

    More to grep

  34. That kind of quality is not unusual for code written by scientists with no training in computer science or software engineering. Pretty much the only thing that matters is getting the calculation right (hmm… or not, as the case may be). You don’t get style points for usable/flexible/elegant code when you publish the paper. A lot of the stuff is terrible, even the packages with expensive licenses. I’ve seen a C program that used ‘YES’ and ‘NO’ strings for booleans and strcmp for comparing the values, in time-critical code (the code was written in an academic group in the molecular modeling / drug discovery field).

    Having worked in R&D labs for two separate Fortune 500 companies, yes, I totally agree. Chemists and metallurgists and physicists tend to write very ugly code with little or no documentation. Lots of stuff hard-coded, doing moronic stuff like your example with converting numbers to strings in places it makes no sense to do so, etc.

    OTOH, I’ve seen bad code in open source programs, too, but usually it’s quick-sketch code that just kinda stuck around and the authors usually know how bad it is and plan on rewriting it — just, you know, later. ;) Many scientist programmers I’ve seen actually think they are good coders. But you try explaining to them that “it works” is an insufficient criterion for establishing code quality…

  35. This could be a “siege cannon with the barrel still hot” fact about conspiracies

    Since the divergence problem has been openly discussed in the literature and at conferences for many, many years, and since it has been and is an area of active research (why do some tree ring sequences show this while others don’t? what growth factor has come into play in the last few decades for those that do show it?), I should think the barrel would be quite cool by now …

    I think the “smoking gun” here is that some people have just been exposed to some of the issues for the first time and think they’ve uncovered some big secret, so secret that, well, you know, umm, it’s like in the literature. A secret smoking gun hiding right there in plain, published view – imagine that!

  36. Oh, BTW, the “decline” in this context is another term used to describe the “divergence problem”.

    The “blatant data cooking” is to use the actual thermometer data where it’s available, which, of course, shows no decline over those decades …

    Nothing “secret” about this at all, as mentioned above it’s been openly discussed for years.

    1. >The “blatant data cooking” is to use the actual thermometer data where it’s available, which, of course, shows no decline over those decades …

      Oh? “Apply a VERY ARTIFICAL correction for decline!!”

      That’s a misspelling of “artificial”, for those of you slow on the uptake. As in, “unconnected to any fucking data at all”. As in “pulled out of someone’s ass”. You’re arguing against the programmer’s own description, fool!

      In fact, I’m quite familiar with the “divergence problem”. If AGW were science rather than a chiliastic religion, it would be treated as evidence that the theory is broken.

  37. The output of this program may have been their check to see if a visualization of the cooked data wouldn’t look obviously bogus before they shopped it to the politicians and funding sources. That’s the only way I can think of to explain plotting both crocked and uncrocked datasets in the same visualization.

    Or, perhaps, plotting both shows you the divergence which is umm the heart of the divergence problem.

    Like – here are the diverging tree proxy results plotted next to the corrected (with real temperature data) results for these decades in which the two time series diverge.

  38. Do you guys really think you can understand the field of dendro paleoreconstructions of climate by reading one graphing program and some e-mail taken out of context? There have been people spending their careers doing this (though AFAIK not a large number), and a body of published literature. Don’t you think looking at that literature might be something to do before accusing people of outright scientific fraud?

  39. As in, “unconnected to any fucking data at all”. As in “pulled out of someone’s ass”. You’re arguing against the programmer’s own description, fool!

    The fudge-factor’s derived from the instrumental record.

    I don’t expect the paranoids who are convinced all of climate science is one huge fraud to believe it, but hell, your paranoia ain’t going to change the science.

    1. >The fudge-factor’s derived from the instrumental record.

      “VERY ARTIFICAL”

      But keep digging. It’ll just bury you deeper.

  40. In fact, I’m quite familiar with the “divergence problem”. If AGW were science rather than a chiliastic religion, it would be treated as evidence that the theory is broken.

    At best, it might be evidence that building reliable temperature proxies using tree ring data is a hopeless endeavor.

    But given that CO2’s role as a GHG has been known to physics for roughly 150 years, it’s hard to see how that’s going to be overturned. Or do you think the divergence problem trumps physics?

  41. OK, engaging ranting lunatics has been humorous for the last half hour or so, but I’m done here.

    Have fun with your conspiracy theories and your misreading of science!

  42. I have to say the CRU’s work has had a funny aroma around it for several years now. When I heard a couple years ago that they were refusing to make their data and algorithms available to other scientists, the sneaky, nasty, and obvious thought crossed my mind: “Guess they don’t dare show their work.” And for me, that was the end of their credibility.

    1. >“Guess they don’t dare show their work.” And for me, that was the end of their credibility.

      And, of course, they now claim that crucial primary datasets were “accidentally” deleted.

      After reading some of the emails about evading FOIA2000 requests…accidentally, my ass.

  43. krygny – Looking at the paths/filenames I’ve seen, I think the speculation I read earlier today is correct.

    The data here had been assembled to fulfill a FOIA request.

    And then when it was denied, it was probably leaked.

    (There’s a non-zero chance that it was a fortuitously-timed bit of hacking, or that a hacker had gained entry some time ago and waited to see… but a leak seems more likely.

    Definitely not just a random data-grab by a hacker, but…)

    esr: Even beyond the probability of it being a deliberate evasion of the FOIA request… it’s astounding incompetence to let the data get deleted.

    I keep my worthless personal data backed up redundantly and offsite… and these guys doing (in theory) real scientific research, with professional funding, with high stakes “for the world” and all of that… can’t keep their primary data sets intact?

    I’m not sure what’s worse; the idea that they’re this corrupt, or the idea that they’re that incompetent.

  44. “But given that CO2’s role as a GHG has been known to physics for roughly 150 years…”

    To what degree? Is there a saturation point? Are there other effects that have much greater magnitudes?

    And what does that fact have to do with the complete lack of scientific ethics among climate researchers? Read the emails — they were gaming peer review, black-listing researchers who didn’t toe the line, destroying data rather than submit it to review…

    That’s not science. Anyone who thinks science is a worthy endeavor should be disgusted at the behavior these emails show.

  45. Morgan,

    “Chemists and metallurgists and physicists tend to write very ugly code with little or no documentation.”

    Can we say being good at math makes it less likely you write readable code? The reason I’m asking is that I’m exactly the opposite extreme. I’ve always avoided math and the hard sciences because I just cannot process highly succinct symbolic expressions like p=(s-c)/s*100 and I feel totally scared. Add a Greek letter to it and I’ll be running home to mom. But write it out properly like profit_percentage=(sales-costs)/sales*100 and I understand it in an instant, because I know what it really means; I no longer need to process it symbolically but can peg it to real-world logic and experience. Which means I suck at math and hard sciences even on a high school level, but of course I write excellently readable code; I simply have to, or I’d totally lose all clue at the fourth line. So I suspect people on the other end of the scale, who don’t have any problems with succinct symbolic expressions that aren’t tied directly to real-world common-sense experience, and therefore have no problems becoming hard scientists, may be less inclined to write readable code. Does that sound likely?

  46. Thank you for looking at this. It is amazing you have time to do this kind of code archaeology, but so much better.

    Keep grepping! :-)

  47. ‘I think the “smoking gun” here is that some people have just been exposed to some of the issues for the first time and think they’ve uncovered some big secret, so secret that, well, you know, umm, it’s like in the literature. A secret smoking gun hiding right there in plain, published view – imagine that!’

    As a wandering archaeologist who off and on encounters tree rings, I think the “smoking gun” is the peculiar idea that tree rings would be particularly sensitive to temperature. The general view outside a tiny community of climatologists who consider the Mediaeval Warm Period paleoclimate – it’s historic and isn’t “paleo” anything really – is that temperature can account AT BEST for far less than half the variability in tree ring density. Worse, the amount varies within the SAME tree. There is no way to reliably convert tree rings to temperature. They are much more sensitive to rainfall but even there not much more than half the variability is accounted for.

  48. “But given that CO2’s role as a GHG has been known to physics for roughly 150 years…”

    So? Are you trying to measure CO2, or temperature? If the former, why are you adjusting for thermometer readings? Since when is CO2 measured with a thermometer? I’d have thought that prefix “thermo-” would give you a clue to what it measures.

  49. From a software project perspective – this code/project is utterly appalling.

    It’s dailywtf.com stuff.

    Now, this might be funny or interesting if it were a small commercial project etc.

    But this code – the results it generates – is being used as a major foundation for trillions of $ of new taxes and unprecedented worldwide regulation.

    If that’s the case, it better be bloody good stuff. It better have been reviewed. It better be checked, double-checked. Documented, explained and tested.

    But it’s not.

    It’s the worst code/project I’ve seen all 2009. It’s a joke.

    Showstopping bugs. No tests. Manual adjustments required before each run. Can’t repeat program results. No structure. No documentation, no source control. No build scripts.

    Both skeptics and alarmists feel that climate change is incredibly important. Whatever “side” you think yourself on – you’ll agree it’s important.

    Given the importance of this issue – all the code and data needs to be released into a public repository, refined into a project that actually works (i.e. documentation, data and build scripts, anybody?)

    This way it can be inspected by more than a few pairs of eyes.

    And we can be sure that whatever decisions are made regarding climate change, we make them on solid foundations.

    1. >The program you are puzzling over was used to produce a nice smooth curve for a nice clean piece of COVER ART.

      Supposing we accept your premise, it is not even remotely clear how that makes it OK to cook the data. They lied to everyone who saw that graphic.

  50. “but hell, your paranoia ain’t going to change the science.”

    Science. Ah yes. That’s that business where you:

    Gather data, recording meticulously both the data and how you gathered it.

    Analyse the data, explaining how you are analysing it, presenting program sources and so on if used, together with the data.

    Draw conclusions.

    Publish your work. Peer review filters papers for publication, it doesn’t validate them.

    Others reproduce your work (or can’t) and validate your analysis (or not). If they can’t shake it down after trying, it stands, provisionally.

    Notice any difference between this and the way that CRU appear to conduct themselves? Of course, there’s a way open to clear this up, and that’s to present the data, they’ve no doubt recorded so meticulously, and present the sources of the programs they used to analyse it, once again, all carefully documented with version control etc. Surely, they must be able to do this. After all this is work informing the IPCC and governments in trillion dollar programs.

  51. Rob Crawford:

    Before you step on that train saying that the CRU team ‘blacklisted’ their opposition, it’s worth knowing what that opposition was.

    See my post here:

    http://data-n-demagogues.blogspot.com/2009/11/peer-review-skepticism.html

    In particular, Soon and Baliunas wrote their 2003 paper after having a nine-year (Baliunas) and seven-year (Soon) gap in their professional publication histories, and three years after each of them took jobs at Washington DC think tanks.

    They did at LEAST as much cherrypicking of dendro proxies as we’re accusing Mann and Briffa of. Hell, they didn’t even run their own data sets and algorithms, it was just a literature review.

    Just because their paper holds to your preconception does not mean it should be held to any lower of a standard.

    My dog in this fight is getting the science out. That means I need to hold the skeptical side to an even higher standard of rigor to avoid being blinded by things I want to hear.

  52. At best, it might be evidence that building reliable temperature proxies using tree ring data is a hopeless endeavor.

    That’s essentially game, set, and match.

    The computer models collectively predict a fundamental “flatness”, with carbon dioxide being responsible for the vast majority of all recent warming. That is, they predict a hockey stick. This isn’t exciting in the period 1850-2000; we’re basically in a hockey stick’s blade. (The tip is quite unexpectedly blunted, but that’s a separate issue.)

    But when the global circulation models are used on pre-industrial conditions, they predict essentially unremitting flatness.

    In other words: the models are falsified by the existence of a non-localized Medieval Warm Period that isn’t caused by carbon dioxide somehow. (They never bother explaining the Roman Climate Optimum, but that’s just yet another fatal flaw in the best extant models.)

    The key isn’t that the high church of warming has to stop screaming “Unprecedented!” It is that both the Little Ice Age and Medieval Warm Period aren’t properly modeled by the current state-of-the-art approaches. So either they didn’t exist – or the models suck. The tactic used since 1998 is to scream “They’re just local phenomena! Tree reconstructions don’t show that!”

    But…
    it might be evidence that building reliable temperature proxies using tree ring data is a hopeless endeavor.

  53. esr said:
    Here’s the data. Plot when I can free some time from preparing for GPSD release.

    Hm, actually this turned out to be easy. Grab the file, put it in (say) “artifical.plot” and just type plot “artifical.plot” at the gnuplot command line.

    Don’t forget that the actual values they’re using for valadj are multiplied by 0.75. So plotting the literal array may give you a false impression (depending on what you’re looking for). Modified data is here

    Can we say being good at math makes it less likely you write readable code?
    <snip>
    So I suspect people on the other end of the scale, who don’t have any problems with succinct symbolic expressions that aren’t tied directly to real-world common-sense experience, and therefore have no problems becoming hard scientists, may be less inclined to write readable code. Does that sound likely?

    My first thought is that I doubt there’s a direct link between math ability and readable code.

    However your last couple of sentences made me think that while I believe the above, remember that “readable code” is somewhat subjective. Someone could write nice readable code in russian or german and it’d mean absolutely nothing to me. Doesn’t mean it’s not (objectively) readable but I just don’t speak the language it’s readable in. More or less the same with you and math.

    Having said that, it would be a rare (or practiced… I don’t discount it being a learnable skill) individual that can write good comments while focusing on the math side of the brain. The code itself can be fine (if mathy) but I’d expect any text to be restatements of the math involved with effectively boilerplate text around it. I’d suggest that any heavily math-based code with nice readable comments has probably been cleaned up after the fact.

  54. Just a visitor clicking through, but I figured I might as well share a response to a bit of dhogaza’s inanity:

    Do you guys really think you can understand the field of dendro paleoreconstructions of climate by reading one graphing program and some e-mail taken out of context?

    A metaphor: A home is being built. The owner-to-be notices some wood labeled ‘Oak’ and recognizes the wood as actually being the much softer White Pine. He complains to the construction foreman, confronts the contractor, and, having been ignored by those, spreads word of the deception. At this point, a random person attacks the owner-to-be, saying, “Do you really think you can understand the field of domicile structural constitution by looking at one cord of wood?”

  55. So I’m wandering through the various links to climategate posts and I came across this gem from ‘HARRY_READ_ME.txt’.

    These are very promising. The vast majority in both cases are within 0.5
    degrees of the published data. However, there are still plenty of values
    more than a degree out.

    Is this the data they’ve been using? If so, wouldn’t that mean that their error margin is at least 0.5 degrees? Which would mean that the error margin is almost the size of the graph in question?

    Am I making an incorrect assumption somewhere, or do they actually want me to believe that any of their data points could be anywhere on that graph?

  56. I wonder if anyone answered Bret’s comment about this actually leading to published work? I know most people’s hard drives have all sorts of crap on them, a bit of code to develop a graph might or might not have made it into something published and even then it only matters if it wasn’t properly documented.

    I am sorry, but I have to stick with “tempest in a teapot” for this whole hacked archive.

  57. >The fudge-factor’s derived from the instrumental record.

    “VERY ARTIFICAL”

    But keep digging. It’ll just bury you deeper.

    Perhaps ‘ARTIFICAL’ is meant to imply that it’s not backed by theory, but simply designed to make the tree-ring data correspond with the thermometer data. In that case, the correction (if it actually does that) is quite justified, at least for making those data-sets correspond (and if it’s made obvious that such has been done, rather than used as a device to falsely claim that all sources agree).

  58. All one has to ask here is: what would happen if this were data from a clinical pharmaceutical trial instead of a climate change model?

    Doctors paid by the drug company to run the trial won’t release data, and their notes state “falsified data that did not agree with hoped-for outcomes. Death rate seems too high so we are ignoring it.”

    In any other matter of similar or lesser importance, there would be no defense of the scientists in question. It is only the religious nature of the belief in global warming driving a defense of what is clearly at least questionable science. Add to that actual data that conflicts with the models and really, really crappy models (I build them for a living), and it is clear what is going on with the defenders.

  59. @esr:

    In fact, I’m quite familiar with the “divergence problem”. If AGW were science rather than a chiliastic religion, it would be treated as evidence that the theory is broken.

    Nice word! :)

    @shenpen:

    Can we say being good at math makes it less likely you write readable code? Which means I suck at math and hard sciences even on a high school level but of course I write excellently readable code, I simply have to or I’d totally lose all clue at the fourth line. So I suspect people on the other end of the scale who don’t have any problems with succinct symbolic expressions that aren’t tied directly to real-world common-sense experience, and therefore have no problems becoming hard scientists, may be less inclined to write readable code.

    Well, it’s not necessarily so that the code is unreadable. It’s possible for code to be ugly and still be readable. :) See this article written by an ex-Microsoft employee. I’m sure others around here have other, similar articles. But I’ll bet no one has any that are as funny as this one. (Or maybe I just had a little too much wine with my dinner.) I’ve seen all sorts of stuff like that.

    BTW, readability doesn’t have everything to do with variable names. Good variable names help, but they aren’t the only thing.

  60. Excuse me for asking a rather basic question, but what do the valadj numbers mean? How are they applied to the data? I’m assuming they’re not degrees C – that would imply very easy-to-detect fudging.

  61. It’s unfortunate that the project to use the info in the leaked emails to uncook the data and present it to the public openly and honestly seems to be regarded as low priority, compared to focusing on the lies for the AGW skeptics and rationalizing the lies for the AGW believers. Both sides claim to be interested in science over politics but they both seem relatively uninterested in the real data gathered by the CRU.

  62. Thomas Covello- The code creates the “yrloc” array containing the values [1400, 1904, 1909, 1914, …, 1994]. It then creates a second array, “valadj”, that contains values that correspond element-by-element to the elements of the “yrloc” array. The final line interpolates between those entries to find a “yearlyadj” (yearly adjustment) value. For example, anything in or after 1974 gets a “yearlyadj” of 2.6*0.75, due to the string of 2.6 entries. The year 1920 falls between the entries 0.0 (for 1919) and -0.1*0.75 (for 1924), and would get a “yearlyadj” value of -0.015.
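
    A quick arithmetic check of the 1920 example (plain Python, nothing IDL-specific): 1920 sits one fifth of the way from 1919 (adjustment 0.0) to 1924 (adjustment -0.1 * 0.75).

    adj_1919, adj_1924 = 0.0, -0.1 * 0.75
    frac = (1920 - 1919) / (1924 - 1919)                        # 1/5 of the way along
    print(round(adj_1919 + frac * (adj_1924 - adj_1919), 6))    # -> -0.015, as stated above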

  63. I notice another variable called “Cheat” in cru-code/linux/mod/ghcnrefiter.f90 in subroutine MergeForRatio

    !*******************************************************************************
    ! adjusts the Addit(ional) vector to match the characteristics of the corresponding
    ! Stand(ard), for ratio-based (not difference-based) data, on the
    ! assumption that both are gamma-distributed
    ! the old (x) becomes the new (y) through y=ax**b
    ! b is calc iteratively, so that the shape parameters match
    ! then a is deduced, such that the scale parameters match

    It seems to get the value of Cheat=ParTot-AddTot or Cheat=(ParTot/En) depending on a couple of things. It then gets added to New thusly:

    New=Cheat+(Multi*(Addit(XYear)**Power))

    Which then becomes the value of Addit(XYear)=New

    Might be nothing, just a shortcut of some sort. But going by what we have seen so far, a variable named Cheat involved in an adjustment of any sort is suspect.

  64. In fact, “New=Cheat+(Multi*(Addit(XYear)**Power))” looks a lot like “the old (x) becomes the new (y) through y=ax**b” mentioned in the subroutine comment except the comment doesn’t mention anything about something being added to it.
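
    To make that reading concrete, here is a small Python restatement of the quoted Fortran expression. The parameter values below are placeholders; how MergeForRatio actually derives Multi, Power and Cheat is not visible in the quoted snippet.

    def merge_adjust(addit_value, multi, power, cheat):
        # New = Cheat + (Multi * (Addit(XYear) ** Power)), as quoted above
        return cheat + multi * (addit_value ** power)

    print(merge_adjust(10.0, multi=1.2, power=0.9, cheat=0.0))   # the plain y = a*x**b case
    print(merge_adjust(10.0, multi=1.2, power=0.9, cheat=0.5))   # same curve, shifted by the offset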

  65. “But given that CO2’s role as a GHG has been known to physics for roughly 150 years…”

    Wow, Dhogaza seems to have eaten a bit of humble pie in the last few weeks. Good to see him falling back to safer territory – the hard science of laboratory CO2 physics. The odd thing about this is that the argument was never about CO2, it was about the use/abuse of questionable temperature proxies to manage our expectations of normal global temperature so that the term “unprecedented global warming” can be held over our heads as proof of our misdeeds. But we all know how fallacious this whole AGW debate is.
    If I were to bite, and I normally wouldn’t, I’d question how much we really know about how CO2 acts in the atmosphere. And supposing we accepted that an increase in atmospheric CO2 concentration causes a subsequent temperature rise (causation not correlation), my question would be – exactly how much actual warming does this produce? As far as I am aware, that answer based on laboratory CO2 physics is very well known and relatively small (does that make me a lukewarmer?). The catastrophic warming projected in the IPCC AR4 depends on the assumption of positive feedbacks in the climate system that dramatically amplify the effects of the extra CO2; something that is even more uncertain than the temperature record – but that doesn’t seem to stop it being put into the climate models. I tend to find that when you know a little about this subject and can refute a lot of the supposed settled science, the arguments for AGW tend to boil down to this: it has to be CO2 because we have accounted for all other known variables. This is a bit rich coming from possibly the most complicated science ever undertaken, whose participants can’t even put together an accurate temperature record.

    Also, please find a present my creative other half came up with when she read of the plight of our intrepid coder:

    http://www.freeimagehosting.net/uploads/6fa0eea5a0.jpg

  66. > “VERY ARTIFICAL”

    Yes, but the problem there is you found this by grepping for nasty sounding terms, so it’s not particularly incriminating if you find a few examples in a huge codebase. It’s not (as is suggested elsewhere) like picking a random board in a new house and noticing the wood is bad. It’s like nitpicking an entire construction project, picking the worst example of work you can find, and claiming the whole house is bad because of it.

    So it seems for some cover art they “cheated” by using model data for dates they don’t have real data for, and adjusted it to match the pretty picture of the real data for recent dates. The real data apparently paints a far more severe problem than the model. Yes, it is a sign of some problem in the models, which nobody would claim are particularly accurate anyway. Are we supposed to take comfort in that!?

  67. Read and understand the code before you speak!

    #1 The variable is not actually used in the code.

    #2 The values in the variable are in the context of tree ring data, not the climate model.

    #3 Since the value is not actually used it is not clear what the units are for the correction. But a commented section of the code suggests it may once have been used as an input to a routine called filter_cru which, as far as I can tell, is not present in the leaked data. The assumption that the numbers are degrees is certainly without basis.

    #4 This is a draft version of a routine. Later versions don’t even include the unused section. Those more developed and commented routines seem like more obvious candidates for the code that would have been used to generate publication-quality output, not some scientist’s scratch paper.

    #5 The legend for the plot (also unused) shows that the correction’s usage would have been clearly labelled if a different version of the code had actually used it.

    plot,[0,1],/nodata,xstyle=4,ystyle=4
    ;legend,['Northern Hemisphere April-September instrumental temperature',$
    ; 'Northern Hemisphere MXD',$
    ; 'Northern Hemisphere MXD corrected for decline'],$
    ; colors=[22,21,20],thick=[3,3,3],margin=0.6,spacing=1.5

  68. There is also a bug in *your* code.
    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
    2.6,2.6,2.6]*0.75

    If you notice, the “*0.75” means vector multiply by 3/4; you don’t do this in your output. Your values are incorrect.

  69. Amazing, I had no idea computer programmers were so qualified to make so many claims about climate science. Some computer programmers are also apparently gifted in mysticism as they are able to come to conclusions without any knowledge or context. Peer review journals are another interesting topic computer programmers would know plenty about. After all, computer science is communicated mostly through conferences instead of peer review journals.

    With so much talk about valid science being published in peer review, I would think people would be happy about the pressure placed upon the journal in question by climate scientists. But of course, some programmers may not have looked deeply enough into their crystal ball to see that the papers in question were faulty and were written for political purposes. Or am I misunderstanding the claims being made here? Perhaps some programmers feel that faulty papers should be published for political purposes in science journals. Perhaps some programmers believe that science journals should be more like the humanities journals, and people should just spend their time arguing how they feel. In fact, why not just allow everyone to publish whatever they like in a journal, no matter if it is right or wrong?

    I’m also amused at how some programmers applied their notions about things they aren’t even qualified to judge. Since some programmers believe the calculations of tree data were wrong or falsified, they have assumed all calculations by all organizations and disciplines from around the world on a variety of different sources (ice, sats, etc) are wrong too. To make matters more interesting, the same programmers had no clue about the purpose or usage of said code.

    I’m also curious how everyone seems to believe that climate scientists are hiding all the data. Climate scientists must be conspiring to defraud the public if they are unable to talk commercial weather services into providing their source of revenue to the public for free. Of course, all of these organizations funded by the fossil fuel industry could pay the commercial entities and get their own access to the data, but perish the thought. Of course, people could get public data sets and models….

    http://tamino.wordpress.com/climate-data-links/

    But hidden data (or should I say commercially owned data) sounds more interesting and important. Since the public data shows all the trends of global warming, perhaps the unavailable data sets from commercial weather services show the smoking gun, eh?

    Junk science indeed.

  70. So let me get this straight. The Global Warming Believers (henceforth referred to as GWBs, which is ironic since the former US President with the same initials was labeled by the GWBs as “stupid, moronic, etc.” while GWB wasn’t a confused GWBer) tell us that these are just snarky e-mails that don’t prove anything (suppressing alternate evidence and denouncing those who provide it) AND that the computer code is also explainable (sure, ARTIFICAL refers to last night’s Antiques Roadshow episode…..not some public school idiot who can’t spell, MUCH LESS WRITE COMPUTER CODE).

    YOUR god is DEAD!!! Your prophet Gore has been PROVEN a liar by the same data he uses to gain (the other) profits!

    This needs to be OPENLY investigated, I suspect that others have manipulated data for greed (federal funding). Strange how all scientists funded by Gov’t. get a pass as being altruistic (Gov’t wanting more taxes and control of your life) while any past affiliation or funding by ANY corporation is suspect. If you’re backed by the PETA, Planned Parenthood, and ACORN ilk you’re a saint. Fill up your pick-up truck at a BP gas station and you’ve become their “mouth piece”.

    On a personal note, I feel sorry for these people who just got SMACKED in the face with reality. It’s like when my kids found out that I was Santa Claus. “But, but, but…..how about when…?”. They busted me when they CAUGHT me putting gifts under the Christmas Tree labeled “From: Santa”.

    In the end, your seemingly “altruistic” Santa Claus turned out to be a lie. The “Tooth Fairy” and the “Easter Bunny” can’t help you out because they are as real as Santa Claus is (and I know what the definition of “is” is).

    GWBs seem to be taking the Larry Craig "Wide Stance" response. Way funny!

    Truthfully, this is like the "palm reader" in my town whose house burned down. It's strange how she could predict everyone else's future FOR MONEY, yet she couldn't see herself in the hospital or being homeless. After the fact, the Crystal Ball person TOTALLY saw this coming.

    The morals of the story: don't believe people who CLAIM to be able to predict the future; computer programs that can't be reviewed are about as accurate as "crystal balls"; and these people think they "have the FORCE" behind them like Obi-Wan Kenobi: "These aren't the FACTS that we've been looking for. Move along, move along."

    As an afterthought, the GWBs are caught up in the same scandal as the Catholic Church pedophiles are: suppressing information and covering up known problems, all for the sake of their "religion". Wanna bet that if Mann got fired, he'd get huge money to lecture at colleges and tenure at another school??? Relocated, just like a problem priest.

    These people have to play hard and up their game now. Otherwise they’ll be marginalized as the HACKS and LIARS that they are.

    We're supposed to believe that their computer models, which can't even predict the past, are going to predict the future. Meanwhile, Bill Gates has smarter people than the IPCC and still can't get Vista or Windows 7 to work right.

    It's strange how these people can decry a 1,500-year-old book (the Bible) as fantasy while claiming that a computer program PROVEN to be a fraud can predict the future.

    Talk about faith…..and being stupid!

  71. Interesting comment by Mark Sawusch on RealClimate:
    http://www.realclimate.org/index.php/archives/2009/11/the-cru-hack-context/comment-page-14/#comment-144828

    Includes a quote from a paper: “To overcome these problems, the decline is artificially removed from the calibrated tree-ring density series, for the purpose of making a final calibration. The removal is only temporary, because the final calibration is then applied to the unadjusted data set (i.e., without the decline artificially removed). Though this is rather an ad hoc approach, it does allow us to test the sensitivity of the calibration to time scale, and it also yields a reconstruction whose mean level is much less sensitive to the choice of calibration period”

    Sounds like our VERY ARTIFICIAL correction, described openly some time back, along with what it was used for and why. If so, it’s neither hidden nor fraudulent. There may be other problems with this (the commenter mentions some) but the cannon might have cooled down suddenly.

  72. >Would it be possible for you to rerun this using constant data as an input and graph it?

    One of the things Ross McKitrick did, already years ago, was just that: feed noise into the model and see what came out.

    Read this:
    http://www.uoguelph.ca/~rmckitri/research/McKitrick-hockeystick.pdf

    For those who have been following the debate surrounding the hockey stick, the whole "Climategate" scandal does nothing but confirm things that were already known.

    See http://www.climateaudit.org

  73. Have a look at the last part of the pr_decline.pro file. They are fitting a parabola to the data set, constrained to pass through a point created by averaging the temperatures between 1856 and 1930 and placing it at 1930, and then reducing temperatures that fall above it (and leaving those below it alone). The effect is not only to reduce the 1930s temperatures, but also to add a parabolic effect to the data, making it look like temperature increase is accelerating. If this sort of filter is commonly applied, it’s no wonder that the 1990s show such a steeply rising curve.
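    If I have read that description right, the correction would look something like the following IDL sketch. This is only an illustration of the procedure as described above; the variable names, the stand-in data, and the vertex-at-1930 form of the parabola are my assumptions, not the original pr_decline.pro code.

    ; Sketch of the correction as described above -- NOT the original pr_decline.pro
    ; code.  Names, stand-in data and the vertex-at-1930 parabola are assumptions.
    year = findgen(139) + 1856.               ; 1856..1994
    temp = randomn(seed, 139) * 0.2           ; stand-in for the real series
    base = mean(temp[where(year le 1930)])    ; 1856-1930 average, pinned at 1930
    d    = ((year - 1930.) > 0.)^2            ; zero before 1930, quadratic after
    a    = total((temp - base)*d)/total(d^2)  ; least-squares estimate of the curvature
    para = base + a*d                         ; parabola forced through (1930, base)
    adj  = temp < para                        ; values above the curve are pulled down,
                                              ; values below it are left alone

    Either way, the shape being imposed is a flat baseline before 1930 and an accelerating rise after it.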

  74. Um, you do realize you’re dramatizing over the code to generate a piece of artwork, right?

    It’s a POSTER, people, not a “graph deciding how trillions will be spent.”

    I applaud the zeal of your inquiry, but shouldn’t you apply it to data that actually matters, rather than dissecting cover art?

    It’s just this sort of thing that makes the anti-science movement look…well, kind of dumb.

    Just asking.

    CBB

  75. You’re forgetting to mention what it hides a decline in, and how.

    The code in question fudges for a divergence between temperature as calculated from tree growth data (which has declined since about 1960), and temperature as actually measured in every other way (which shows warming). Your blog post horribly misrepresents this as some kind of fudge applied to the global temperatures reported as evidence of global warming; it isn’t, and they don’t even use tree growth data over the period in question.

  76. dhogaza Says:

    November 25th, 2009 at 3:21 pm
    OK, engaging ranting lunatics has been humorous for the last half hour or so, but I’m done here.

    Have fun with your conspiracy theories and your misreading of science!

    Well, since they won't release their data or their methods, it can't be replicated. Therefore what they are doing is not science.

  77. C.B.B., I think I can answer your question properly.

    *ahem*

    Yeah, poster art that was used to convince policy makers that we have a CRISIS on our hands, which is now known to be a LIE and a HOAX made by Marxist Communist Socialist AL GORE to institute a Communist Socialist ONE WORLD GOVERNMENT and to TAKE ALL OUR MONEY!!! Why are you excusing a clear example of MASSAGING DATA that will be used to TAKE OVER THE WORLD by Socialist Communist Kenyan Muslim OBAMA and his little crony Socialist Communist Nazi Climate Hoaxer AL GORE!! NOW EXPOSED TO BE A LIE AND A HOAX!!!!!! HOAX AND A LIE!!!!

    You know who else used posters to convince people there was a CRISIS?

    Hitler.

    Socialist Communist AL GORE and these so-called scientist/propagandists are now EXPOSED as being JUST LIKE HITLER!!!! HOAX!!!! LIE!!!! AL GORE!!!!! HITLER!!!! SOCIALIST!!!!!

  78. “It’s just this sort of thing that makes the anti-science movement look…well, kind of dumb.”

    The anti-science movement? That would be the folks who delete data rather than share it, corrupt the peer review process, and consider “skeptic” an insult?

  79. >It flattens a period of warm temperatures in the 1940s 1930s — see those negative coefficients?
    >Then, later on, it applies a positive multiplier so you get a nice dramatic hockey stick at the end of the century.

    If you read the code you'll see that (a) this is not a 'multiplier', it's additive (it couldn't be a multiplier; otherwise those negative numbers would have produced negative maximum temperatures in the 1930s, and those zeroes would have meant 0 degrees in the early part of the century), and (b) the yearlyadj part is not actually used in the code.
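    To make the additive point concrete, here is a short IDL sketch. It feeds a constant (all-zero) series through the adjustment, so the result is just the adjustment itself; the 1904-1994 range for timey is an assumption on my part, and yyy here is a stand-in, not the real MXD series.

    ; Sketch only: apply the adjustment to a flat zero series to show that it is
    ; added, not multiplied.  The 1904-1994 range of timey is an assumption.
    yrloc     = [1400, findgen(19)*5. + 1904]
    valadj    = [0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
                 2.6,2.6,2.6]*0.75
    timey     = findgen(91) + 1904.             ; assumed: 1904..1994
    yearlyadj = interpol(valadj, yrloc, timey)  ; linear interpolation onto timey
    yyy       = fltarr(91)                      ; constant zero stand-in series
    print, min(yyy + yearlyadj), max(yyy + yearlyadj)
    ; prints roughly -0.225 and 1.95: a dip peaking in the mid-1930s,
    ; a boost of nearly 2 degrees by 1994

    A multiplier applied to a zero series would of course leave it at zero; an addition shifts it by up to about 1.95 degrees at the end.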

    ESR says: I don’t understand this assertion. I can see where it’s used.

  80. For some deeper insight into dendro and its problems, it’s definitely worth getting up to speed on Climate Audit.

    Incidentally, if the code was open source all along, poor Steve McIntyre and friends would be having a *much* easier time of it.

  81. “This, people, is blatant data-cooking, with no pretense otherwise.”

    I don’t think so.

    If you look at the code in its entirety, you'll see that the only line that uses yearlyadj is commented out, and nothing outside this snippet references it except in commented-out lines. So they calculate this "VERY ARTIFICAL correction for decline", but it never actually gets used. Had you shown the very next few lines of code, that would have been clear:

    yearlyadj=interpol(valadj,yrloc,timey)
    ;
    ;filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
    ;oplot,timey,tslow,thick=5,color=20
    ;
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=21

    Note that the line that uses yearlyadj is commented out – the line that replaces it does not use yearlyadj

    That’s quite a trick to get commented out code to cook data….

  82. I've just been posting some stuff at Bishop Hill: http://bishophill.squarespace.com/blog/2009/11/26/smoking-gun.html

    In pl_decline.pro it loads

    ; Use the calibrate MXD after calibration coefficients estimated for 6
    ; northern Siberian boxes that had insufficient temperature data.
    print,'Reading MXD and temperature data'

    It does a whole load of stuff, then it does something odd.

    Pre-1930 (or pre-1900 when running the program in regression mode) it calculates an average decline value: the average of everything before 1930.

    Post-1930 and up to 1994 it fits the data to the formula value(year) = value(1930) + (1/7200) * (year - 1930)^2, which is a parabola.

    Finally it seems to subtract the 1930 value as an offset (everything pre-1930 = 0?), so that post-1930 is a curve starting from 0.

    Then it saves the values minus the original 1930 value to a file, with names implying they are corrections (this could just be a lazy programmer's bad variable naming):

    save,filename='calibmxd3'+fnadd+'.idlsave',$
    g,mxdyear,mxdnyr,fdcalib,mxdfd2,fdcorrect

    Plot this in Excel if you want to see the curve the data is fitted to. I'm not sure where the data goes next, but it's a very odd thing to do.

    year f
    1930 0.252306
    1931 0.252444889
    1932 0.252861556
    1933 0.253556
    1934 0.254528222
    1935 0.255778222
    1936 0.257306
    1937 0.259111556
    1938 0.261194889
    1939 0.263556
    1940 0.266194889
    1941 0.269111556
    1942 0.272306
    1943 0.275778222
    1944 0.279528222
    1945 0.283556
    1946 0.287861556
    1947 0.292444889
    1948 0.297306
    1949 0.302444889
    1950 0.307861556
    1951 0.313556
    1952 0.319528222
    1953 0.325778222
    1954 0.332306
    1955 0.339111556
    1956 0.346194889
    1957 0.353556
    1958 0.361194889
    1959 0.369111556
    1960 0.377306
    1961 0.385778222
    1962 0.394528222
    1963 0.403556
    1964 0.412861556
    1965 0.422444889
    1966 0.432306
    1967 0.442444889
    1968 0.452861556
    1969 0.463556
    1970 0.474528222
    1971 0.485778222
    1972 0.497306
    1973 0.509111556
    1974 0.521194889
    1975 0.533556
    1976 0.546194889
    1977 0.559111556
    1978 0.572306
    1979 0.585778222
    1980 0.599528222
    1981 0.613556
    1982 0.627861556
    1983 0.642444889
    1984 0.657306
    1985 0.672444889
    1986 0.687861556
    1987 0.703556
    1988 0.719528222
    1989 0.735778222
    1990 0.752306
    1991 0.769111556
    1992 0.786194889
    1993 0.803556
    1994 0.821194889

    Pre-1930, judging by this comment in the code, everything is set to the 1930 value:

    ;*** MUST ALTER FUNCT_DECLINE.PRO TO MATCH THE COORDINATES OF THE
    ; START OF THE DECLINE *** ALTER THIS EVERY TIME YOU CHANGE ANYTHING ***
    ;
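    For what it's worth, a few lines of IDL reproduce the table above exactly from that formula. This is a reconstruction for checking purposes, not the original pl_decline.pro code, and the 0.252306 base value is simply read off the 1930 row.

    ; Reconstruction of the table above from the stated formula -- not the
    ; original pl_decline.pro code; the 1930 base value is read off the table.
    f1930 = 0.252306
    year  = findgen(65) + 1930.              ; 1930..1994
    f     = f1930 + (year - 1930.)^2/7200.
    print, f[0], f[1], f[30], f[64]          ; 0.252306, 0.252445, 0.377306, 0.821195
    ; the last three match the 1931, 1960 and 1994 rows above

    So whatever fdcorrect is eventually used for, the shape being written out is a pure parabola rising by about 0.57 over 1930-1994.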

  83. Besides the comment from Patrick on November 26th, 2009 at 12:57 am, this was answered here: http://www.realclimate.org/index.php/archives/2009/11/the-cru-hack-context/comment-page-15/#comment-144890

    Now, instead of jumping to conclusions you could have asked Gavin at Realclimate.org, using this advice: “Try to find an answer by asking a skilled friend.” He’s been very responsive and if framed properly (“Hasty-sounding questions get hasty answers, or none at all. The more you do to demonstrate that having put thought and effort into solving your problem before seeking help, the more likely you are to actually get help.”) he would have gladly answered, although “Never assume you are entitled to an answer. You are not; you aren’t, after all, paying for the service. You will earn an answer, if you earn it, by asking a substantial, interesting, and thought-provoking question — one that implicitly contributes to the experience of the community rather than merely passively demanding knowledge from others.”

    Oh yeah, “Don’t rush to claim that you have found a bug.”

    1. >Now, instead of jumping to conclusions you could have asked Gavin at Realclimate.org,

      realclimate.org is a wholly-owned subsidiary of the “hockey team”. One of the leaked emails offers it for use as a propaganda arm.

  84. #1 The variable is not actually used in the code.

    Correct. As the code is now, the line that uses yearlyadj is, in fact, commented out. But the fact that the code is in there, even commented out, suggests that it was likely used at some point to generate at least some graphs.

    #2 The values in the variable are in the context of tree ring data, not the climate model.

    Given the prevalence of filter_cru in the programs (it appears to be called in 191 different files), it might be reasonable to assume that it does some sort of standard data massaging on the various data sets. Since, as you say later, we don't know what this function does, we don't necessarily know that the yearlyadj variable is never applied to the climate model.

    #4 This is a draft version of a routine. Later versions don't even include the unused section. Those more developed and commented routines seem like more obvious candidates for the code that would have been used to generate publication-quality output, not some scientist's scratch paper.

    Perhaps, but we don’t really know what was used to generate the publication output, do we? We can only guess.

    #5 The legend for the plot (also unused) indicates that the correction's usage would have been clearly labeled if a different version of the code had actually used the correction.

    That’s a disingenuous statement at best. Just looking at the code, we don’t know what lines were commented or uncommented when the routines were run, or why the routines were run, or which particular runs were used to generate the particular graphs.

    What we do know is that at some point there were "VERY ARTIFICAL" corrections used to "hide the decline."

    What is “the decline” and why did they need to hide it?

  85. I suspect the adjustment code was commented out because the underlying data had been "adjusted", so the graph itself no longer needed to adjust for it. The fact that the entire series of data since 1904 was overlaid with these adjustments is pretty telling. The MXD reconstructions from tree-ring data since 1960 are what were supposed to be in question. Why completely replace data since 1904 instead of starting in 1960?

  86. “Besides the comment from Patrick on November 26th, 2009 at 12:57 am, this was answered here:”

    Oh, in that case we can all move on. Thanks for clearing it all up.

    We need to draw a line under this and progress.

    A good way of drawing the line would be to have an audit of CRU: their data management procedures, their code development procedures, QA, archiving, all that boring stuff. I'm sure they'll come up with a clean bill of health; after all, it must be stuff they've been doing as a matter of routine. It'll cost a few million, but it will do far more to silence the doubters than hand-waving about how this was only used to produce some poster artwork.

  87. The question is why there was ever a need to fit a data set to a computed, i.e. manufactured, hockey-stick trend, hardcoded to leave values alone pre-1930 and to increase smoothly post-1930.

    Whether it was for a poster, an IPCC report or an internal PowerPoint presentation, questions have to be asked. They may have a very boring and reasonable answer, but they do have to be asked.

  88. >realclimate.org is a wholly-owned subsidiary of the “hockey team”. One of the leaked emails offers it for use as a propaganda arm.

    Instead of addressing RealClimate's argument, you are attacking the blog itself. This is, of course, a well-known fallacy called "ad hominem."

  89. E.L. writes “Amazing, I had no idea computer programmers were so qualified to make so many claims about climate science.”

    The same can also be said in reverse: "Amazing, I had no idea climate analysts were so qualified to make so many claims about computer science." Or, more specifically, so qualified to write code that seems difficult, if not impossible, to verify when it is used for something as important as climate change.

    I am a software engineer and I make no pretense about knowing much about climate. In reviewing original Fortran code within the cru-code directories (these are Fortran, not IDL) I did find one programming defect in one set of code that I examined. However, each and every time I have mentioned the specifics of such things (which I'm omitting here) in a blog comment, I have specifically noted that this (or other issues I have raised) may or may not be a problem. For example, the defect I found only occurs if the data exceeds a certain range (the range is tested, but the code that follows is incorrect and does not fix the problem). Without the original data, it is entirely possible that this "error correction fix" is never executed, and thus never has any impact on the output.

    It is fair to ask questions about issues that seem odd. But that does not necessarily mean the code or the model is wrong. In much of the code I have looked at, there are essentially zero comments. For example, one Fortran source file (of which there are dozens) has over 1,000 lines of source code and 6 comment lines at the beginning explaining only how to compile the file with the other source file. Without a specification or a test plan, we have no way of knowing what is considered the "correct answer" for the output. (One definition of "quality" is that it "conforms to the specification", which unfortunately leaves open the problem of what quality means if the spec is wrong … but that is another story.)

    Much of what I have seen in the code ranges from defective, to poor programming practice, to data adjustments that seem odd; but even in the case of the defect, I cannot say that it has caused a problem (the code may never have been executed, as the example above explains).

    There are other issues. Was there ever a spec for the code? Why did they implement their own DBMS in many thousands of lines of Fortran (see the cru-code directory) rather than use an off-the-shelf, tested and reliable DBMS? What was their test plan? Test scenarios? Test scripts?

    These are reasonable questions to ask.

    Would I fly in an airplane that was built based upon aerodynamic model simulations having this evident s/w engineering quality? No.

    Climate change is a serious subject. However, some of the code I have seen (again, I've focused on the cru-code directory) gives the impression that the climate analysts did not take their software development as seriously as a serious subject should be treated. They owe it to us to treat the code as seriously as they treat the climate issues, and it is not clear that they have taken their s/w quality seriously. (Read the HARRY_README for further concerns on that topic.)

    But I am trying to be fair and not second-guess the intent of the code (I'm not playing climate analyst), and I am not drawing conclusions, except for the one that the overall impression of software QA is not good. But I also expect that climate analysts will at least respect that there are reasons modern s/w development has advanced beyond hacking.

    To summarize: it's fair to examine the code and ask pertinent questions. I discourage jumping to conclusions, and that applies both to those who say "there's nothing to see here" and to those who are crying "fraud".

  90. Re: the various comments saying that this project was to generate a graphic for a piece of artwork.

    According to Gavin at RealClimate.org:

    "HARRY_read_me.txt. This is a 4 year-long work log of Ian (Harry) Harris who was working to upgrade the documentation, metadata and databases associated with the legacy CRU TS 2.1 product, which is not the same as the HadCRUT data (see Mitchell and Jones, 2003 for details). The CRU TS 3.0 is available now (via ClimateExplorer for instance), and so presumably the database problems got fixed. Anyone who has ever worked on constructing a database from dozens of individual, sometimes contradictory and inconsistently formatted datasets will share his evident frustration with how tedious that can be."

    No institution would assign someone to a project lasting 4 years for a piece of artwork.

    I can certainly understand the tendency of people to minimize what may be potentially damaging information, but try at least to do it in a credible and not so easily disproved way.

    Here’s the link to Gavin’s post:
    http://www.realclimate.org/index.php/archives/2009/11/the-cru-hack-context/

  91. >realclimate.org is a wholly-owned subsidiary of the “hockey team”. One of the leaked emails offers it for use as a propaganda arm.

    Cop-out. You didn’t even try to get more information. Apparently you didn’t even read further down to find out whether the ARTIFICAL correction had been used in this iteration. Was this procedure used in the final product? Apparently not.

    So if this was just part of a work in progress (labeled boldly and conveniently as an "ARTIFICAL correction", maybe so nobody would accidentally use it? I can speculate as well) and never saw the light of day, why is it such a big deal?

    BTW, when I submit a manuscript for publication, do I need to submit all the drafts, showing my various errors, as well?

  92. I think everyone misses the point about the ARTIFICAL CORRECTION.

    The takeaway is this: if 40 years of tree-ring data diverge from the last 100 years of instrumental records, how can you have ANY confidence in tree-ring data as a proxy for temperature?

    How do you know the entire tree-ring record isn't full of such divergences, giving you spurious and inaccurate reconstructed temperatures in the period prior to the instrumental record?

  93. BTW: Note that the line that uses yearlyadj is commented out – the line that replaces it does not use yearlyadj

    That’s quite a trick to get commented out code to cook data….

    Wow. Just… wow. As Eric said in “Ego is for little people”, he’s “unusually capable” of admitting and correcting his own errors. I’m sure we’ll see a correction or retraction here any minute.

    1. >I’m sure we’ll see a correction or retraction here any minute.

      As others have repeatedly pointed out, that code was written to be used for some kind of presentation that was false. The fact that the deceptive parts are commented out now does not change that at all.

      It might get them off the hook if we knew — for certain — that it had never been shown to anyone who didn't know beforehand how the data was cooked and why. But since these people have conveniently lost or destroyed primary datasets and evaded FOIA requests, they don't deserve the benefit of that doubt. We already know, from their own words, that there is a pattern of evasion and probable cause for criminal conspiracy charges.

  94. Were the instruments as sensitive in the 1930s as they were in later decades? I would imagine not.

    I think it would be necessary to apply a correction to the older data to bring it in line with current accuracy levels. I am not claiming they got the correction right (I think that would be impossible, because we simply did not have sensitive enough instruments then), but it seems as if it would be more dishonest to display the data without correction, no? If anything, should we not be looking closely at how they derived their correction algorithm, instead of jumping to conclusions about motives?

    ..Ch:W..

  95. I’m sure we’ll see a correction or retraction here any minute.

    For what? Commented-out code is not being used now, but the fact that it was in there strongly suggests it was used at some point. Even if it wasn't used in the final product, there was a stage at which someone thought a "fudge" was required. We need to know about that. What other algorithms were tried and found wanting? When data-culling was done, what criteria determined that certain samples were unreliable? Was it nothing more than that those samples didn't confirm the hypothesis? We need to know that, too.

    The problem with the AGW folks is that they haven't shown us the raw data and the methods they used to produce their Hockey Sticks. The emails uncovered recently indicate that their response to FOIA requests was to delete the material rather than reveal it. If you don't show your observed data, and all the algorithms (including computer code) used to produce your charts and graphs, so that other researchers can repeat your work and confirm it, you are not doing science.

  96. To all the various folks defending CRU and realclimate.org:

    There’s one very simple way to end all of the arguing – release the raw base data and the methodologies applied to make it useful. Oh, you can’t do that because the data were “accidentally” deleted.

    Neither CRU nor GISS has shown any willingness to allow open review of any of their models – neither the math nor the code. Both have fought tooth and nail to prevent the release of any data or methods.

    And now, it seems, CRU has actively destroyed data and conspired to delete legally-protected communications rather than turn them over to “hostiles” for analysis.

    If this were truly science, they would not take the attitude of “Why should I give you my work when all you want to do is tear it to pieces”; they would happily submit to such tearing as the work would stand on its own.

    However, they know that they’ve been engaged in high-level bunkum for decades, and they know that it will become painfully obvious to everyone if their datasets and models ever see light of day.

    Unless you have a better explanation, that is.

  97. Interesting discussion. I’ve been having a similar thread with someone in my family for the past 15 years.
    Me: Some of the results I’ve read don’t really make much sense. Why don’t they release their data?
    Him: Look, you’re a physician, you can’t really comment on the work.
    Me: With all due respect, you dropped out of college in your second year. If I said all medicine was settled I’d be fired for general stupidity.
    Him: Well, Global warming is real.

  98. I've tracked down the osborn-tree6/briffa_sep98_d.pro file, and here is the code from the point referenced by ESR onward. If you examine the code, it doesn't take a higher degree in computer science to see that this calculation IS NOT ACTUALLY USED (hint: in IDL, a ";" comments out the rest of the line). Why would it be there? Possibly to compare with the real data before producing the final version. I don't know. I have a PhD in computer science, not telepathy.

    Once computed, valadj is used only to compute yearlyadj, which subsequently appears only in commented-out code.

    ESR, I am disappointed in you for lending your name to such a shabby conspiracy theory. When confronted with quote mining, go back to the original.

    ;
    ; Apply a VERY ARTIFICAL correction for decline!!
    ;
    yrloc=[1400,findgen(19)*5.+1904]
    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
    2.6,2.6,2.6]*0.75 ; fudge factor
    if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
    ;
    yearlyadj=interpol(valadj,yrloc,timey)
    ;
    ;filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
    ;oplot,timey,tslow,thick=5,color=20
    ;
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=21
    ;
    oplot,!x.crange,[0.,0.],linestyle=1
    ;
    plot,[0,1],/nodata,xstyle=4,ystyle=4
    ;legend,['Northern Hemisphere April-September instrumental temperature',$
    ; 'Northern Hemisphere MXD',$
    ; 'Northern Hemisphere MXD corrected for decline'],$
    ; colors=[22,21,20],thick=[3,3,3],margin=0.6,spacing=1.5
    legend,['Northern Hemisphere April-September instrumental temperature',$
    'Northern Hemisphere MXD'],$
    colors=[22,21],thick=[3,3],margin=0.6,spacing=1.5
    ;
    end

  99. “But given that CO2’s role as a GHG has been known to physics for roughly 150 years, it’s hard to see how that’s going to be overturned. Or do you think the divergence problem trumps physics?”

    I know man…the buffoons here don't even realize that it's not CO2 that heats the atmosphere so much as the positive feedbacks in the system. CO2 is indeed a trace gas but we have increased it by a whopping 25% already! Soon we will have almost doubled it! Why are you all so stubborn that you can't read how it's right there in the peer-reviewed literature that the sensitivity of the OVERALL SYSTEM to CO2 is much higher than that of CO2 acting alone as a greenhouse gas? *Any* warming is highly amplified by evaporation of water, which has a massive greenhouse effect. And because *any* warming is so amplified, we know full well that it could never have been this warm in human history before, or we would have died out, since you can't HAVE just slight warming above a critical level but only catastrophic warming. If the so-called Medieval Warm Period was hotter than today then these feedbacks would have fried the earth and killed the Polar Bears. But Polar Bears are still around, right? That says it was never hotter than today!

    "I'm also curious how everyone seems to believe that climate scientists are hiding all the data. Climate scientists must be conspiring to defraud the public if they are unable to talk commercial weather services into providing their source of revenue to the public for free. Of course, all of these organizations funded by the fossil fuel industry could pay the commercial entities and get their own access to the data, but perish the thought. Of course, people could get public data sets and models….

    http://tamino.wordpress.com/climate-data-links/

    Ah, our great hero Tamino, who appears in EIGHTEEN of the released e-mails. The guy who moderates away posts that prove beyond a shadow of a doubt that he FAKES THE GRAPHS ON HIS SITE?

    I was shocked by what I found. Here is an example that I audited. He tries to explain away the fact that the oldest temperature record in existence shows no AGW signal at all:

    http://camirror.wordpress.com/2009/11/26/the-trick/#more-62

    Graphing the actual data got my attention in a most disturbing way:

    http://i45.tinypic.com/fwknyh.jpg

    So I made a nice chart out of the real data to post far and wide:

    http://i45.tinypic.com/iwq8a1.jpg

  101. ESR writes: "As others have repeatedly pointed out, that code was written to be used for some kind of presentation …"

    A presentation which, you know for a fact, was actually made, to some audience, using graphs generated from this code when the part in question was not commented out, and with intent to deceive?

    You know, when I’m writing code, and need to stub something or fake something for purposes of testing something else, I often write a quick comment to that effect. Since the memo-to-self is usually quick and dirty, accuracy in spelling falls by the wayside, especially since I’m not a shouter who writes in all caps, all the time, so I can’t necessarily type as accurately with my pinky riding on the shift key. (Not that the shouters tend toward pristine mechanics, exactly.) And that’s what this comment looks like to me.

    I was once unwittingly roped into a rigged demo that helped net $5M in first-round financing. I doubt that, had I been complicit, I would have left some comment in my code to that effect. I wouldn't have written a comment like "This is only for teh RGGED DEMO". You see, there's this legal recourse stage called "discovery", for one thing ….

    Oh, I know, I know: the climate cabal had become so self-confident they’d started thinking they’d never get caught! And that they were such a tight crowd that nobody who saw their code would ever become perturbed about some “blatant data-cooking” (oxymoron, much?) and leak it. There’s always some rationalization to reinforce confirmation bias. And as one rationalization after another gets debunked openly, what will be left? The ones that can’t be proven, like “It might get them off the hook if we knew — for certain — that it had never been shown to anyone who didn’t know beforehand how the data was cooked and why.”

    Yeah. And it might get Obama off the hook about those wife-beating charges if somebody could turn up a set of secretly planted, continuously running video cameras in all of the houses he and Michelle had ever lived in, and could produce complete videos from their entire time of residence, showing where both of them had been at every moment, with no evidence (or even possibility) of tampering. Of course, if it turned out that the Obamas’ mutual fund holdings happened to contain stock in companies that owned subsidiaries that made the video equipment — well, need I say more? Why, someone should file charges of criminal conspiracy, right this minute! My intuition never fails me in such matters!

    1. >A presentation which, you know for a fact, was actually made, to some audience, using graphs generated from this code when the part in question was not commented out, and with intent to deceive?

      We know the hockey team has conspired to evade FOIA because that conspiracy is described in the emails. We know this is part of a larger pattern of evading critical scrutiny by Steve McIntyre and others because they talk about that larger pattern too. Because we know they have violated proper standards of scientific integrity in these specific and relevant ways, they are not entitled to a presumption of innocence on any related issues.

  102. Dear WarmEarther,

    Please read my earlier response to the standard positive feedbacks and CO2 laboratory physics argument. As for your polar bear argument, it’s wonderfully circular.

  103. Eric, you’ve already accused them of “blatant data-cooking” based on a rather perfunctory code review, without any evidence that the commented-out portion of code was ever part of any deception — AND you haven’t owned up to that error. So, to judge you now by the standards you wish to apply to them, you must be trying to conceal your “error”, and therefore you’re not entitled to a presumption of innocence on any other “errors” you might have made.

    Do you want to be treated like you’re a fair judge on these matters? Then at least admit you leveled charges of criminal conspiracy without performing due diligence on the available evidence.

    You might also, for the sake of credibility, try to show us some daylight between your position and that of, say, James Inhofe, who seems to believe (or at least finds it politically convenient to charge) that pretty much the entire climate science community is in some global conspiracy (or, if you prefer, "chiliastic" religious crusade) to make us all poor and stupid. Can it be that the rest of the scientific community (almost all of which views AGW as well-supported) hasn't picked up on this festering corruption within their rationalist ranks, despite their considerable interdisciplinary overlap (and therefore their significant social and professional interactions) with climate scientists? How credible is that? What is AGW in your view — just some kind of highly virulent meme, to which scientists are particularly susceptible, but to which you have miraculous psychological antibodies? Are you now basically right on all related AGW issues, no matter how many technical errors you make, no matter how ignorant you might be of details of how the research is actually conducted?

    Well, I will continue to cite you as one of the wiser heads (and pithier wits) on matters of open source development. After all, you are. But it’s come to this? Very sad.

    1. >Eric, you’ve already accused them of “blatant data-cooking” based on a rather perfunctory code review,

      The code says what it says. It’s not my job to produce evidence that they disseminated it; given their willingness to destroy primary datasets, their bullying of skeptics, and their contempt for basic standards of scientific conduct, we’d be idiots to give them the benefit of the doubt about it.

      >Then at least admit you leveled charges of criminal conspiracy without performing due diligence on the available evidence.

      Look, try to keep up, will you? The accusation of criminal conspiracy isn't based on this code; it's based on their planning to destroy primary datasets rather than allow them to fall into the hands of skeptics wielding FOIA requests. That is a criminal violation of both the U.S. and U.K. versions of FOIA. It's a separate issue from what the code demonstrates about their willingness to cook data, and it's a reason they shouldn't get the benefit of the doubt about how the code was disseminated.

      >You might also, for the sake of credibility, try to show us some daylight between your position and that of, say, James Inhofe

      OK. If Inhofe actually believes that the entire scientific community is embroiled in a monolithic AGW conspiracy, he's an idiot; I agree with that. What I believe is actually going on is a lot more complicated and ambiguous than that. There are a lot of players in this dance. I'll round up a few….

      First, the scientists. Most are caught up in, or struggling against, an error cascade of humongous proportions. What's an error cascade? Somebody gave one of the type examples upthread, over the mass of the electron. This is not conspiracy; it's a result of a tendency to use seniority or authority as a shortcut when it's technically difficult to evaluate evidence and socially difficult to be skeptical. All humans do this, even scientists.

      Next, the Gaianists – a term I made up for people in whom "Save the Earth!" has psychologically substituted for traditional religion (in more or less chiliastic forms). They mean well, they really do; they recycle as an act of virtue, they worry about composting and buying local produce — and they're totally subject to being manipulated by the other players, which is important since most of the action is going on in democracies. They're not usually manipulated directly by the scientists, except occasionally a very wealthy one (er, think dot.com millionaire) might get hit up for funding. The Gaianists aren't a conspiracy; they're not organized enough. There's some overlap with the scientists at the non-chiliastic end of this group.

      Next, the green-shirts. These are political hacks of all varieties who just love the ideas of more carbon taxes, more regulation, and the general expansion of state power, especially if they can posture as virtuous eco-saviors while they’re arranging this. They’re not a conspiracy either, just a bunch of careerists who compete for the Gaianists as a voting bloc. They sometimes behave a bit like a conspiracy, but only because their behavioral incentives tend to push them all in the same direction. Er, they’re not scientists. They’re Al Gore, or they’d like to be, only with political power too.

      Any conspiracies in sight? Yes, actually…

      Conspiracy #1: Most of the environmental movement is composed of innocent Gaianists, but not all of it. There’s a hard core that’s sort of a zombie remnant of Soviet psyops. Their goals are political: trash capitalism, resurrect socialism from the dustbin of history. They’re actually more like what I have elsewhere called a prospiracy, having lost their proper conspiratorial armature when KGB Department V folded up in 1992. There aren’t a lot of them, but they’re very, very good at co-opting others and they drive the Gaianists like sheep. I don’t think there’s significant overlap with the scientists here; the zombies are concentrated in universities, all right, but mostly in the humanities and grievance-studies departments.

      Conspiracy #2: The hockey team itself. Read the emails. Small, tight-knit, cooperating through covert channels, very focused on destroying its enemies, using false fronts like realclimate.org. There’s your classic conspiracy profile.

      My model of what's been going on is basically this: The hockey team starts an error cascade that sweeps up a lot of scientists. The AGW meme awakens chiliastic emotional responses in a lot of Gaianists. The zombies and the green-shirts grab onto that quasi-religious wave as a political stratagem (the difference is that the zombies actively want to trash capitalism, while the green-shirts just want to hobble and milk it). Pro-AGW scientists get more funding from the green-shirts within governments, which reinforces the error cascade — it's easier not to question when your grant money would be at risk for doing so. After a few times around this cycle, the hockey team notices it's riding a tiger and starts on the criminal-conspiracy stuff so it will never have to risk getting off.

      Overall, is this conspiracy? No. Mostly it’s just people responding to short-term incentives, unaware that they’re caught up in an error cascade and/or being politically fucked around. Nobody involved is what you could reasonably call evil…well, except for the zombies. It would be pretty evil if the hockey team had planned all this, but I’m not cynical enough to believe that. Not yet, anyway, but I haven’t read all the emails either.

      Back to Inhofe: he wants us to think the zombies did it (which is half the reason I included them in the taxonomy) but I don’t buy that either. They’ve certainly had a major contributing role in the feedback loop, but they don’t run the scientists (I don’t think and certainly hope not) and weren’t responsible for the error cascade.

      Is that enough daylight for you?

      And as for “miraculous psychological antibodies”: I know what junk science looks and smells like, having seen and correctly diagnosed several previous outbreaks. And I have one quality which you may interpret as virtue or not, however you wish: contrarian stubbornness. When I see that the emperor has no clothes, it is close to psychologically impossible for me not to yell “Naked!” at the top of my lungs.

  104. Oh, and as for the FOIA requests, at least insofar as they relate to the data — much of the data in question was bought from sources under NDA, the release of which would put them in conflict with their contractual obligations. IIRC, one Moberg did a reconstruction based on some FTP-available data that shouldn’t have been FTP-able, and that’s mentioned in at least one of the e-mails about clamping down on leaks.

    As has been pointed out already, if people want to go after this so-called "hockey team" about this data that should supposedly be in the public domain, why do they bother with FOIA? Couldn't they just buy the same data from the same sources? It's not like the fossil fuel industry can't afford it — and just think what you can do with cloud computing these days, they don't need no stinkin' federal lab supercomputers! It's an industry that commands considerable numerical (and, perforce, environmental) expertise. I'm sure they could do it. Maybe better than it's already being done.

    In fact, the fossil fuel and auto industries could have gone that route: they could have financed and open-sourced all of their own home-grown modeling code, and they could have made royalty deals with data sources to make all of the data available to the public. The big problem with that scenario, however, is genuine uncertainty about the outcome.

    Precisely BECAUSE they'd be under perpetual suspicion of conflict of interest, they'd have to be very open and very scrupulous and very arm's-length about the whole effort. And, when all the serious bugs in the code and uncertainties in the data were resolved, their models would probably say much the same thing as the models coming out of government labs and academia. You can imagine how that scenario would play in the boardroom: "All that R&D consortium money for no product, and maybe it just turns into a big gun pointed back at our own snoots? Where's the percentage in that? Why bother?"

    But the issue still has to be managed, in political terms and in PR terms, and for the holdouts (who is that, now? Down to Exxon-Mobil and some coal companies, last I checked), it’s actually much cheaper to just buy some scientists to say conveniently confusing things, to complement the handful of gadflies, contrarians and curmudgeons — who are also a reliable phenomenon in any profession.

  105. I missed this – what presentation? Where was the output, with the ARTIFICAL correction, used?

  106. OK [Deech56|Michael Turner|Philip Machanick], we have these people admitting to trying to find ways to evade FOIA and agreeing to destroy communications if they are unsuccessful at that. We have these people cooking the data because reality doesn’t match their models. And you want to handwave all that away because one of their “adjustments” is commented out.

    Would you be so willing to bend over for the Bush administration on Iraq intel?

    Didn’t think so.

    AGW is a religion. You are acolytes, defending your prophets. These e-mails show them engaging in defense of holy writ, the excommunication of heretics, and the shunning of the insufficiently Godly.

    This is not science as any scientist would recognize it. If the guys who first got net heat output from their (flawed) cold fusion experiments had behaved as the warmists are, we’d still be chasing phantoms trying to figure out where the energy is.

  107. Philip Machanick says:
    I’ve tracked down the osborn-tree6/briffa_sep98_d.pro file, and here is the code from the point referenced by ESR down.

    OK, I know it's a little annoying because this thread has gotten so long, but please at least skim the rest of the thread before you post and blurt things out. The only thing in your post that I hadn't already posted two days ago was the "disappointed in you" paternal routine.

    Seriously… it’s like the 11th post.

  108. @esr: That's about how I see the whole mess, as someone who more or less loosely falls into the non-chiliastic end of the 'Gaianists' category. The only thing I'd add from my perspective is that there is a small but growing awareness in that category that global warming is something of a distraction from the real problems of growing pollution, dwindling biodiversity and rapid deforestation.

  109. ESR – interesting diatribe, but this type of viewpoint could lead to confirmation bias. So what’s the beginning of the “error cascade”? That CO2 is a greenhouse gas? That’s been known since the 1800s. That the earth is getting warmer? There are many independent lines of evidence for that. That the climate sensitivity is about 3 degrees K for a doubling of CO2? Again – lots of independent confirmation.

    If you are making an accusation and you think you have a smoking gun (the VERY ARTIFICAL correction), you need to show which scientific papers are affected; you need to find the body. The scientific basis for climate science is in the scientific literature, and if there is a paper that you think is affected by this code, you need to produce it. Without that context (besides a WORLDWIDE GREEN CONSPIRACY), this charge is nothing.

  110. It is sobering (and horrifying) to contemplate the political response to AGW (which has never been rigorously demonstrated): governments around the world are poised to spend hundreds of billions – nay, trillions – to solve a problem that probably doesn't exist.

    One of several dangers presented by the IPCC (as the recent CRU hack indicates) lies in their habitually tendentious recasting of serious political debate as incorrect conclusions about plainly observable facts. Such intellectual corruption preemptively labels all disagreement as uninformed or nefarious and renders democratic processes – and all those who demand them – tiresome and frustrating. The IPCC and others dismiss skeptical views because they fail the test of their philosophical presuppositions (i.e., that AGW must be real). Thus, the IPCC transforms a policy debate into a choice between the light of reason and the darkness of ignorance; it amounts to a heavy-handed dogmatism that inevitably creates a schism between those whose conclusions follow from observations and those who arrogantly condescend to offer their guidance.

    CRU Director Phil Jones stated in one email that he would rather destroy data than comply with a FOIA request. Is that the spirit of free inquiry satisfying a scientifically respectable temperament? The answer is no, and I have trouble with the idea that his position on the matter is anything other than a politically motivated business case.

    The CRU hack has called into question the legitimacy of the entire scientific enterprise, and the scientific community would serve itself well by fully vetting the vested interests in AGW.

  112. “…planning to destroy primary datasets” — where’s that, Eric?

    I see *one* email where *one* dyspeptic researcher expresses a *preference* for erasing *one* file rather than remand it to the custody of amateur climate researchers who he thinks are nothing but a pain in the ass. And even in that case, it’s far from clear that the data he was referring to wasn’t ultimately from other sources, obtained under NDA, so it would amount to little more than civil disobedience to make a loudly public point, not actual data destruction on the sly. In fact, when googling on “primary datasets”, “foia” and other relevant keywords in this debate to narrow the search, I get only other blogs that don’t provide any more substantiation for this charge than you do — including your blog, rather high in pagerank on those particular terms. I guess I just can’t keep up. I’m still puzzled as to how they could destroy “primary datasets” without — OMG — *criminally* hacking into the servers of the people they bought (or otherwise acquired) that data from, and erasing it. Could this “primary dataset” thing be … uh-oh, pot-kettle-black:

    I might be the victim of a "humongous error cascade" as you call it, and if so, I'll be pretty pissed, because I love the smell of gasoline in the morning. No, I'm not joking here. I've looked into biofuel performance and characteristics and it just left me all the more impressed with gasoline, a truly great invention. I think coal is wonderful because there's so damn much of it, it's so cheap, and it can probably be burned even more cleanly, with the possibility of getting uranium and heavy metals out of the fly ash for their industrial (and, in the case of uranium, potential energy) value. Yeah, maybe I'm just hoodwinked. But so far, I don't think so. What I mainly see is a pattern of distortion, error, grandstanding, and, let's face it, bribe-taking, on the part of skeptics (many of them amateurs, or third-rank researchers long out to pasture) that strikes me as far more troubling.

    “The hockey team itself. Read the emails. Small, tight-knit, cooperating through covert channels, very focused on destroying its enemies, using false fronts like realclimate.org. There’s your classic conspiracy profile.”

    Even if this supposed "hockey team" were somehow everything you say they are, they would still be a small team, when there are tens of thousands of people involved in climate research, at different national research centers and universities around the world, all competing for attention and esteem in their specialties. But, oh, I see it now: all of those people are also caught up in your supposed catch-all of "error cascade" — even if one of them happened to notice serious flaws in the research, they wouldn't see it for what it was, or even be able to think for themselves at all, much less use their relative credibility (compared to, say, Climate Audit) as climate researchers to begin to reverse the cascade. Because that's how brainwashed they are — every single one of them. Without exception. I see.

    Since these e-mails don't quite prove you right, I'm also starting to wonder: is there actually *anything* that could prove you *wrong*? If you're wedded to an unfalsifiable sociological position, isn't it possible that you're *also* caught up in an error cascade, just on the other side of a divide?

    And, um … “green-shirts”? Why is it a Godwin’s Law violation when Gore does it, but not when you do it? I don’t particularly like Al Gore, I don’t like the way he panders to fuzzy-polar-bear-lovers everywhere, and in fact his writings and that “documentary” mostly made my skin crawl. However, a guy who goes out into the private sector and makes (by one estimate) about $200M in industries unrelated to profiteering from any notional “greenshirt” eco-dictatorship doesn’t strike me as a candidate for being particularly motivated to choke the economy with bloated government and over-regulation (or “carbon taxes”, for that matter — last I checked, his preference was for the more market-oriented cap-and-trade.) This sounds to me like the old liberals-and-environmentalists-are-closet-or-unconscious-stalinists canard, and I, for one (as a former Libertarian, now *very* pro-market liberal, who loves economists like Hayek and Vernon Smith for their undeniable brilliance) am sick to death of it.

    I have to admit, though, your picture would make for a very compelling novel with great Hollywood oversimplified-sociology-of-science potential. Oh, wait, Crichton already got there…..

    1. >I see *one* email where *one* dyspeptic researcher expresses a *preference* for erasing *one* file rather than remand it to the custody of amateur climate researchers who he thinks are nothing but a pain in the ass.

      One email is enough. Especially in view of later representations that CRU “accidentally” lost primary datasets.

      As for where the error cascade started, I can identify several critical points; I'll give two examples. One is in dendrochronology. Tree-ring growth is a lousy proxy for temperature because they're more sensitive to rainfall, something I learned more than 30 years ago from following the early research on bristlecone pines. Another is in the atmosphere modeling, which assumes particular feedback loops between CO2 and water vapor that have never been verified and leads to incorrect predictions of tropical-atmosphere temperature profiles. But…the dendrochronologists don't know atmosphere physics, and the atmosphere physicists don't know dendrochronology, so each group compounds its own errors with the assumption that the other guys got it right.

      That’s how error cascades get going.

  113. >One is in dendrochronology. Tree-ring growth is a lousy proxy for temperature because they're more sensitive to rainfall, something I learned more than 30 years ago from following the early research on bristlecone pines.

    So based on something you learned 30 years ago, the whole field is suspect? Years ago people were saying dinosaurs couldn’t have feathers, too. For bristlecone pines, you might want to check out this article:

    http://www.pnas.org/content/early/2009/11/13/0903029106.abstract?sid=1c81cc57-d8a5-47ac-9652-9664d86f01cf

    The authors find a strong correlation between temperature and growth for the upper treeline trees.

    Of course, you knew that CRU studies the records for trees in Eurasia, which do show a correlation with temperature as well.

    > Another is in the atmosphere modeling, which assumes particular feedback loops between CO2 and water vapor that have never been verified and leads to incorrect predictions of tropical-atmosphere temperature profiles.

    Actually, they have. Climate sensitivity is based primarily on observation:

    http://www.iac.ethz.ch/people/knuttir/papers/knutti08natgeo.pdf

    The climate models have been predictive of trends over the last 20 years as well as the responses (such things as temperature, humidity and circulation) following the eruption of Mt. Pinatubo.

    But somehow you expect us to dismiss hundreds, even thousands, of scientific papers.

    Also, I would still like to know which papers were affected by the code that you are criticizing. Heck, just pointing out where the results were used would do for now.

    1. >The authors find a strong correlation between temperature and growth for the upper treeline trees.

      We get no data on precipitation, and no actual description of the “no such clear relationship”. This is a handwave. Supposing they’d been better about this, I’d still need to see a principled argument why the lack of correlation justifies ignoring rainfall-variability issues at other sites…say, sites that aren’t, you know, deserts.

      >Actually, they have. Climate sensitivity is based primarily on observation:

      Apparently you failed to digest even the abstract of the article: “The quest to determine climate sensitivity has now been going on for decades, with disturbingly little progress in narrowing the large uncertainty range.”, let alone negative statements in the body like “The observed global warming provides only a weak constraint on climate sensitivity”.

      Or the really fun bit: “The most likely values (circles), likely (bars, more than 66% probability) and very likely (lines, more than 90% probability) ranges are subjective estimates by the authors”

      Well, at least they’re honest about it. Puts them way ahead of the hockey team.

  114. Any conspiracies in sight? Yes, actually…

    You’ve missed a few, e.g.:

    Conspiracy #3: The “Doubt is our Product” conspiracy, and its zombie remnants.

    Scientists discover that smoking causes cancer. Big Tobacco sets up a ‘meme-warfare’ PR apparatus to discredit the science. They attempt to muddy the waters on cancer research, and they try to discredit the WHO by spreading the DDT ‘ban’ myth.

    When Big Oil and Big Coal needed to delay action on climate, they found a fully-formed anti-science apparatus ready to use.

    1. >When Big Oil and Big Coal needed to delay action on climate, they found a fully-formed anti-science apparatus ready to use.

      It may surprise you to learn that I agree with you about this. But one of the nasty things about practical politics is knowing that not all of the villains and idiots are on the other side.

      Even members of the “anti-science apparatus”, like stopped clocks, can be right twice a day. It’s a shame, and makes me unhappy, but there it is.

  115. But…the dendrochronologists don’t know atmosphere physics, and the atmosphere physicists don’t know dendrochronology, so each group compounds its own errors with the assumption that the other guys got it right.

    You’re assuming a tight coupling between climate modelling and paleoclimate that simply doesn’t exist.

  116. Here’s an interesting note from mail 1164120712:

    The main one is an ambiguity in the nature and consistency of their sensitivity to temperature variations. It was widely believed some 2-3 decades ago, that high-elevation trees were PREDOMINANTLY responding to temperature and low elevation ones to available water supply (not always related in a simple way to measured precipitation). However, response functions (ie sets of regression coefficients on monthly mean temperature and precipitation data derived using principal components regression applied to the tree-ring data) have always shown quite weak and temporally unstable associations between chronology and climate variations (for the high-elevations trees at least). The trouble is that these results are dominated by inter-annual (ie high-frequency) variations and apparent instability in the relationships is exacerbated by the shortness of the instrumental records that restrict analyses to short periods, and the large separation of the climate station records from the sites of the trees. Limited comparisons between tree-ring density data (which seem to display less ambiguous responses) imply that there is a reasonable decadal time scale association and so indicate a real temperature signal, on this time scale. The bottom line though is that these trees likely represent a mixed temperature and moisture-supply response that might vary on longer timescales.

    The discussion is further complicated by the fact that the first PC of “Western US” trees used in the Mann et al. analyses is derived from a mixture of species (not just Bristlecones) and they are quite varied in their characteristics, time span, and effective variance spectra. Many show low interannual variance and a long-term declining trend, up until about 1850, when the Bristlecones (and others) show the remarkable increasing trend up until the end of the record. The earlier negative trend could be (partly or more significantly) a consequence of the LACK of detrending to allow for age effects in the measurements (ie standardisation) – the very early sections of relative high growth were removed in their analysis, but no explicit standardistion of the data was made to account for remaining slow width changes resulting from tree aging. This is also related to the “strip bark” problem, as these types of trees will have unpredictable trends as a consequence of aging and depending on the precise nature of each tree’s structure.

    Another serious issue to be considered relates to the fact that the PC1 time series in the Mann et al. analysis was adjusted to reduce the positive slope in the last 150 years (on the assumption – following an earlier paper by Lamarche et al. – that this incressing growth was evidence of carbon dioxide fertilization), by differencing the data from another record produced by other workers in northern Alaska and Canada (which incidentally was standardised in a totally different way). This last adjustment obviously will have a large influence on the quantification of the link between these Western US trees and N.Hemisphere temperatures. At this point, it is fair to say that this adjustment was arbitrary and the link between Bristlecone pine growth and CO2 is, at the very least, arguable. Note that at least one author (Lisa Gaumlich) has stated that the recent growth of these trees could be temperature driven and not evidence of CO2 fertilisation.

    The point of this message is to show that that this issue is complex, and I still believe the “Western US” series and its interpretation in terms of Hemispheric mean temperature is perhaps a “Pandora’s box” that we might open at our peril!

  117. Eric: The plot of the fudge factors you provide on this post seems to me to be one of those historical pictures that are worth a thousand words. Do I have your permission to upload said plot to Wikipedia?

    ESR says: Yes, you do.

  118. I’d like to propose that we name the following the “Mann Correction Vector.” I have found that it has near universal applicability in correcting flawed data. Truly this is one of the most exciting scientific discoveries of our age and naming it thus is a fitting tribute to its discoverer.

    MCV ={0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,- 0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6, 2.6,2.6,2.6}*0.75
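    For anyone who wants to see the shape this vector takes when it is spread across the years, here is a minimal IDL sketch; yrloc and valadj are copied from the quoted file, while the yearly axis below is a made-up stand-in for timey (the real timey is restored from an .idlsave file we don’t have):

    ; Sketch only: interpolate the adjustment vector onto a yearly axis and plot it.
    yrloc  = [1400, findgen(19)*5. + 1904]
    valadj = [0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
              2.6,2.6,2.6]*0.75
    years  = 1400 + findgen(595)                 ; hypothetical axis, 1400..1994
    yearlyadj = interpol(valadj, yrloc, years)   ; linear interpolation, as in the file
    plot, years, yearlyadj, xtitle='Year', ytitle='Adjustment'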

  119. @Morgan Greywolf

    The take home message from my post is that the code being looked at does not provide evidence of the claim that the data was being cooked. If you are willing to assume bad faith behavior that is not present in the code, then you can still reach the conclusion you’ve decided upon. But if you are willing to start from that assumption, then you don’t really need any evidence, at all.

    Hypothetical code that isn’t present could take numerous forms. In fact, they could have deleted the damning cook_climate_data.pro, in the same way that they could have uncommented the section to use the correction but not uncommented the section that labels the correction, or used tree-ring data from 50 years ago in their reconstruction of contemporary temperature measurements. In fact, we don’t know if they actually just used randomn() to generate all their publication plots. These events could have happened, but you have no evidence that they did, just a conspiracy theory built upon the assumption of bad faith, which is exactly what you brought to the discussion in the first place.

    Also, I don’t know if you have any experience with actually programming, but in my experience, I spend plenty of time writing throwaway programs to test out different ideas before I’m willing to program up something more serious. The fact that one file out of thousands includes an unused correction that may or may not be relevant is not surprising. All sorts of silly stuff shows up in the scratch programs I’ve seen.

  120. p.s. We don’t have any idea what filter_cru does; we can guess, but again, this is assumption, not evidence. If the original data were, say, the magnitude of polar geographic coordinates* in km, then the correction would represent less than one part in a thousand.

    *altitude measured from center of the earth, a fairly common measurement used in geophysics, with a nominal value of 6374 km. The particular value depends on where you measure from, because the height of the earth’s surface varies relative to mean, and because the earth is actually an oblate spheroid, not a true sphere.

  121. Also, I don’t know if you have any experience with actually programming, but in my experience, I spend plenty of time writing throwaway programs to test out different ideas before I’m willing to program up something more serious. The fact that one file out of thousands includes an unused correction that may or may not be relevant is not surprising. All sorts of silly stuff shows up in the scratch programs I’ve seen.

    Tons. I’ve written several major N-tier applications for Fortune 500 companies, I’ve done performance and scalability testing (and tuning) of various major N-tier enterprise apps for Fortune 500 companies, and I’ve led technical teams that have adapted, customized, implemented and integrated a major application suite for various Fortune 500 companies.

    I’ve written plenty of scratch code. And you know what? If I leave commented-out code in a program, I leave it in there for a reason. Usually because I need to be able to refer back to it later or uncomment it for a particular run. Otherwise, I region highlight it and ^W it away.

  122. ESR: Something else earlier brought up that you haven’t addressed: what would falsify your position? What new discovery would it take to make you change your mind, about either the legitimacy of the current ‘mainstream’ view of AGW or the culpability of the CRU people?

    1. >ESR: Something else earlier brought up that you haven’t addressed: what would falsify your position? What new discovery would it take to make you change your mind, about either the legitimacy of the current ‘mainstream’ view of AGW or the culpability of the CRU people?

      To convince me AGW is real, I’d need to see an AGW model that meets the scientific standards for retrodiction, transparency, and verification by third parties. I’d need all the primary data to be open source. I’d need the data reduction code to be open source. I’d need all the decisions about “correcting” the data to be documented and justified. The model would have to not contain bugger factors – model coefficients that cannot be either computed from first principles nor justified by experiment. Prediction is not yet effectively testable, but I’d at least need to see good retrodiction against un-“corrected” historical climate data.

      The CRU people, at this point, would have to have a pretty damn convincing story about how their primary datasets got “lost” before I’d believe another word from them.

  123. esr: “One email is enough.”

    Gee, it doesn’t even take a crime being committed; all it takes is for someone in this crowd to make a *comment* (almost certainly in anger), and — suddenly, they are *all* guilty until proven innocent. Under this system of justice, I think I’d be in jail for “planning” to have Donald Rumsfeld hog-tied and thrown out of a plane over Waziristan. (Lucky for him, I guess, that the Bush administration finally put the Iraq war under adult supervision, which calmed me down considerably.)

    “Especially in view of later representations that CRU “accidentally” lost primary datasets.”

    If what you mean by “primary dataset” (see my problems with that term, above) is the raw station data, that data came from various nations and researchers *not* at CRU. Therefore, it’s only really lost if those nations and researchers have lost it — a state of affairs for which CRU people could hardly be considered culpable unless, perhaps, they actually ordered those nations and researchers to delete it all, at the behest of George Soros and the International Jewish Banking Conspiracy to Make Us All Poor and Stupid .

    So what it seems they did was fumble and/or throw out their *copies* of that data. Back in the 1980s, when (as I remember anyway) storage actually *was* at a premium, networking was often done via sneakernet, if at all, and (for that matter) research into global warming wasn’t nearly as well-funded as it is today. (Can you guess one of the most common errors in making a backup tape? Yep: zorching old backups. I did my time in the tape-monkey salt mines, so I know from whip-scars.)

    Now, I know it’s hard for you, Eric, but try to imagine that they really are blameless for no longer having their own *copies* of what I think you mean by “primary datasets” — they were and are, after all, scientists first and IT managers perhaps hardly at all. Under the doctrine of innocent until proven guilty, this is, after all, something a prosecutor has to imagine — for purposes of refutation, at the very least. He can’t just assume guilt. All the scare quotes in the world around “accidental” won’t make it something other than an accident, if that’s what it amounted to.

  124. “If I leave commented-out code in a program, I leave it in there for a reason. Usually because I need to be able to refer back to it later or uncomment it for a particular run. Otherwise, I region highlight it and ^W it away.”

    Which makes you either more fastidious or trigger-happy than about 80% of the coders out there, from the code I’ve seen in my life. I spent way too many years hacking in university research labs, though, which tends to skew my rough sense of the proportion. That experience is, however, quite germane in this case. Maybe in the N-tier Fortune-500 look-ma-I-wrote-a-whole-ERP-package-singlehandedly world, programmers with such superhuman capabilities can keep their code squeaky clean in their sleep. I wouldn’t know. I was only on the bailing-wire/chewing-gum interface side of that world, and only very briefly — having left, in part, because the code was even uglier and more trash-strewn than in the research world, which I hadn’t thought possible.

  125. They can somehow deceive the whole scientific and political community, but they can’t even get the most basic things on their agenda (less coal power, etc.) to come true despite enormous international meetings. How is that even remotely logical?

    This is all a global conspiracy because … the green party rules the world, in secret?

    To convince me AGW is real, I’d need to see an AGW model that meets the scientific standards for retrodiction, transparency, and verification by third parties. I’d need all the primary data to be open source. I’d need the data reduction code to be open source. I’d need all the decisions about “correcting” the data to be documented and justified.

    Since there’s no evidence of this quality that AGW isn’t real, can I assume that you’re agnostic on the matter?

    The model would have to not contain bugger factors – model coefficients that cannot be either computed from first principles nor justified by experiment.

    The problem with using an epithet like “bugger factor” is that it’s difficult to know wtf you’re talking about. Do you mean parameterisation?

    Prediction is not yet effectively testable, but I’d at least need to see good retrodiction against un-“corrected” historical climate data.

    The part about “uncorrected” is just silly. You can’t get good climate information straight from the raw data. Weather stations aren’t deployed with climate science in mind, and have all sorts of systematic errors. Satellite data needs to be adjusted for orbital behaviour.

  127. Michael Turner:
    If what you mean by “primary dataset” (see my problems with that term, above) is the raw station data, that data came from various nations and researchers *not* at CRU. Therefore, it’s only really lost if those nations and researchers have lost it — a state of affairs for which CRU people could hardly be considered culpable unless, perhaps, they actually ordered those nations and researchers to delete it all, at the behest of George Soros and the International Jewish Banking Conspiracy to Make Us All Poor and Stupid .

    So what it seems they did was fumble and/or throw out their *copies* of that data. Back in the 1980s, when (as I remember anyway) storage actually *was* at a premium, networking was often done via sneakernet, if at all, and (for that matter) research into global warming wasn’t nearly as well-funded as it is today. (Can you guess one of the most common errors in making a backup tape? Yep: zorching old backups. I did my time in the tape-monkey salt mines, so I know from whip-scars.)

    One of the FOIA requests was for the station data, that is, the list of weather stations that CRU used in its gridded weather series. CRU refused to release it.

    There was a request for a list of those countries or organizations that had imposed NDA restrictions on data. CRU refused to release it.

    There was a request for the NDAs. CRU replied that they had lost them, but managed to release two documents which didn’t actually restrict further dissemination.

  128. I did upload your plot to Wikipedia and incorporate it into the Climategate article (“Climatic Research Unit e-mail hacking incident”), but it was quickly taken down for reasons that I found basically valid (that the plot lacked context – what was being plotted?) I decided that a smaller version of the same plot with a more encyclopedia-style title and labeled axes might pass muster, especially if I added a line or two of context to the article.

    So, I had to make my own version of your plot. You can see it here: http://tiny.cc/cru_fudge

    BUT, while I was doing that, the Wikipedia article became “locked” pending disputes. Natch.

    So that’s all I’m going to do for now. I still think this plot is a powerful argument in the debate over the meaning of the CRU files. I would compare its potential impact to that of Charles Johnson’s animated gif during the “Rathergate” issue. (BTW, I was the one who originally uploaded that animated gif to Wikipedia back in January 2005.)

  129. @Michael Turner.
    You say, “Under the doctrine of innocent until proven guilty, this is, after all, something a prosecutor has to imagine — for purposes of refutation, at the very least”

    Please avoid making sweeping statements outside your area of expertise – it merely hinders intelligent debate. Mr Turner, your comments show me, as an advocate with a decade’s worth of experience working in New York courts, that you have no idea of the rules regarding destruction of evidence. Those of us who practice law call the loss or destruction of evidence subject to a pending FOIL discovery request ‘intentional spoliation.’ Please go Wiki up the term and learn that there is no such thing as ‘innocent until proven guilty’ once a government-funded organisation has ‘lost’ data contrary to data retention policy as well as a formal disclosure demand. Every government agency I’ve come across has a strict 6-year document retention policy – failure to uphold it is ALWAYS severely punished if the agency accidentally or deliberately stymies the litigation process.

    Here in New York (where writs have just been served against NASA GISS) litigation is a smoothly oiled machine. Any half-competent attorney will crucify these spoliators with a summary judgment motion once they cannot answer discovery demands in full (i.e. data sets ‘lost’). There are two classes of spoliation that apply here: (a) intentional spoliation, which, once proven, always results in a victory for the opposing party; (b) unintentional spoliation, whereby if the court feels the offending party merely lost the data, an ‘adverse inference’ is formally granted, i.e. what is lost is presumed to have been adverse to the party who is responsible for the loss of such evidence.

    In all the years I’ve studied and practised law I have never seen any party win a case who has destroyed or ‘lost’ evidence after a FOIL request has been served. From what I’ve read of the McIntyre saga, where for 3 years he was frustrated in his FOIL requests, no court will baulk at severely sanctioning the likes of CRU and NASA GISS.

    I’m amazed at your naivety in thinking that losing or destroying evidence somehow still gives any litigant the benefit of the doubt. Quite the contrary: the burden is shifted to the spoliator, and NEVER have I seen any spoliator win from that position. It would be the shock of the century if the ‘hockey team’ weren’t fatally penalized for such a ‘foul.’

  130. So you think blatant evidence of data cooking is a line of code for plotting a graph that is explicitly commented

    Apply a VERY ARTIFICAL correction for decline!!

    Great catch, Sherlock.

  131. The problem with using an epithet like “bugger factor” is that it’s difficult to know wtf you’re talking about. Do you mean parameterisation?

    It is pretty clear what he meant by bugger factor: model coefficients, more specifically model coefficients that cannot be either computed from first principles nor justified by experiment.

    i.e. unsubstantiated fudge factors, or kluges.

    Reading through this thread has been an object lesson in the lengths to which the spin brigades will go. To them it’s just an exercise in poo-flinging to see what might stick.

  132. ThomasD: Having programmed marine models using data gathered by PhDs, applying their weights and margins of error, I can say with at least some experience that no respectable programmer in the modeling and analysis world would program in hard-coded data or variants unless you always, always want the same smooth results for whatever reason. You are in deep denial here. This code looks about as rigged as humanly possible. Programmers seldom lie in their code comments because they are the only people who generally see them. The scientists present the graphs and sometimes the actual data with a paragraph explaining the formulas used in the model. No one gets the code – usually. Smart programmers don’t use any comments at all or the bare minimum in a language like LISP or APL for readability. And I know WTF I am talking about.

  133. According to this (http://www.jgc.org/blog/2009/11/about-that-cru-hack.html), the fudge factor is not used in the code. I don’t know IDL and have not confirmed this with my own personal analysis.

    If the fudge factor is not used and there is no smoking gun, the conspiracy-minded now have to start over…again! So, we just keep poking until we find the data that fits our desired results? It seems to me the denialists are playing their own game of pick-and-choose.

    I believe global warming is real, and the risk-vs.-reward payoff of doing nothing just doesn’t make any sense. If we do nothing and global warming is real, we are in for quite a nasty, brutish future. If we do nothing and global warming is not real, then we got lucky. If we do something and global warming is real, perhaps we make our children’s lives less brutish for a somewhat painful cost now. If we do something and global warming is not real, then we paid a somewhat painful cost but also got lucky.

    So, do you want to take the risk? Or pay the cost? I think that’s the real question. If you’re willing to take the risk for all of humanity (and the biosphere), then clearly you have different priorities than me.

    Perhaps the best way to think of humanity is as an extinction event.

  134. BTW, I also believe that all the data and models that are being used to shape policy HAVE to be open source. Nothing else makes sense. Then we can get past these kinds of debates where the sincerity and honesty of one side is being challenged because of internal politics, statements, and games. The real issue that needs to be discussed and debated is global warming and what to do about it.

    If the data and models were open source I think the debate would become much more real, and if the data and models support global warming (which I do personally believe to be the case) then we can do something to address it. If not, then we can continue the debates.

  135. The code is an earlier draft. The artificial correction is a placeholder for the proper correction in these later files:

    briffa_sep98_decline1.pro
    briffa_sep98_decline2.pro

    The code appears to be for a paper in Nature about the influence of volcanoes on temperature. In the paper (see Figure 1) the dendro reconstruction is plotted without a divergence correction, artificial or otherwise:

    @article{briffa1998influence,
    title={{Influence of volcanic eruptions on Northern Hemisphere summer temperature over the past 600 years}},
    author={Briffa, KR and Jones, PD and Schweingruber, FH and Osborn, TJ},
    journal={Nature},
    volume={393},
    number={6684},
    pages={450–455},
    year={1998},
    publisher={Nature Publishing Group}
    }

  136. BTW: take another look at that code snippet:

    ;filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
    ;oplot,timey,tslow,thick=5,color=20
    ;
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=21

    Notice the different colours. This code appears to originally be intended to plot the data with and without the corrections, probably on the same graph. That’s not something you’d do if you wanted to hide the fact you were using an artificial correction.

    Andy: this code is not all from the same project. HARRY_read_me.txt is about the CRU temperature data, yes, but the “hide the decline” code appears to be from something else entirely. Besides, the “hide the decline” code all deals with post-1960 tree trunk cores from high northern latitudes, and that isn’t generally used as a source of global temperature data since it’s known to be problematic. (It diverges from all the other temperature measurements.)

  137. BOR: “smart programmers don’t use any comments at all or the bare minimum in a language like LISP or APL for readability”

    You didn’t just say that…

    Also, THEY AREN’T “SMART PROGRAMMERS”, they are smart scientists. They leave comments.

  138. makomk writes: “This code appears to originally be intended to plot the data with and without the corrections, probably on the same graph.”

    I disagree. This is clearly an either-or situation. If one code is commented in, the other must be commented out. The artifice may have been used as a temporary stand-in until the programmer gains actual sensory data. However, once he/she obtained the correct data, there is no reason to keep the artifice around, even for historical reasons. In fact, the artifice can create all sorts of misunderstandings. They may have kept it there for nefarious reasons as well. One compilation is used in private while the other is used in public.

  139. >This is clearly an either-or situation. If one code is commented in, the other must be commented out.

    wtf? Try this:


    filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
    oplot,timey,tslow,thick=5,color=20
    ;
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=21

    There you go, both “commented in”.

  140. >realclimate.org is a wholly-owned subsidiary of the “hockey team”. One of the leaked emails offers it for use as a propaganda arm.

    >Instead of addressing RealClimate’s argument, you are attacking the blog itself. This is, of course, a well known fallacy called “ad hominem.”

    The “inconvenient” fallacy that reared its ugly head??? realclimate.org was involved in the Alar hoax as well as the “General Betrayus” character assassination campaign. Behind the scenes? Al Gore, George Soros. Agenda?

    http://www.americanthinker.com/blog/2009/11/the_warmist_pr_con_job.html

    Dark forces at work here. Big money in play. Way bigger than huge “research” grants.

  141. Pete wrote “There you go, both “commented in”.”

    Wow. Either you’re totally anal or you’re just dishonest. It would obviously be silly to uncomment both, since the latter code takes precedence over (overrides) the former.

  142. >Either you’re totally anal or you’re just dishonest. It would obviously be silly to uncomment both, since the latter code takes precedence over (overrides) the former.

    oplot is short for overplot. If they’re both uncommented you’ll get two lines.
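    A minimal sketch with made-up data shows the point (the variable names here are hypothetical, not from the CRU files):

    ; Hypothetical data, purely to illustrate oplot's behaviour: the second call
    ; does not replace the first curve, it draws on top of the existing axes.
    x  = findgen(100)
    y1 = sin(x/10.)
    y2 = sin(x/10.) + 0.5
    plot,  x, y1                  ; sets up the axes and draws the first curve
    oplot, x, y2, linestyle=2     ; adds a second curve to the same plot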

  143. From earlier in the code, the original call to ‘plot’.


    plot,timey,comptemp(*,3),/nodata,$
    /xstyle,xrange=[1881,1994],xtitle='Year',$
    /ystyle,yrange=[-3,3],ytitle='Normalised anomalies',$
    ; title='Northern Hemisphere temperatures, MXD and corrected MXD'
    title='Northern Hemisphere temperatures and MXD reconstruction'
    ;

    When they run this with the artificial correction commented in, they would also comment in the ‘title’ that’s commented out above (there’s an ‘oplot’ for NH temp between this code and the part we’re looking at).

  144. pete,

    What is your point? The way the code stands, it is easy to comment things in and out and recompile for different purposes. Why have two different versions of the same code lying around if you have the correct data already? The correct thing to do is to delete the artificial stuff once the correct data is in. You don’t keep it around unless you intend to use it for whatever purpose.

  145. >The correct thing to do is to delete the artificial stuff once the correct data is in. You don’t keep it around unless you intend to use it for whatever purpose.

    Rubbish. You don’t keep backups of your old code?

  146. “Rubbish. You don’t keep backups of your old code?”

    Absolutely. There are all sorts of crap in my old code and I do back it up to my archives, crap and all. But eventually I clean out the crap and it can no longer be found in my final corrected or released code.

  147. >Absolutely. There are all sorts of crap in my old code and I do back it up to my archives, crap and all. But eventually I clean out the crap and it can no longer be found in my final corrected or released code.

    This isn’t “final corrected or released code”.

  148. >This isn’t “final corrected or released code”.

    And I am supposed to take your word for it? Do you work for the IPCC or CRU? Have you seen the final code that was used by the IPCC for its dire predictions? If so, identify yourself and provide evidence for your assertions.

  149. >And I am supposed to take your word for it? Do you work for the IPCC or CRU? Have you seen the final code that was used by the IPCC for its dire predictions? If so, identify yourself and provide evidence for your assertions.

    Have you not been following this? The code is from the files that were hacked/leaked from CRU. This is not the usual method for releasing final corrected code.

  150. >The code is from the files that were hacked/leaked from CRU. This is not the usual method for releasing final corrected code.

    Are you being purposely obtuse? The compiled program was used by the IPCC and CRU to make a case for man-made global warming. Since the hacked code is the most recent, it only makes sense that this is the code that was used by CRU to support its claims. So, for you to assert that it was not released is disingenuous, to say the least.

    What I want to know is this: Which code was compiled and used for public consumption, the one with the artifice commented in or the one with the artifice commented out? If you cannot answer this question, you have nothing interesting to contribute to this debate. Thanks for your input.

  151. >Which code was compiled and used for public consumption, the one with the artifice commented in or the one with the artifice commented out?

    I haven’t seen any published graphs with an artificial correction — both that I’ve found include the post-1960 divergence.

    see:

    Briffa et al 1998 Influence of volcanic eruptions on Northern Hemisphere summer temperature over the past 600 years Figure 1

    Briffa 2000 Annual climate variability in the Holocene: interpreting the message of ancient trees Figure 5

  152. > dhogaza Says:
    > But given that CO2’s role as a GHG has been known to physics for roughly 150 years, it’s hard to see how that’s going to be overturned. Or do you think the divergence problem trumps physics?

    Excuse me? It’s only been “proven” in the “2500 climate scientists believe in global warming” circle and has never come close to being validated as even a reasonable theory. The whole “theory” of the atmospheric greenhouse is based on speculation that is over 400 years old. In fact, just this year some physicists decided to take a break from their research to investigate this greenhouse effect and published a paper in the “International Journal of Modern Physics” (which of course is not in the CRU-approved “peer review” circle) debunking the whole “theory” of the greenhouse effect:

    http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.1161v4.pdf
    (Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics)

  153. I see that valadj is not plotted in the cited code, but I see no indication of any other array being plotted. yrloc is not plotted either. I’m not familiar with this language, so I can’t assess all the algorithms. I’m assuming that the command is oplot and that timey and tslow are being passed as parameters, but what is the significance of yrloc if the valadj plotting line is commented out? The section provided does not appear to be a complete instruction set.

  154. I have a theory, based on 20-plus years as a programmer, a data modeler, and a guy who doesn’t always trust other people’s data adjustments.

    1. Start with the raw data.
    2. Use the ARTIFICAL to adjust the data
    3. Save the adjusted data, and hide the raw data.
    4. Comment out ARTIFICAL (but leave it in, in case steps 2 and 3 need to be repeated).
    5. Rerun the program. You get the same results as in step 2, but can point to the ARTIFICAL as commented out.

    There are reasons to adjust data, but one follows standard procedures. Note that since the AGW model has not predicted the actual data from 1998-2008, the model is incomplete (unless you believe, like Kevin Trenberth, that the recent data is wrong).

    FYI, CO2 adjustments are logarithmic, and amount to about .36 C per doubling of CO2. There are two adjustments for H2O, one logarithmic, and greater than the CO2 adjustment, and the other negative, and greater than the positive H2O adjustment for changes in albedo related to clouds. There are also adjustments for bacterial sulfur-containing gases, that also end up negative, volcanoes, also negative, and industrial non-CO2 and non-H2O. Since the fall of the Soviet empire in 1989-1992, burning of high-sulfur coal has dropped dramatically, which led to the AGW of the 1990-1998 period. As India and China eat more meat, more methane will add another AGW gas to the mix. Of all these, CO2 is the least significant.

    The real risk for humanity, is global cooling, and we should be planning on ways to reverse a tendency to have an Ice Age more than 70% of the time in the last 140,000 years. Right now, Earth’s albedo is about .30. If it were to increase to .33, we could hit positive feedback due to snow. This would be bad.

  155. “What I want to know is this: Which code was compiled and used for public consumption, the one with the artifice commented in or the one with the artifice commented out?”

    This is the question that honest folks on both sides of the AGW debate should be asking. We need someone with knowledge of computer code AND climate papers to help answer this question. I suspect that Steve McIntyre is working on it right now.

  156. BTW, one thing I don’t see discussed very often in the global warming debate is the proportionality of CO2 and the man-made portion of CO2 emissions. This page:

    http://www.geocraft.com/WVFossils/greenhouse_data.html

    points out that the entirety of the human contribution to the greenhouse effect is about 0.28%. This of course raises the question: if we totally bugger our economy, and sacrifice all the poor people in the world so they quit burning forests for cropland, is that 0.28% actually going to, you know, SAVE THE PLANET?

    I’m not convinced. Pete, can you convince me?

  157. Hey, esr, just out of curiosity – have you bothered to attempt to contact the authors?

    You’re a fairly prominent expert in software development; there seems to be a fairly good chance they’d answer your questions.

    Because what you’re doing seems to be fairly irresponsible: taking stolen source code, ignoring the fact that it’s commented out, accusing the author of having nefarious and deceitful intent, and stating with absolute certainty your conclusions about their human motivations.

    Please continue to ask pointed questions – science that can’t withstand scrutiny isn’t science. But leave the proselytizing until after the dust has settled.

  158. Yeah, when they got all close-mouthed and secretive about their data sets and code suite, I thought it smelled fishy. Not good, I thought, and downgraded their input accordingly. Seems a wise move now.

    I laughed out loud when they said that the originators of the data (national weather services, mostly) made them sign non-disclosure agreements to get the data. WTF?! Top secret temp data, huh? VERY curious.

    Do I think all “climate scientists” are crooked? No, of course not. Just awfully slow to call out the bad actors on their very obvious bullshit. Too much respect for “piled higher and deeper”, I’d guess.

  159. Someone with actual knowledge of IDL is going to have to step in here. I’ve done some rudimentary checking on the use of a couple of functions, and I cannot see any indication that the variables timey and tslow are populated. They are passed to the last invocation of the oplot function. I’ve already clued in that variables need not be declared up front in this language (a default variant datatype?), but from what I’ve been able to work out 1) timey and tslow are not constants 2) they are not populated in the supplied code, and 3) oplot expects values to be passed to the two parameters where timey and tslow are used.

    From what little I can see, this instruction set appears to be incomplete. What am I missing here? Again, I’m not familiar with even this type of language, so without a function declaration I can’t even tell whether or not variables are being passed to it from another file or whether another file can be passing variables by reference and receiving values generated by this function.

  160. Rob:
    Someone with actual knowledge of IDL is going to have to step in here. I’ve done some rudimentary checking on the use of a couple of functions, and I cannot see any indication that the variables timey and tslow are populated. They are passed to the last invocation of the oplot function. I’ve already clued in that variables need not be declared up front in this language (a default variant datatype?), but from what I’ve been able to work out 1) timey and tslow are not constants 2) they are not populated in the supplied code, and 3) oplot expects values to be passed to the two parameters where timey and tslow are used.

    From what little I can see, this instruction set appears to be incomplete. What am I missing here? Again, I’m not familiar with even this type of language, so without a function declaration I can’t even tell whether or not variables are being passed to it from another file or whether another file can be passing variables by reference and receiving values generated by this function.

    I’ve been using IDL for analysis for more than 15 years, so I understand it fairly well (as long as we don’t use the Object-Oriented approach).

    Could you clarify which routine you have been looking at?

  161. Sean:
    “What I want to know is this: Which code was compiled and used for public consumption, the one with the artifice commented in or the one with the artifice commented out?”

    This is the question that honest folks on both sides of the AGW debate should be asking. We need someone with knowledge of computer code AND climate papers to help answer this question. I suspect that Steve McIntyre is working on it right now.

    One frightening possibility is that no one remembers. There was no configuration control of any kind, as far as I can tell, so who knows when changes to the code were made.

    ESR says: Yes, in fact I think this is quite likely.

  162. pete:
    >This is clearly an either-or situation. If one code is commented in, the other must be commented out.

    wtf? Try this:

    filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
    oplot,timey,tslow,thick=5,color=20
    ;
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=21

    There you go, both “commented in”.

    filter_cru is a procedure that has at least 3 KEYWORDS, NAN, TSIN, and TSLOW. (KEYWORDS can be in any order, and the slash before nan is equivalent to NAN=1. Arguments can also be used to pass data to and from procedures and functions, but the order is critical.) The pattern in the procedure definition and the procedure call is KEYWORD = variable. Note that IDL allows you to abbreviate the KEYWORD, as long as you type enough initial letters to provide a unique match. This is potentially dangerous, and certainly confusing for somebody new to IDL. My practice is to use the same letters for the KEYWORD and the variable, but to always use upper case for the KEYWORDS and lower case or mixed case for the variables.

    The first time filter_cru is called with TSIN = yyy + yearlyadj, which will only work if TSIN is used as an input parameter. The second time it is called with TSIN = yyy, that is, without the adjustment.

    In both cases, the local variable tslow is matched to the KEYWORD with the same name, which allows the procedure to return a value, scalar, vector, or array and store it in the local variable. In this case, it is almost certainly returning a vector, which is then plotted in the oplot procedure. IDL is very big on dynamic recasting, and does it very well, but it can certainly be a mystery to programmers with a background in other languages.

    In case you or anybody else interested in IDL hasn’t found it yet, here is the documentation page for IDL. The Quick Reference Guide is useful, but you will probably need to download the full IDL Reference Guide to really understand the built-in routines. This will list the arguments and the KEYWORDS for every function and procedure that ships with IDL.

    http://www.ittvis.com/ProductServices/IDL/ProductDocumentation.aspx
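    If it helps to see the pattern outside the CRU files, here is a minimal sketch of a procedure with keywords in the same style as filter_cru; everything in it (the name demo_filter, the trivial body) is made up for illustration, since we don’t have the real filter_cru:

    pro demo_filter, period, NAN=nan, TSIN=tsin, TSLOW=tslow
      ; /NAN in a call is shorthand for NAN=1; KEYWORD_SET tests for it
      if keyword_set(nan) then print, 'NaN handling requested, period = ', period
      ; trivial stand-in for the real filter: pass the input straight through;
      ; assigning to tslow is what returns a value through the TSLOW keyword
      tslow = tsin
    end

    ; caller (at the IDL prompt, once demo_filter is compiled):
    ; the output series comes back in the local variable 'result'
    demo_filter, 5., /NAN, TSIN=findgen(10), TSLOW=result
    print, result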

  163. If you read the comment from Deltoid:

    Raymond has made no attempt to find out if the graph was actually used anywhere. The file name was osborn-tree6/briffa_sep98_d.pro, so we should look for a paper with authors Briffa and Osborn published in 1998, and sure enough there’s Briffa, Schweingruber, Jones, Osborn, Harris, Shiyatov, Vaganov and Grudd, “Trees tell of past climates: but are they speaking less clearly today?” Phil. Trans. R. Soc. Lond. B 1998:

    In §4, we referred to a notable correspondence between ‘hemispheric’ MXD series (averaged over all sites) and an equivalent ‘hemispheric’ instrumental temperature series. Despite their having 50% common variance measured over the last century, it is apparent that in recent decades the MXD series shows a decline, whereas we know that summer temperatures over the same area increased. Closer examination reveals that while year-to-year (i.e. mutually ten-year high-pass filtered) correlations are consistently high between tree-growth and temperature (ca. 0.7 for 1881-1981), the correlations based on decadally smoothed data fall from 0.89, when calculated over the period 1881-1960, to 0.64 when the comparison period is extended to 1881-1981. This is illustrated in figure 6, which shows that decadal trends in both large-scale- average TRW and MXD increasingly diverge from the course of decadal temperature variation after about 1950 or 1960.

    And figure 6 is basically the graph plotted by the code above, and it does not include the “corrected MXD” data.

    Just another beat-up. Nothing to see here, folks; move along.

  164. “I disagree. This is clearly an either-or situation. If one code is commented in, the other must be commented out. The artifice may have been used as a temporary stand-in until the programmer gains actual sensory data. However, once he/she obtained the correct data, there is no reason to keep the artifice around, even for historical reasons. In fact, the artifice can create all sorts of misunderstandings. They may have kept it there for nefarious reasons as well. One compilation is used in private while the other is used in public.”

    Or, it’s an earlier draft (as makomk has said) used to do a sensitivity analysis on the output, which is standard for a modeling project, from my experience in modeling of environmental systems (water, not climate). Again, this is not final released code – it’s an intermediate draft stolen from the CRU.

  165. GaryC:

    This would be the code. As someone has since pointed out tslow is a keyword. Would timey also be a keyword?

    ;
    ; Now prepare for plotting
    ;
    loadct,39
    multi_plot,nrow=3,layout='caption'
    if !d.name eq 'X' then begin
    window,ysize=800
    !p.font=-1
    endif else begin
    !p.font=0
    device,/helvetica,/bold,font_size=18
    endelse
    def_1color,20,color='red'
    def_1color,21,color='blue'
    def_1color,22,color='black'
    ;
    restore,'compbest_fixed1950.idlsave'
    ;
    plot,timey,comptemp(*,3),/nodata,$
    /xstyle,xrange=[1881,1994],xtitle='Year',$
    /ystyle,yrange=[-3,3],ytitle='Normalised anomalies',$
    ; title='Northern Hemisphere temperatures, MXD and corrected MXD'
    title='Northern Hemisphere temperatures and MXD reconstruction'
    ;
    yyy=reform(comptemp(*,2))
    ;mknormal,yyy,timey,refperiod=[1881,1940]
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=22
    yyy=reform(compmxd(*,2,1))
    ;mknormal,yyy,timey,refperiod=[1881,1940]
    ;
    ; Apply a VERY ARTIFICAL correction for decline!!
    ;
    yrloc=[1400,findgen(19)*5.+1904]
    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
    2.6,2.6,2.6]*0.75 ; fudge factor
    if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
    ;
    yearlyadj=interpol(valadj,yrloc,timey)
    ;
    ;filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
    ;oplot,timey,tslow,thick=5,color=20
    ;
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=21
    ;
    oplot,!x.crange,[0.,0.],linestyle=1
    ;
    plot,[0,1],/nodata,xstyle=4,ystyle=4
    ;legend,['Northern Hemisphere April-September instrumental temperature',$
    ; 'Northern Hemisphere MXD',$
    ; 'Northern Hemisphere MXD corrected for decline'],$
    ; colors=[22,21,20],thick=[3,3,3],margin=0.6,spacing=1.5
    legend,['Northern Hemisphere April-September instrumental temperature',$
    'Northern Hemisphere MXD'],$
    colors=[22,21],thick=[3,3],margin=0.6,spacing=1.5
    ;
    end

  166. >This would be the code. As someone has since pointed out tslow is a keyword. Would timey also be a keyword?

    tslow is being passed to oplot as an argument.

    tslow is a keyword for filter_cru, but it’s being used as an argument for oplot.

    timey is being passed to oplot as an argument.

    You can tell the keywords because they use the syntax /KEYWORD or KEYWORD=something.

  167. You’ll have to pardon me, as my familiarity is more with object-oriented languages and with explicit declaration of variables and datatypes. I was just considering the comment that yearlyadj couldn’t possibly have been used as the call to oplot with yearlyadj passed as a parameter is commented out.

    I merely noticed that the population of yearlyadj is apparent in the code from this file, but that some of the values being passed to the oplot call that was active weren’t declared or populated in the available code. These values were timey and tslow. In order for this at all to be meaningful, I had to eliminate the possibility that these were global constants, keywords, or functions.

    What I was arriving at was that if they are variables and they are not populated here, then it would appear that there is some parameter passing into this function. If parameters can be passed into the function, then output parameters can pass values outside of the function. This might render meaningless the assertion that the calculation of yearlyadj is not used, if, and only if, a function in another file can be found to derive a value based on an output parameter in this function.

    As I said, if this were one of the languages I’ve worked with, it would be apparent if variables were being passed into and out of the function. In those languages, a function must be declared, and all of the parameters, input and output, must be part of the function declaration.

    I’m sure this has been mentioned before, but this begs the question: why not use a modern RDBMS and corresponding language? All of this would be much easier to manage and the code base wouldn’t be so much of a nightmare to go through.

  168. [Of course it would be much simpler to look up what he did in his lab notebook – instead of trying to reverse engineer/replicate the history.]

    Hmmm while valadj is commented out of briffa_sep98_d.pro
    valadj is not commented out of briffa_sep98_e.pro

    briffa_sep98_a.pro
    briffa_sep98_b.pro
    briffa_sep98_c.pro
    briffa_sep98_d.pro
    briffa_sep98_decline1.pro
    briffa_sep98_decline2.pro
    briffa_sep98_e.pro
    ;
    ; PLOTS 'ALL' REGION MXD timeseries from age banded and from hugershoff
    ; standardised datasets.
    ; Reads Harry's regional timeseries and outputs the 1600-1992 portion
    ; with missing values set appropriately. Uses mxd, and just the
    ; "all band" timeseries
    ;****** APPLIES A VERY ARTIFICIAL CORRECTION FOR DECLINE*********
    ;
    yrloc=[1400,findgen(19)*5.+1904]
    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
    2.6,2.6,2.6]*0.75 ; fudge factor

    The observation above is a response to the comment below
    http://esr.ibiblio.org/?p=1447#comment-243043
    Philip Machanick Says:
    November 27th, 2009 at 12:09 am

  169. Roy:
    GaryC:

    This would be the code. As someone has since pointed out tslow is a keyword. Would timey also be a keyword?

    ;
    ; Now prepare for plotting
    ;
    loadct,39
    multi_plot,nrow=3,layout='caption'
    if !d.name eq 'X' then begin
    window,ysize=800
    !p.font=-1
    endif else begin
    !p.font=0
    device,/helvetica,/bold,font_size=18
    endelse
    def_1color,20,color='red'
    def_1color,21,color='blue'
    def_1color,22,color='black'
    ;
    restore,'compbest_fixed1950.idlsave'
    ;
    plot,timey,comptemp(*,3),/nodata,$
    /xstyle,xrange=[1881,1994],xtitle='Year',$
    /ystyle,yrange=[-3,3],ytitle='Normalised anomalies',$
    ; title='Northern Hemisphere temperatures, MXD and corrected MXD'
    title='Northern Hemisphere temperatures and MXD reconstruction'
    ;
    yyy=reform(comptemp(*,2))
    ;mknormal,yyy,timey,refperiod=[1881,1940]
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=22
    yyy=reform(compmxd(*,2,1))
    ;mknormal,yyy,timey,refperiod=[1881,1940]
    ;
    ; Apply a VERY ARTIFICAL correction for decline!!
    ;
    yrloc=[1400,findgen(19)*5.+1904]
    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
    2.6,2.6,2.6]*0.75 ; fudge factor
    if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
    ;
    yearlyadj=interpol(valadj,yrloc,timey)
    ;
    ;filter_cru,5.,/nan,tsin=yyy+yearlyadj,tslow=tslow
    ;oplot,timey,tslow,thick=5,color=20
    ;
    filter_cru,5.,/nan,tsin=yyy,tslow=tslow
    oplot,timey,tslow,thick=5,color=21
    ;
    oplot,!x.crange,[0.,0.],linestyle=1
    ;
    plot,[0,1],/nodata,xstyle=4,ystyle=4
    ;legend,['Northern Hemisphere April-September instrumental temperature',$
    ; 'Northern Hemisphere MXD',$
    ; 'Northern Hemisphere MXD corrected for decline'],$
    ; colors=[22,21,20],thick=[3,3,3],margin=0.6,spacing=1.5
    legend,['Northern Hemisphere April-September instrumental temperature',$
    'Northern Hemisphere MXD'],$
    colors=[22,21],thick=[3,3],margin=0.6,spacing=1.5
    ;
    end

    That someone would probably be me. (By the way, sorry for calling you Rob earlier.)

    No, there is no hint in that code section that timey is a keyword.

    I think the secret is the RESTORE command, that is,

    restore,'compbest_fixed1950.idlsave'

    Somewhere on the computer system is a file called compbest_fixed1950.idlsave that was created by the IDL SAVE command. SAVE writes out the current values of all defined variables, unless the user specifies that only certain variables should be saved by listing them. RESTORE reads that file and recreates all of the variables. It is a quick way of ensuring that the IDL environment matches a desired state before beginning to do some new processing. I assume that timey was one of the variables that was SAVED/RESTORED in this way.

    You may be able to find the routine that created the SAVE file by looking for that file name in the text, but it would be possible to rename the save file in the operating system without causing any problems, so that is not infallible. The only thing you can really count on is that the variable has the same name when it is saved that it will have when it is restored. So look for the variable timey in a routine that also has a SAVE call further down.
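    A minimal sketch of that round trip, with a made-up file name and made-up variables (nothing here is taken from the CRU archive):

    ; Hypothetical example of the SAVE/RESTORE round trip described above.
    timey = 1881 + findgen(114)            ; made-up yearly time axis, 1881..1994
    yyy   = randomn(seed, 114)             ; made-up anomaly series
    save, timey, yyy, filename='compbest_demo.idlsave'

    ; ...later, possibly in a different routine or session:
    restore, 'compbest_demo.idlsave'       ; recreates timey and yyy by name
    help, timey, yyy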

  170. Roy:
    You’ll have to pardon me, as my familiarity is more with object-oriented languages and with explicit declaration of variables and datatypes. I was just considering the comment that yearlyadj couldn’t possibly have been used as the call to oplot with yearlyadj passed as a parameter is commented out.

    I merely noticed that the population of yearlyadj is apparent in the code from this file, but that some of the values being passed to the oplot call that was active weren’t declared or populated in the available code. These values were timey and tslow. In order for this at all to be meaningful, I had to eliminate the possibility that these were global constants, keywords, or functions.

    What I was arriving at was that if they are variables and they are not populated here, then it would appear that there is some parameter passing into this function. If parameters can be passed into the function, then output parameters can pass values outside of the function. This might render meaningless the assertion that the calculation of yearlyadj is not used, if, and only if, a function in another file can be found to derive a value based on an output parameter in this function.

    As I said, if this were one of the languages I’ve worked with, it would be apparent if variables were being passed into and out of the function. In those languages, a function must be declared, and all of the parameters, input and output, must be part of the function declaration.

    I’m sure this has been mentioned before, but it raises the question: why not use a modern RDBMS and a corresponding language? All of this would be much easier to manage, and the code base wouldn’t be such a nightmare to go through.

    IDL has COMMON blocks, borrowed from FORTRAN, but they are deprecated and not used by anybody who is fluent in IDL. Just by people who can write FORTRAN using any computer language.
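
    For anyone who has never run into one, a COMMON block just shares variables between routines by name, with no parameter passing; a toy sketch (all names made up):

    pro set_years
      common shared_years, timey
      timey = findgen(596) + 1400.       ; populate the shared variable
    end

    pro use_years
      common shared_years, timey
      print, n_elements(timey)           ; sees whatever set_years put there
    end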

    There are some really tricky ways of attaching variables to WIDGETS used to create a GUI, but it has been more than 10 years since I did any of that, so I can’t remember the details. I’m also fairly confident that all of the CRU IDL routines used the Command Line Interface, which allows you to create script files that can be very powerful.

    I have never used the object-oriented approach to IDL programming, but I have seen no hints that anybody at CRU has used it either.

    Finally, scientists are not usually comfortable with relational databases. When I was a freshman physics major, I was given the one-hour FORTRAN for Physicists session, followed by a one-quarter course that actually explained everything they left out. That was for FORTRAN IV, with its nasty 3-way computed GOTO statements and other kludges. As scientists leave graduate school, they are likely to drift to either MATLAB or IDL. (Be very glad that none of the code is in FORTH, unless you think in Reverse Polish Notation and stacks. Of course APL would have been worse, but no scientist actually understands APL, as far as I can tell.)

    It is very easy to stay with a tool that you are comfortable with, even if it is a bad match for the problem. With proper configuration control, it would not have been a real problem. As it is, it’s a disaster.

  171. The leaked files do not include a compbest_fixed1950.idlsave, so it comes down to looking for a yearlyadj in another file. I’m on a Windows system, so I can’t grep the files; I have to open them up and do searches. Not fun.

  172. “caerbannog” notes:
    To put things in perspective (and to prove that Linux is really, really crappy), I spent a few minutes mining the comments in the upcoming Linux-2.6.32 kernel source-code.

    What follows is ugly and unformatted, but it should get the point across. According to the standards of evidence used by AGW skeptics, my little quote-mine here should be considered ironclad proof that Linux simply won’t run on any computer.

    Torvalds fucked it up. Fucking broken ABI IOC3 is fucked fucked beyond believe …
    brain-damage, it’s managed to fuck things up one step further.. What the fuck is going on here? We leave junk in the beginning Shit happens.. all the algo is pure shit and should be replaced THIS IS A PIECE OF SHIT MADE BY ME long delays in kernel code are pretty sucky anyway setup the pointer arrays, this sucks loose some. This sucks :-( It sucks. I totally disown this extern calls and hard coded values here.. very sucky! assume we found an overflow. This sucks. performance sucks for guests using highmem. This sucks, but it is the best we can do.. this sucks [tm] :-( XXX I know this sucks This SUCKS. method really sucks. You can only read or write one location at a time SuckyIO interrupt routing for PICs on function 1 This sucks. There is a better wa TODO: use a hash or array, this sucks. This sucks, and it is a hack Sucks! We need to fork list. Things still suck. Note that the arbiter/ISA bridge appears to be buggy Disable archidle() by default since it is buggy really buggy something weird, or if the code is buggy support for disabling the buggy read-ahead Some old kernel bugs returned The original driver looks buggy/incomplete That turned out to be too buggy to support giveupconsole() is obviously buggy as it This feature appears to be buggy. usb-uhci seems buggy for async unlinking crap – we crashed before setuparch() This piece of crap needs to disappear That’s crap, since doing that while some partitions are opened One more crappy race: I don’t think we have any guarantee here Piles of crap below pretend to be a parser for module and kernel this bit masking stuff is crap. horrible the crap we have to deal with is when we are awake What is all this crap for? Locking and life cycle management is crappy still. This is a crappy interface. This means that the ip6tables jump stack is now crap. XXX fix this crap up totally crap, FIXME: get rid of this crap useless crap (ugh ugh ugh). This is such a hack So, here’s this grotty hack… :-( So, here’s this additional grotty hack… :-( UGLY HACK: workaround regulator framework bugs. Wheee, hackady hack Following is a work-around (a.k.a. hack) ugly hack, I can’t find a way to actually detect the disk Crude hack to get console output To keep this hack from interfering very hacky This #define is a horrible hack Hack alert ! it’s a bit of a hack. It’s still quite hacky, The hack below stinks… Now do the horrible hacks One gross hack So we use a hack, This looks like a dirty hack to me This is an ugly hack, Hack warning It’s a gross hack, XXX sleazy hack: cheap hack to support suspend/resume FIXME: this hack is definitely wrong whacked out. The following is just a hack a hack :-( My guess is that this is a hack to minimize the impact of a bug vt.c for deeply disgusting hack reasons Another Hack :-( Brutally hacked HACK WARNING!! HACK ALERT: FIXME: temporary hack FIXME: this is a hack – nstead we’re going to do a total hack job for now dirty hack time. THE PADDING THIS STARTS WITH IS A HORRIBLE HACK THAT SHOULD NOT LIVE Right now we use a sleazy hack which is an ugly hack.

    And this just goes on and on and on….

  173. I want to thank all of you, regardless of position on AGW. For a complete layperson, this has been a somewhat difficult read, but I believe I get the drift.

    I must however add that, as an accountant, any report or analysis that I have ever produced in my 30-year working life has always had complete source data available (if not already attached, which is my typical SOP), and it is imperative to have a complete audit trail through the entire system, from source to financial statements.

    I can’t understand any reluctance to prove out your work.

  174. First, to anyone who is interested in understanding the source code, either FORTRAN or IDL, there is an absolutely wonderful web site with a link on one of the other posts here.

    http://www.di2.nu/foia/foia.pl

    This allows you to search either the emails, the HARRY_READ_ME.txt file, or the source code files for any text. Highly recommended!

    Second, using this site to search for timey finds some very useful references, in addition to hundreds of cases where the variable is used. In harris_tree/banding/bandall2_science_fixed.pro is a set of statements that define timey.

    allnyr=2000
    allyr=findgen(allnyr)+1.
    kper=where((allyr ge 1400) and (allyr le 1995),nyr)
    timey=allyr(kper)

    allnyr is an integer set to 2000

    allyr is a floating point vector of 2000 elements which is created and filled with the numbers from 0 to 1999 by the system function FINDGEN (which stands for Float Index Generation). The vector is then incremented by 1, so it contains 1 to 2000.

    kper is a vector that contains the indices of allyr where the values are greater than or equal to 1400 and less than or equal to 1995.
    nyr is the number of values in kper
    Note that the WHERE statement is generally much more efficient than using an IF statement in a loop, so this is a fairly common IDL usage and demonstrates some real familiarity with the language.
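
    For comparison, the loop-and-IF way of doing the same selection (shown only to illustrate why WHERE is the idiomatic choice) would look something like this:

    kper = lonarr(allnyr)                ; worst-case sized index buffer
    nyr = 0L
    for i = 0L, allnyr-1 do begin
      if (allyr[i] ge 1400) and (allyr[i] le 1995) then begin
        kper[nyr] = i
        nyr = nyr + 1
      endif
    endfor
    kper = kper[0:nyr-1]                 ; trim to the indices actually found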

    timey is a new vector that is created by extracting the elements of allyr that are indexed by kper. In this case, that means it contains the floating point numbers from 1400 to 1995, inclusive.

    I have two comments about this set of statements. First, in the last line, timey=allyr(kper), the parentheses should be replaced by square brackets, that is, timey=allyr[kper]. IDL still supports use of parentheses for both surrounding function arguments and for array indices, but back in 1995 the vendor strongly recommended using the square brackets instead. One of these years there is a risk that ITT, the current vendor, will stop supporting the long-deprecated use of parentheses for indices, and this routine will stop working.

    Second, it would have been much clearer to have written something like:

    firstYear = 1400
    lastYear = 1995
    nYears = lastYear - firstYear + 1
    timeY = firstYear + FINDGEN(nYears)

    The other part of this routine that is very interesting is

    fixedyr=1950
    fnfix=string(fixedyr,format='(I4)')

    save,filename='bandallnh_science_fixed'+fnfix+'.idlsave',$
    timey,nyr,nhts

    The first two lines occur early in the routine, and simply define a character string that contains ‘1950’.

    The SAVE statement occurs near the end of the routine, and concatenates that character string with others to create a character string that contains ‘bandallnh_science_fixed1950.idlsave’. This string is used as the name of a SAVE file, which is used to save three variables, timey, nyr, and nhts. Note that the dollar sign is a statement continuation indicator, so those two lines represent a single statement. So here is at least one place that timey is written out into a SAVE file.
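
    If you want to double-check that concatenation, a few lines at the IDL prompt will do it:

    fixedyr = 1950
    fnfix = string(fixedyr, format='(I4)')
    print, 'bandallnh_science_fixed' + fnfix + '.idlsave'
    ; prints: bandallnh_science_fixed1950.idlsave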

    nyr is still the number of years in timey, but I’m struggling to understand where nhts was defined. There is an extremely confusing sequence of statements, some of which appear to overwrite results that have just been computed. I think I’m going to try reformatting it (i.e., pretty-printing it by hand) so it closely matches my style, to see if that helps. It doesn’t help that I don’t have IDL installed on this machine, so I don’t get to take advantage of the color coding that the IDL IDE provides.

  175. My best friend from high school is an astronomer and his wife works at the Mauna Kea IR scope. He has a PowerPoint he sent me a year ago about CO2 and H2O vapor and IR windows. In a nutshell, where you have water vapor, CO2 becomes a sideshow. Much of the AGW chicken-littling comes from the way Venus’s CO2 and H2SO4 work to retain the solar heat at surface level. In a system with H2O, the IR retention of the H2O far outweighs that of CO2, because the H2O has a MUCH broader absorption range than the CO2 does. And point 2 is that CO2 is a heavy gas with respect to O2 and N2, and that at 14,000 ft. concentrations of CO2 have changed very little over the last 60 years. So how do I post a .PPT file to this board? He and I have argued over the last 35 years that burning oil is a poor use of it, when it would be better saved for plastics.

    MadMichaelJohn

    1. >we have argued over the last 35 years that burning oil is a poor use of it, when it would be better saved for plastics.

      I used to say the same thing – that is, that it’s a crime to just burn coal and oil that our descendants are going to want very badly as high-value chemical feedstocks. I’m no longer really worried about this; I think we’ll have algal oil and nanotech synthesis before fossil oil and coal depletion become critical.

      I can’t think of a way to put up a PPT. If you send me images made from the pages I could put them in my site’s media library.

  176. dhogaza Says: “The fudge-factor’s derived from the instrumental record.”

    Funny sort of instrument that sits at zero for several years, then takes a few plausible measurements, then sits on 2.6 for the last five points. Maybe there are some tiny elements of instrumental readings in there, deeply buried, but there is no reference to where the series came from, no discussion of what was done to the instrumental data to get these values, or why… the results are such a close approximation to useless that it seems easier to just call them useless.

    If I offer you lunch with the promise that “this was derived from food at some time or other”, would you eat it?

  177. “And point 2 is that CO2 is a heavy gas with respect to O2 and N2 and that at 14,000 ft. concentrations of CO2 have changed very little, over the last 60 years.”

    So you’re saying that the CO2 observations begun by Charles Keeling on neighboring Mauna Loa are somehow incorrect? Please look up the “Keeling Curve”.

  178. Smoking gun? No. This is a mushroom cloud.

    And, as others have noted, does this disprove the original claim that human behavior is affecting the Earth’s climate? No. Only empirical testing can do that–but seeing the manner in which those who make the claim propound it and defend it is rather damning all by itself.

    Then again, does this claim even rise to the level of a hypothesis? It does not. A hypothesis is experimentally testable. This is not. If there is no testable hypothesis, there is no science, just opinion.

    Therefore “global warming,” or whatever they’re calling it this week, is a religious-political meme with pseudoscientific overtones. Q.E.D.

    My suspicions went in this direction the first time I heard about the idea, because there is vastly more water vapor in the Earth’s atmosphere than carbon dioxide, by orders of magnitude, and water vapor is a tremendously more effective absorber of infrared radiation than carbon dioxide, by several orders of magnitude. Any effect that carbon dioxide could have on the Earth’s climate vanishes in the noise of day-to-day fluctuations in humidity, not to mention the sun’s measurably wandering output. This is middle-school “Earth Science” as taught to legions of sixth-graders by the football coach. Mention these facts in public these days, of course, and people look at you like you’re some kind of Holocaust denier.

  179. Caerbannog, thanks. I did not know that linux was a global conspiracy to ruin the economy and western civilization. It’s running on my computer at home.

    Speaking as a scientist, this thread has some stupidity in it. Really. Spew venom at me, I don’t care. Most of you are just providing verification of the Dunning-Kruger effect. Many of you come across as conspiracy loons. Your skepticism is seriously biased. Look in the mirror and be fair to both sides. Learn the science and try to comment intelligently. I know, that’s asking a lot for the denialosphere. Michael Turner, thank you for your lucid comments (are you the cosmologist?). Your argument is very good.

    For CO2 not to have a warming effect, we’d have to invalidate the measured spectroscopy of the CO2 molecule and some textbook thermodynamics. Let me know when one of you geniuses has some real work to that effect. The way to invalidate science is with better science, with a better scientific argument. I see nothing of the sort here.

  180. Mike Scott- Let’s start here: http://www.esrl.noaa.gov/gmd/ccgg/trends/co2_data_mlo.html

    It’s only 50 years, but 320 to 380+ ain’t “very little”. So, wrong.

    And your absorption comments re: H2O and CO2, while not entirely wrong, are highly misleading. You could rely on your friend’s PPT (which I’m guessing is factually accurate but contextually irrelevant) or read the papers on the topic:

    http://agwobserver.wordpress.com/2009/10/08/papers-on-carbon-dioxide-absorption-properties-in-atmosphere/

  181. So the “smoking gun” is some commented-out code from some archived source files, which *may* have once been deliberately uncommented and used to generate spurious plots, or may have been test code, or may have been a stub that wasn’t deleted until later?

  182. John P Says:

    “For CO2 not to have a warming effect, we’d have to invalidate the measured spectroscopy of the CO2 molecule and some textbook thermodynamics. Let me know when one of you geniuses has some real work to that effect. The way to invalidate science is with better science, with a better scientific argument. I see nothing of the sort here.”

    I notice that you carefully avoid saying “significant warming effect”, because most of the thermodynamics textbooks you care to open will tell you that you can’t get energy out of nothing. Heat moves (on average) from a warm object to cooler objects so the Sun is warmer than the Earth, and the surface of the Earth is warmer than the atmosphere above it. CO2 only absorbs a narrow band of infra-red that is outside the absorption of water and other things. One narrow band, which is right now close to black (i.e. nearly everything on that band is being absorbed already). Adding more CO2 will absorb just a fraction more energy, but the MOST it can absorb is ALL the energy in that band. The incremental gain gets diminishingly small so strictly speaking it does something, but practically speaking it does nothing.

    This business of “3 degrees C per doubling CO2” implies that some additional energy source keeps pumping up the minuscule CO2 gain with enough additional energy to overcome the FOURTH POWER LAW of basic radiation theory. Think about it, every degree of warming increases the radiation output of the Earth along a fourth power law, so every degree of warming requires more additional energy than the one before. It’s like climbing a hill and the hill is getting steeper at every step, but you are also getting more tired at every step (as CO2 absorption runs into saturation). There is no tipping point where you are suddenly going to start leaping to the top of that hill. Go back and look at that thermodynamics textbook again.

  183. Again, this is not final released code – it’s an intermediate draft stolen from the CRU.

    The released documents are clearly not some quick smash-and-grab by a hacker. They are too focused for that. Either the hacker was in for a very long time and had the chance to search for the best material, or these were leaked by a person who had time to look around, or the hacker grabbed a file intended for FOI consideration.

    In none of these situations does it seem at all likely that intermediate versions of code would be taken. Is a hacker really going to take v2 when he can see v9 sitting beside it? Sorry warmers, but this code is pretty obviously a fairly final rendition.

    For CO2 not to have a warming effect, we’d have to invalidate the measured spectroscopy of the CO2 molecule

    Not if CO2 is already absorbing at saturation. This has been known for decades. It’s a total scam that any scientist would suggest that the CO2 absorption is what is heating the earth.

    For those of you who cannot follow what Tel says above, here is an analogy: I have a greenhouse (=earth), made of glass (=CO2). It keeps things warm by retaining heat from the sun. If the greenhouse starts to get hotter, I assume the days are sunnier. I do not assume the glass is suddenly much thicker. Glass is such an efficient insulator that doubling its thickness will make bugger-all difference to how much heat it retains, especially when compared to the variability of solar input.

    But the CO2 warming argument is, in analogy, that you can make a greenhouse much hotter by increasing the thickness of the glass by 25%.

  184. There are an awful lot of people claiming innocuous language as ‘gotcha’ phrases in this debate, by simple virtue of the fact that they don’t know what they are talking about. Phrases like ‘fudge factor’, ‘trick’ and ‘artificial’ are commonly used in discussion of mathematical models and have little correlation to their everyday use. I’m not a climatologist so I can’t speak to the specifics of their models, but I am an engineer. Here’s a simple equation that lets people fly through the sky every day: L = (1/2) rho U^2 S C (an expression of lift in terms of atmospheric conditions and aerodynamic shape). See that C at the end of the equation? That’s properly called a ‘dimensionless coefficient’. What it actually is, is a number ‘made up’ so that the equation spits out values that reflect observation. In the parlance of engineers and scientists, it’s a fudge factor. Notice that it is used to make the math reflect reality, not diverge from it! Without it, your airplanes wouldn’t fly. Not convinced? Google the term ‘fudge factor’ and you will find it is used by scientists and engineers every day; it’s completely benign. ‘Trick’ and ‘artificial’ are equally innocuous in this context. These are not ‘gotcha’ phrases at all!
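
    To make that concrete, here is a toy version of that lift calculation in IDL (every number below is invented for illustration); the whole job of C is to make the formula line up with what the wind tunnel actually measures:

    rho = 1.225        ; air density at sea level, kg/m^3
    u   = 60.0         ; airspeed, m/s
    s   = 16.0         ; wing area, m^2
    c   = 0.5          ; dimensionless lift coefficient -- the "fudge factor", fitted to observation
    print, 0.5 * rho * u^2 * s * c, ' newtons of lift'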

  185. >Not if CO2 is already absorbing at saturation. This has been known for decades. It’s a total scam that any scientist would suggest that the CO2 absorbtion is what is heating the earth.

    That argument’s been around for over a century (it’s due to Angstrom). It’s been invalidated by better modern understanding and measurement of absorption spectra.

    >This business of “3 degrees C per doubling CO2″ implies that some additional energy source keeps pumping up the minuscule CO2 gain with enough additional energy to overcome the FOURTH POWER LAW of basic radiation theory. Think about it, every degree of warming increases the radiation output of the Earth along a fourth power law, so every degree of warming requires more additional energy than the one before.

    The “additional energy source” is of course the sun, as you noted earlier. Even though FOURTH POWER sounds really big if you put it in ALL CAPITALS, a one degree increase in temperature is about a 1.5% increase in the fourth power of temperature.
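
    A quick sanity check of that number, taking a nominal 288 K surface temperature:

    t0 = 288.0
    print, 100. * ( ((t0 + 1.)/t0)^4 - 1. )   ; roughly 1.4 percent for a 1 K rise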

    Mooloo, @ “For those of you who cannot follow what Tel says above, here is an analogy: I have a greenhouse (=earth), made of glass (=CO2). It keeps things warm by retaining heat from the sun. If the greenhouse starts to get hotter, I assume the days are sunnier. I do not assume the glass is suddenly much thicker. Glass is such an efficient insulator that doubling its thickness will make bugger-all difference to how much heat it retains, especially when compared to the variability of solar input.

    But the CO2 warming argument is, in analogy, that you can make a greenhouse much hotter by increasing the thickness of the glass by 25%.”

    The increased CO2 moves the absorption layer farther up into the atmosphere. In terms of your greenhouse analogy, it would be like adding another layer of glass further out. Think of how double or triple glazed windows are warmer than single glazed ones.

  187. pete Says:

    “That argument’s been around for over a century (it’s due to Angstrom). It’s been invalidated by better modern understanding and measurement of absorption spectra.”

    Interesting link to the RealClimate article, but it does not invalidate the saturation in any way. Actually it verifies what I’m saying completely. First thing to check out is this graph:

    http://www.realclimate.org/images/CO2Abs4x.jpg

    Note that the whole pink area is what you get from 300ppm of CO2 in the atmosphere. Then the thin yellow bands are the EXTRA that you get if you go from 300ppm up to 1200ppm. That’s right: to get an increase big enough to even notice when plotted, the author needed to multiply the CO2 quantity by 4. This is exactly what any normal engineer or physicist would mean by a saturating function. It does not mean that the function gives nothing back for large inputs; it means that the function gives most of the return for small inputs, then rolls off on a “knee” shape and gives diminishing returns for increasing inputs.

    This is clearly displayed in this graph:

    http://www.realclimate.org/images/TransLongPaths.jpg

    Since the graph shows transmission, flip it over upside down and you get the shape of the graph for power absorbed. Note that the left hand side (low CO2) is very steep, then it hits a knee shape and rolls off into a region of diminishing returns. We are at the start of the knee part of the curve right now and we are in the rolling off phase. More CO2 will cause warming, but all the steep changes that were going to happen, have happened.

    You might like to note that during the ice ages in the last half million years, CO2 was down at 200ppm, on a much steeper part of the curve (so a little bit of CO2 in the middle of an ice age has significantly more influence on the climate than the same CO2 does now).
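
    If you want a toy picture of that knee, here is a sketch assuming a simple Beer-Lambert exponential with an invented coefficient (not the real band-by-band physics); each doubling of concentration buys a smaller increment of absorption than the last:

    c = [200., 300., 600., 1200.]     ; ppm: roughly ice-age, pre-industrial-ish, then two doublings
    k = 0.004                         ; invented effective absorption coefficient
    print, 1. - exp(-k*c)             ; fractions absorbed: about 0.55, 0.70, 0.91, 0.99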

    Let’s get to fourth-power radiation. Taking a baseline of 273K, here is the radiated power as a percentage of the baseline for each half-degree of warming:

    273.0K => 100.0%
    273.5K => 100.7%
    274.0K => 101.5%
    274.5K => 102.2%
    275.0K => 103.0%
    275.5K => 103.7%
    276.0K => 104.5%
    276.5K => 105.2%
    277.0K => 106.0%
    277.5K => 106.8%
    278.0K => 107.5%
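
    For anyone who wants to reproduce that table, there is nothing climate-specific in it, just the fourth-power scaling:

    t0 = 273.0
    for i = 0, 10 do print, t0 + i*0.5, 100.*((t0 + i*0.5)/t0)^4, $
      format='(F6.1, "K => ", F6.1, "%")'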

    This is not a diminishing return, it is a rapidly INCREASING nonlinear function. Adding CO2 to the world at present gives a situation where the more CO2 you add, the less additional energy you get (in a rolloff function) and every bit of heating loses energy faster (in a fourth power). The effect of increasing CO2 is real, but small, and getting smaller all the time. The only “tipping point” is when it vanishes into complete irrelevance.

    Ask any programmer, if part of your algorithm involves a complexity of O(N^4) and the other parts are either linear or logarithmic then you can be sure that the fourth power step is going to dominate everything else.

    For the IPCC models to amplify the minuscule warming that CO2 provides, they needed to insert significant positive feedback gains into the model, and ignore the large negative feedback caused by the water convection cycle (evaporation, condensation, rain, snow, etc). None of these positive feedback gain factors were based on fundamental physics, they were based on seat of the pants empirical twiddling (a technique which MAY work, but comes with no guarantees).

    They come up with a 3K sensitivity value and keep making predictions that have never yet come to pass. If the climate models from 10 years ago were to be believed we would be living with another 0.5K of warming now, but it didn’t happen so they just moved the goalposts and started making even less plausible predictions for 2050.

  188. Putting some more numbers into it I can get the 3K warming by making these assumptions:

    * All the energy is leaving Earth in the 10-22 micron band drawn in the plot above.

    * No energy leaves by any method other than ground level radiation.

    * All energy absorbed by the CO2 gas is returned to the surface of the earth.

    * Only CO2 is in the system (no water).

    * CO2 is likely to double from 300ppm to 600ppm.

    Using the green horizontal line as the baseline, we have approx 300K global temperature at 300ppm of CO2 and 0.663 transmission factor. Moving to the red band (double CO2) we get about 0.635 transmission factor.

    300.0 ^ 4 * 0.663 approx equals 303.25 ^ 4 * 0.635

    Thus 3.25 degrees C of warming (give or take).
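
    The same arithmetic can be done directly by solving for the temperature that balances the outgoing flux under those assumptions:

    t_old  = 300.0      ; assumed baseline surface temperature, K
    tr_old = 0.663      ; transmission at 300ppm, read off the graph
    tr_new = 0.635      ; transmission at 600ppm, read off the graph
    t_new  = t_old * (tr_old/tr_new)^0.25     ; equal outgoing flux: T^4 * transmission held constant
    print, t_new, t_new - t_old               ; about 303.25, i.e. roughly 3.25 degrees of warming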

    ——————–

    Here are the problems with the above assumptions:

    * Water is already absorbing in most of the wavelengths 17 microns and larger. This is the region where most of the gain in the CO2 curve is coming from, leaving only the gain on the low wavelength side (around 13 microns). This is the part of the CO2 curve that gives the least gain so cut the above estimate by at least half.

    * There are other methods for energy to leave the Earth’s surface (e.g. evaporation and convection).

    * Energy absorbed by CO2 (and any other greenhouse gas, including water) does not instantly return to the ground; some of it re-radiates, some goes into convection. As warm gas floats higher, its radiation efficiency improves. Rough guess is half goes up, half goes down, so cut the gain by half again.

    * Water has a latent heat, providing additional cooling when snow falls to the ground and then melts. Substantial portions of the Earth have regular and semi-regular snowfall.

    * Cloud-top-temperature (at least over oceans in tropical areas) is a yearly average of approx 280K which is approx 75% as efficient a radiator as 300K ground level, but with substantially less greenhouse gas to obstruct the transmission (making it about the same or perhaps slightly better all up).

    * Cloud-top-temperature (over land and non-tropical ocean) is a yearly average of approx 250K which is approx 50% as efficient a radiator as 300K ground level, providing yet another way for heat to escape.

    I think it’s safe to say that the assumptions that give 3 degrees of warming are the worst possible case assumptions, thus the real world situation is going to be considerably less warming than that, and that anything more than 3 degrees could be considered outrageous.

    For reference cloud data (as measured by satellite) is available from here —

    http://isccp.giss.nasa.gov/products/browsed2.html

  189. Gary Strand states:
    “To put things in perspective (and to prove that Linux is really, really crappy), I spent a few minutes mining the comments in the upcoming Linux-2.6.32 kernel source-code.
    What follows is ugly and unformatted, but it should get the point across. According to the standards of evidence used by AGW skeptics, my little quote-mine here should be considered ironclad proof that Linux simply won’t run on any computer…”

    Yes Gary, except ALL of us computer guys know exactly what they are talking about – what was broken is explained, and so is the current work-around. There are no “what does this mean” things in there. All code has issues. Good code means you know what the issues are, and explains what is being done until such time as a fix is worked out (operating systems are INCREDIBLY complicated – thus you often can’t just “fix” the code that is messed up; you have to do a world of testing, as that code is relied upon by other code, and program code often actually takes a bug into account, so fixing the bug may seriously break other things).

    Part of the issue with this code is not just that it is a mess and has apparent bugs, but that it’s not commented enough for even their programmers to figure out, as seen in the read_me files.

    So yea, guess they are the same.

  190. If the science of CO2 increase causing warmer temperatures was settled 150 years ago please explain to this simple man a couple of points:

    1) The Cambrian Period had CO2 at over 7000 PPM (parts per million) and had the same temperatures we see today.

    2) The Late Ordovician Period was also an Ice Age while at the same time CO2 concentrations then were nearly 12 times higher than today– 4400 ppm.

    According to your “established science” this is impossible. Obviously something far more than CO2 is at work helping to set earth’s temperature.

    Perhaps this: http://www.spaceweather.com/

    Note that the sunspots during the period of warming were averaging 30,000 to 50,000 a year, and this year we won’t reach 1,000.

    Do a little research on sunspots and the Maunder Minimum.

  191. They are using FORTRAN? Seriously? I’m not a programmer, just a Net Admin, but I thought FORTRAN died with the dinosaurs. And I am employed at a college….

  192. Nitanoid I know, but couldn’t these guys afford something like Mathematica or Matlab? They do get the misanthropes discount, don’t they? (No complaints about GNUPlot but it is not Mathematica).

  193. “If the science of CO2 increase causing warmer temperatures was settled 150 years ago please explain to this simple man a couple of points:

    1) The Cambrian Period had CO2 at over 7000 PPM (parts per million) and had the same temperatures we see today.

    2) The Late Ordovician Period was also an Ice Age while at the same time CO2 concentrations then were nearly 12 times higher than today– 4400 ppm. ”

    Rearrangement of the tectonic plates (geologic climate change) is considered a second order forcing capable of producing temperature changes of 15 – 20C. Heat transport by the oceans can cool or warm the planet considerably depending, for example, on whether and how much heat is transported to the poles, and whether/how much the poles become glaciated. (We have the benefit of two very different – almost opposite – geological dynamics operating at the poles in the current epoch to help us understand the profound impact land/ocean structures have on climate)

    When considering climate in the distant past, it is extremely difficult – but important – to establish all pertaining factors. The sun is dimmer the further back in time you go, the atmosphere quite differently populated (more/less aerosols etc), landmasses move around and cover/don’t cover the poles. All these interrelating factors have powerful influence on global temperature on geological time scales. However, comparing Cambrian and Late Ordovician periods to the Holocene (say) is not comparing like with like.

    Our species will not be dealing with tectonic climate change any time soon. While climate change in the geologic past does inform our understanding of current climate, we look to quaternary climate shifts, like the last three or four ice ages, to help establish what may happen in the nearer future. This is comparing like with like.

    Context is everything.

    “Obviously something far more than CO2 is at work helping to set earth’s temperature.”

    Quite so. Greenhouse gases are seen as a first order climate control (15 – 30C), as well as distance from the sun and luminosity. Second order climate controls are land/ocean distribution and ocean currents. Third and Fourth order climate controls are orbital variations (2C – 6C), insolation changes (<1C), volcanic forcing, ocean/atmosphere systems (like ENSO) etc…

    The climate has changed over the course of the Earth's history, as we all know. It was 1000C at the surface in the earliest period, but that factoid isn't going to advance the discussion much. To repeat…

    Context is everything.

    "Note the sunspots during the period of warming was averaging 30,000 to 50,000 a year and this year we won’t reach 1,000."

    And yet this year will be in the top six warmest in the instrumental record. 2008 was a low sunspot year, too, so lag effect can't account for the warmth.

    You get a much better correlation to interannual global temperature variability with ENSO. Neither ENSO nor sunspot activity can account for the warming of the last century, however (nor can any other oscillating weather factor). The only long-term forcing factor that survives scrutiny is GHG increases. We know CO2 has increased. We know the planet has gotten warmer in the time it’s been increasing. No other candidate for long-term warming has emerged or withstood scrutiny. AGW is the best theory around and well-corroborated.

  194. Tel says

    > Heat moves (on average) from a warm object to cooler objects so the Sun is warmer than the Earth,

    and

    > CO2 only absorbs a narrow band of infra-red that is outside the absorption of water and other things.

    You don’t know what you’re talking about. The statements above prove it. Please learn some of the science. Have you read any of the excellent materials on the web? There is Spencer Weart’s book at aip.org and David Archer’s climate science course at U of Chicago.

    http://geoflop.uchicago.edu/forecast/docs/models.html
    http://www.aip.org/history/climate/

  195. “luminous beauty:

    The program you are puzzling over was used to produce a nice smooth curve for a nice clean piece of COVER ART.

    http://www.uea.ac.uk/mac/comm/media/press/2009/nov/homepagenews/CRUupdate

    I release you now back to your silly sport of spurious speculation.”

    So, according to your theory, this poor schmuck Harris spent three agonizing years of working nights and weekends (during the course of which, he lost approximately 85% of his sanity), in order to sketch up some static cover art? Exactly how much did that endeavor cost us? And, more importantly, how much carbon was “emitted” in pursuit of this lofty research goal? I’ve got a box of crayons, some graph paper and a ruler… can I get a slice of some of that juicy grant money?

  196. Dave X, your link describes an induction heater. Energy moves from the electrical source (high temperature), coupled via the magnetic field, to a resistive load (lower temperature than the source).

    You can look at a tungsten lightbulb filament, or induction furnace, or air conditioner (heat pump). It is well understood that electrical energy supplies are equivalent to super-hot thermal supplies. That’s what makes electricity such an all purpose energy delivery medium (also mechanical energy has similar properties but mechanical energy is more difficult to bend round corners).

    Can we just narrow the discussion to a general acceptance that thermodynamics does work as advertised?

    The early years of usenet news pretty much covered all the corner cases of perpetual motion already.

  197. Given that there are a multitude of questions yet to be answered, and that there is data not yet revealed for actual review & duplication, the idea that the world would dedicate trillions of dollars & virtually destroy its economic engine looks to be somewhat insane.
    Worse, even if we spend the trillions & take on massively reduced CO2 output for nearly 50 years, all the world will get out of such a draconian sacrifice is a one-degree temperature reduction. (And that depends on where the temperature reading is taken, since many locations are artificially elevated due to asphalt or other urban heat island effects.)
    We cannot recreate the world based on science that is not revealed, reviewed, replicated, & demonstrably duplicated to prove its accuracy. Add to this the now-open desire revealed by United Nations types for global government, and many suspect the real purpose of all of this has been to establish a global government.
    That speculation is reasonable, since those proposing this are now openly espousing such a system. A socialist system, by the way!

  198. Al Gore – Sun God Shaman?

    When you read the science, you’ll find it to be well known that the Earth experiences 30-35 year temperature cycles. Thus any ‘climatologist’ who supposedly didn’t know this would be considered a fool amongst their colleagues.

    In fact, if this is your job, you can’t NOT know about Earth’s 30-35 year temperature cycles!

    Thus many scientists have been waiting for this latest warming ‘wave’ within the greater ‘tide’ to break, which is very likely what we are seeing right now with all of this record winter weather around the world. It snowed last year in Baghdad, for crying out loud – first time in 100 years!! And now England is snowed in, and the fraudsters have not-so-subtly changed their rhetoric from ‘Global Warming’ to ‘Climate Change Crisis’.

    But this is only a re-branding of their fraud. Al Gore and crew attempted to use an upward fluctuation in this cycle to spread terror for power and profit, just as Inca and Mayan Priests once used their knowledge of eclipses to intimidate their populations, and pretend a direct connection to God.

    The motives of this are plain : Al Gore has a Carbon Credit Exchange waiting in the wings, developed with David Blood and, early in its development, Ken Lay of Enron. These two crooks have already played key roles in collapsing our economies, in their quest for profits under the new ‘Scarcity Capitalism’ Lay was so fond of, and we are about to hand over our economies to their sinister, vile, greedy machinations.

    If you look back at the only really solid data we have, which of course only covers the last 100 years or so – you will see the two previous cycles were a little longer than this one. If you go long range, you see that overall, we ARE in a warming cycle. It’s been warmer centuries ago, it’s hardly a bad thing, there’s nothing we can do about it anyway, so we prepare, is all. Big deal.

    But this game plays out like this – it’s not ‘settled science’, but it is still fairly well known that the latest upward swing would lead to a dip – and we would have a carbon tax imposed just in time for a 35-year downswing in temperatures, when our energy needs would skyrocket. The payout would go beyond bags of money. It would be Global Fascism at its purest and finest.

    Thus they started to go into the schools in the 90’s, to indoctrinate a generation of children in their fraudulent cult – it’s easy to see that it’s been nothing less than timed, if you just do a little ‘hindcasting’ of your own. If this natural upward fluctuation in temperature cycles were to have lasted an extra year or two – assuming we are looking at the onset of the dip we should be and would be expecting if we had honest leaders who weren’t trying to enslave us with phony science – there would be an army of self-hating eco-police coming out to do battle against evil polluting humanity over the next decade, and all the laws would have been in place.

    It is nothing less than FOOLISH to abandon one source of energy without first developing another. To attempt to force the issue with a blatant scientific fraud destroys any alleged value in such an effort – in case anyone has forgotten, Stalin taught us the true meaning of “The End Justifies The Means”.

    What Al Gore and the rest of the Climate Clown Cabal did – and are still trying to do! – is TREASON. Anybody remember what that is? Well, I’ll give you a hint – before there can be treason, you need to still have a country first.

  199. I ask this, with honest curiosity, and with a complete lack of guile:

    The data “corrections” listed in the intro… You’re claiming they were intended to “cook” the data, rather than to correct for data collected near population centers, where the readings would be elevated unnaturally?

