Nov 29

Facts to fit the theory? Actually, no facts at all!

It just keeps getting better and better. Now we learn that the CRU has admitted to throwing away the primary data on which their climate models were based. I quote: “We do not hold the original raw data but only the value-added (quality controlled and homogenised) data.”

This means that even the CRU itself has no idea how accidentally corrupt or fraudulently altered its data might be. And the IPCC reports used the CRU’s temperature reconstructions as a gold standard. So did other climatologists all over the world. And now they can’t be verified! Without a chain of provenance tying them back to actual measurements, every single figure and trendline in the CRU reconstructions might as well be PDOOMA, a fine old engineering acronym expanding to “Pulled Directly Out Of My Ass”.

Words don’t often fail me, but this is beyond ridiculous. How could anyone who calls himself a scientist allow the primary data and metadata to be destroyed? I’ve long thought the AGW case was built on sand, but it’s worse – it’s built on utter vacuum. Somebody will have to do the work of collating raw historical data from the weather stations and time periods the CRU mined all over again before we will know anything about the quality of their results. A significant portion of the climatological literature — everything that used CRU reconstructions or models as an input — will have to be outright scrapped.

While I still think the leaked emails and code make a strong case for active fraud, the scale of this disclosure makes that almost irrelevant. It is, at the very least, procedural incompetence on a breathtaking scale — the most astounding case of my lifetime, and I’m hard-put to think of a parallel in the entire history of science.

UPDATE: High drama! There’s a strong argument, based on the CRU dump, that the CRU’s claim to have lost the data in the 1980s has to be a falsehood. If so, we’ve moved from an incompetence-centered explanation back to a fraud-centered one. But then comes a counterclaim that the reporting was bad and that they’ve destroyed only 5% of their data. Pass the popcorn…

Nov 28

AGW fraud unravels at an accelerating pace

AGW alarmists, led by the “hockey team”, have dismissed criticisms that urban heat-island effects have been distorting surface temperature measurements upwards. Now Vincent Gray, a reviewer of the 2007 IPCC report, says this: not only is the single paper on which this dismissal is based fraudulent, the hockey team knows it’s fraudulent and keeps citing it anyway!

Paleoclimatologist Eduardo Zorita writes: “I may confirm what has been written in other places: research in some areas of climate science has been and is full of machination, conspiracies, and collusion, as any reader can interpret from the CRU-files.”

A Franco-Russian geomagnetics research group that was rebuffed when it tried to get primary temperature datasets from the CRU has assembled its own average-temperature series by going back to ground-station measurements that the hockey team has never had an opportunity to “correct”. The result?

Aside from a very cold spell in 1940, temperatures were flat for most of the 20th century, showing no warming while fossil fuel use grew. Then in 1987 they shot up by about 1 C and have not shown any warming since. This pattern cannot be explained by rising carbon dioxide concentrations, unless some critical threshold was reached in 1987; nor can it be explained by climate models.

The report on this is well worth reading, as it goes into some detail on how the geomagneticians’ statistical methods produced a different — and much higher quality — result than the IPCC did. Among other things, they used daily rather than monthly averaging and avoided suspect techniques for statistically inferring temperature at places it hadn’t actually been measured.

Interestingly, their calculation of average temperature in the U.S. says: “The warmest period was in 1930, slightly above the temperatures at the end of the 20th century.” Could this inconvenient warm spell be what the “VERY ARTIFICAL” correction was intended to suppress?

I can almost pity the poor AGW spinmeisters. Perhaps they still think they can put a political fix in to limit the damage from the CRU leak. But what’s happening now is that other scientists who have seen the business end of the hockey team’s fraud, stonewalling, and bullying are beginning to speak out. The rate of collapse is accelerating.

Nov 26

Facts to fit the theory

On 12 Oct 2009, climatologist and “hockey-team” member Kevin Trenberth wrote:

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong.

Eyebrows have quite rightly been raised over this quote. It is indeed a travesty that AGW theory cannot account for the lack of warming, and bears out what I and other AGW critics have been saying for years about the fallaciousness and lack of predictive power of AGW models.

But the second sentence is actually far more damning. “The data are surely wrong.” This is how and where most scientific fraud begins.

Scientific fraudsters are not, in general, people pushing theories they know to be false. Outright charlatanism is not actually common, because it’s relatively easy to detect. Humans evolved in a socially competitive environment and are rather good at spotting lies, except when they’re fooling themselves because they want to believe.

In general, scientific fraudsters are people who are overinvested in a theory that they believe. Because they know it must be true, they interpret predictive failures as “the data are surely wrong”. It is only a short step from “the data are surely wrong” to fixing the pesky data until it looks right — see my previous post for an immediate example.

It’s only a slightly longer step after that to destroying the inconvenient data that fails to fit your theory — something one of the hockey-teamers actually called for, and something there is strong reason to suspect they actually did.

Sometimes, actually, the data is wrong. Occasionally, experimental error will appear to falsify a theory that is actually correct. But research groups are entitled to the benefit of that doubt only when they meet the most rigorous standards of full disclosure about the “wrong” data. Not when their reaction is to conceal and destroy it.

Nov 25

Will the AGW fraud discredit science?

In response to the mounting evidence of fraud, data falsification, and criminal conspiracy by the “hockey team” clique of climatologists pushing anthropogenic-global-warming (AGW) theory, there has been serious and concerned speculation that the collapse of this scam may damage the credibility of science in general.

This is a reasonable thing to be concerned about, given that the species of toxic slime mold known as “creationists” has been oozing all over the blogosphere with suggestions that evolutionary biology is just as bogus. I think there are three important lessons to be drawn here: one is some reassurance from the history of major scientific frauds, another is a heuristic about when we should be suspicious of “science”, and a third is the importance of transparency.


Nov 24

Hiding the Decline: Part 1 – The Adventure Begins

From the CRU code file osborn-tree6/briffa_sep98_d.pro, used to prepare a graph purported to be of Northern Hemisphere temperatures and reconstructions.

;
; Apply a VERY ARTIFICAL correction for decline!!
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
;
yearlyadj=interpol(valadj,yrloc,timey)

This, people, is blatant data-cooking, with no pretense otherwise. It flattens a period of warm temperatures in the 1930s and 1940s — see those negative coefficients? Then, later on, it applies a positive multiplier so you get a nice dramatic hockey stick at the end of the century.
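
For readers who don’t speak IDL, here is a rough Python translation of the snippet above, purely for illustration. The target-years variable timey is not defined in the excerpt, so the range below is an assumption.

import numpy as np

# Knot years: 1400, then 1904, 1909, ... 1994 in five-year steps.
yrloc = np.concatenate(([1400.0], np.arange(19) * 5.0 + 1904))
# The "fudge factor", scaled by 0.75 as in the IDL source.
valadj = np.array([0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
                   0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75
assert len(yrloc) == len(valadj), "Oooops!"

timey = np.arange(1900, 1995)                # assumed target years
yearlyadj = np.interp(timey, yrloc, valadj)  # linear interpolation, like IDL's interpol()
# Added to a temperature series, this depresses the 1930s-40s slightly and
# inflates values near the end of the century by up to about 1.95 degrees.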

All you apologists weakly protesting that this is research business as usual and there are plausible explanations for everything in the emails? Sackcloth and ashes time for you. This isn’t just a smoking gun, it’s a siege cannon with the barrel still hot.

UPDATE2: Now the data is scaled by the 0.75 factor. I think I interpreted the yrloc entry incorrectly last time, introducing an off-by-one error. The 1400 point (the same as the 1904 point) is omitted, as it confuses gnuplot. These are details; the basic hockey-stick shape is unaltered.

UPDATE3: The graphic is temporarily unavailable due to a server glitch. I’m contacting the site admins about this.

Nov 23

Open-Sourcing the Global Warming Debate

The email and documents recently netjacked from the Climate Research Unit at the University of East Anglia raise serious questions about the quality of the research being used to underpin major public-policy decisions.

In the open-source software community, we understand about human error and sloppiness and the tendency to get too caught up in a pet theory. We know that the most effective way to combat these tendencies is transparency of process — letting the code speak for itself, and opening the sources to skeptical peer review by anyone.

There is only one way to cut through all of the conflicting claims and agendas about the CRU’s research: open-source it all. Publish the primary data sets, publish the programs used to interpret them and create graphs like the well-known global-temperature “hockey stick”, publish everything. Let the code and the data speak for themselves; let the facts trump speculation and interpretation.

We know, from experience with software, that secrecy is the enemy of quality — that software bugs, like cockroaches, shun light and flourish in darkness. So, too, with mistakes in the interpretation of scientific data; neither deliberate fraud nor inadvertent error can long survive the skeptical scrutiny of millions. The same remedy we have found in the open-source community applies – unsurprisingly, since we learned it from science in the first place. Abolish the secrecy, let in the sunlight.

AGW true believers and “denialists” should be able to agree on this: the data get the last word, because without them theory is groundless. The only way for the CRU researchers to clear themselves of the imputation of serious error or fraud is full disclosure of the measurement techniques, the raw primary data sets, the code used to reduce them, and of their decisions during the process of interpretation. They should have nothing to hide; let them so demonstrate by hiding nothing.

The open-source community has many project-hosting sites that are well adapted for this sort of disclosure. If they require assistance in choosing one and learning how to create and manage an open-source project, I and many others in the open-source community will be happy to provide it.

For the future, we need to restore the basic standards of science. No secrecy: no secrecy of data, no secrecy of experimental methods, no secrecy of data-reduction or modeling code. Such transparency and accountability are especially vital when the public-policy stakes are large. This is among the excellent reasons that both the US and UK have Freedom of Information Acts, and the logic of those acts has perhaps never applied more pressingly than it does here.

Nov 21

Hiding the Decline: Prologue

According to the summaries I’ve seen, the 61 megabytes of email and documents net-jacked from the Climate Research Unit a few days ago do not — quite — reify conservatives’ darkest fantasies about “the team” (as the network of professional anthropogenic-global-warming alarmists communicating through CRU likes to style itself). To do that, they’d have to contain marching orders from the Socialist International.

However, the excerpts I’ve seen are already quite damning enough; among other things, they are evidence of criminal conspiracy to violate the Freedom of Information Act. And I no longer have to speculate about the rest; I’ve downloaded the documents from Pirate Bay and will study them myself.

For those of you who have been stigmatizing AGW skeptics as “deniers” and dismissing their charges that the whole enterprise is fraudulent? Hope you like the taste of crow, because I do believe there’s a buttload of it coming at you. Piping hot.

Am I going to blog about it? Heh…try to stop me…

UPDATE: I’ve read about 10% of the material and started a file of notes on it, but been delayed by preparing for a major release on one of my projects. In the meantime, read this excellent summary with links to the original emails.

Nov 16

Barbecue kings!

John Birmingham writes from Australia:

Even, and this is gonna hurt, the Americans have it all over us when it comes to cooking with fire, iron and tongs. In fact it’s arguable the American barbecue, or rather its plethora of regional variations on barbecue, set the gold standard worldwide for applying heat to meat while out of doors. While the popular image of American cooking, at least as practised by average Americans, involves squeezing a plastic sauce packet over something nasty in a chain restaurant, the truth is their barbecue specialists would put ours to shame. Undying, unutterable shame.

Alas, John, it is so. I have eaten barbecue all over the U.S. and the world, and the kings of the genre are in this country. Not in my part of it; I’m a Boston-born northerner and most barbecue where I live is as bland and bad as you describe. As a general rule in the U.S., the further south you go, the better the barbecue gets, with the acme reached in south Texas. (Though the area around Memphis, further north, is a contender.)

Internationally, almost nobody even competes with the Southern U.S. for the barbecue crown. Brazilian churrasco is the one exception I can think of – that stuff can give good ol’ Texas ‘cue a run for its money. But you ex-British-Empire types aren’t even properly in the running. I’ve been to a backyard braai in South Africa and, while the spirit was there, the seasoning and cooking technique were sadly lacking, much like what you describe.

American cooking in general gets a bad rap internationally that it doesn’t deserve. It’s as though foreigners think it’s still 1965 here or something. I can remember a time in my childhood when the slams were richly deserved — heck, I remember returning here from Europe in 1971 and having to wait more than a decade before I saw a decent piece of bread! But Americans got a clue about food in the 1980s and haven’t lost it since. I learned this when I was traveling intensively around the turn of the century; most places I visited, even the “high-quality” food was inferior to what I ate at home.

UPDATE: I suppose it’s worth noting that Brazilian-style churrascarias have become the most recent high-end restaurant fad in the U.S., suggesting that other Americans generally agree with me that the style competes well with native ‘cue. Sadly, Korean barbecue failed to become naturalized here when that was tried in the 1990s.

Nov 15

The pragmatics of webscraping

Here’s an amplification of my previous post, Structure Is Not Meaning. It’s an excerpt from the ForgePlucker HOWTO on writing code to web-scrape project data out of forge systems.

Your handler class’s job is to extract project data. If you are lucky, your target forge already has an export feature that will dump everything to you in clean XML or JSON; in that case, you have a fairly trivial exercise using BeautifulStoneSoup or the Python-library JSON parser and can skip the rest of this section.
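
For that lucky case, here is a minimal sketch, assuming the forge can dump tracker state as a JSON file; the filename and field names are made up for illustration.

import json

with open("tracker-export.json") as fp:
    issues = json.load(fp)    # one parse and you're done
for issue in issues:
    print(issue.get("id"), issue.get("summary"))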

Usually, however, you’re going to need to extract the data from the same pages that humans use. This is a problem, because these pages are cluttered with all kinds of presentation-level markup, headers, footers, sidebars, and site-navigation gorp — any of which is highly likely to mutate any time the UI gets tweaked.

Here are the tactics we use to try to stay out of trouble:

1. When you don’t see what you expect, use the framework’s self.error() call to abort with a message. And put in lots of expect checks; it’s better for a handler to break loudly and soon than to return bad data. Fixing the handler to track a page mutation won’t usually be hard once you know you need to – and knowing you need to is why we have regression tests.

2. Use peephole analysis with regexps (as opposed to HTML parsing of the whole page) as much as possible. Every time you get away with matching on strictly local patterns, like special URLs, you avoid a dependency on larger areas of page structure which can mutate.

3. Throw away as many irrelevant parts of the page as you can before attempting either regexp matching or HTML parsing. (The most mutation-prone parts of pages are headers, footers, and sidebars; that’s where the decorative elements and navigation stuff tend to cluster.) If you can identify fixed end strings for headers or fixed start strings for footers, use those to trim (and error out if they’re not there); that way you’ll be safe even if the headers and footers mutate. This is what the narrow() method in the framework code is for (see the sketch after this list).

4. Rely on forms. You can assume you’ll be logged in with authentication and permissions to modify project data, which means the forge will display forms for editing things like issue data and project-member permissions. Use the forms structure, as it is much less likely to be casually mutated than the page decorations.

5. When you must parse HTML, BeautifulSoup is available to handler classes. Use it rather than hand-rolling a parser, unless you have to deal with markup so badly malformed that even BeautifulSoup cannot cope with it.
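
Here is a minimal sketch of tactics 2 and 3 in combination. This is not actual ForgePlucker code; the marker strings and the URL pattern are invented, and the real framework’s narrow() may differ in detail.

import re

def narrow(page, start_marker, end_marker):
    # Keep only the stable middle of the page; fail loudly if the markers vanish.
    begin = page.find(start_marker)
    end = page.find(end_marker)
    if begin < 0 or end < 0:
        raise ValueError("page boilerplate changed; handler needs updating")
    return page[begin + len(start_marker):end]

def issue_ids(page):
    body = narrow(page, "<!-- begin content -->", "<!-- end content -->")
    # Peephole match on a special URL shape; the rest of the page structure is ignored.
    return re.findall(r'href="[^"]*/issues/(\d+)"', body)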

Actual field experience shows that throwing out the portions of a page most susceptible to mutation is a valuable tactic. Also, think about where in the site a page lives. Entry pages and other highly visible ones tend to get tweaked most often, so the tradeoffs push you towards peephole methods and away from relying on DOM structure. Deeper in the site, especially on pages that are heavily tabular and mostly consist of one big form, relying on DOM structure is less risky.
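
As an illustration of that last point and of tactic 4, here is a minimal sketch that leans on form structure via BeautifulSoup (using the modern bs4 package); the form name and field layout are hypothetical rather than any real forge’s.

from bs4 import BeautifulSoup

def tracker_fields(page):
    # Pull current values out of the issue-editing form rather than the
    # surrounding page decoration, which mutates far more often.
    soup = BeautifulSoup(page, "html.parser")
    form = soup.find("form", attrs={"name": "item_form"})
    fields = {}
    for select in form.find_all("select"):
        chosen = select.find("option", selected=True)
        fields[select.get("name")] = chosen.get("value") if chosen else None
    for hidden in form.find_all("input", attrs={"type": "hidden"}):
        fields[hidden.get("name")] = hidden.get("value")
    return fields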

Nov 09

Ego is for little people

When I got really famous and started to hang out with people at the top of the game in computer science and other fields, one of the first things I noticed is that the real A-list types almost never have a major territorial/ego thing going on in their behavior. The B-list people, the bright second-raters, may be all sharp elbows and ego assertion, but there’s a calm space at the top that the absolutely most capable ones get to and tend to stay in.

I’m going to be specific about what I mean by “ego” now, because otherwise much of this essay may seem vague or wrongheaded. I specifically mean psychological egotism, not (for example) ethical egoism as a philosophical position. The main indicators of egotism as I intend it here are loud self-display, insecurity, constant approval-seeking, overinflating one’s accomplishments, touchiness about slights, and territorial twitchiness about one’s expertise. My claim is that egotism is a disease of the incapable, and vanishes or nearly vanishes among the super-capable.

It’s not only scientific fields where this is true. For various reasons (none of which, fortunately, have been legal troubles of my own) I’ve had to work with a lot of lawyers. I’m legally literate, so a pattern I quickly noticed is this: the B-list lawyers are the ones who get all huffy about a non-attorney expressing opinions and judgments about the law. The one time I worked with a stratospherically supercompetent A-list firm (I won’t name them, but I will note they have their own skyscraper in New York City) they were so relaxed about recognizing capability in a non-lawyer that some language I wrote went straight into their court filings in a lawsuit with multibillion-dollar stakes.

This sort of thing has been noted before by other people and is almost a commonplace. I’m bringing it up to note why that’s true, speaking from my own experience. It’s not that people at the top of their fields are more virtuous. Well…actually I think people at the top of their fields do tend to be more virtuous, for the same reason they tend to be more intelligent, less neurotic, longer-lived, better-looking, and physically healthier than the B-listers and below. Human capability does not come in neatly divisible chunks; almost every individual way that humans can excel is tangled up with other ways at a purely physiological level, with immune-system capability lurking behind a surprisingly large chunk of the surface measures. But I don’t think the mean difference in “virtue”, however you think that can actually be defined, explains what I’m pointing at.

No. It’s more that ego games have a diminishing return. The farther you are up the ability and achievement bell curve, the less psychological gain you get from asserting or demonstrating your superiority over the merely average, and the more prone you are to welcome discovering new peers because there are so damn few of them that it gets lonely. There comes a point past which winning more ego contests becomes so pointless that even the most ambitious, suspicious, external-validation-fixated strivers tend to notice that it’s no fun any more and stop.

I’m not speaking abstractly here. I’ve always been more interested in doing the right thing than doing what would make me popular, to the point where I generally figure that if I’m not routinely pissing off a sizable minority of people I should be pushing harder. In the language of psychology, my need for external validation is low; the standards I try hardest to live up to are those I’ve set for myself. But one of the differences I can see between myself at 25 and myself at 52 is that my limited need for external validation has decreased. And it’s not age or maturity or virtue that shrunk it; it’s having nothing left to prove.


Nov 05

Maybe if moral cowardice cost money, it would be less common?

Heh. State representative Fred Maslack of Vermont has proposed a bill under which non-gun-owners would have to register and pay a fee. Entertainingly enough, there is actual justification for this in a careful reading of the Vermont state constitution.

The Hon. Rep. Maslack is joking. I think. And I’m against requiring people who don’t want to bear arms to do so. But gad, how tempting – because underlying his argument is a truth that the drafters of the Vermont and U.S. constitutions understood. People who refuse to take arms in defense of themselves and their neighbors are inflicting a cost on their communities far more certainly than healthy people who refuse to buy medical insurance (and yes, I do think that proposed mandate is an intended target of Maslack’s jab). That externality is measured in higher crime rates, higher law-enforcement and prison budgets, and all the (dis)opportunity costs associated with increased crime. And that’s before you get to the political consequences…

I’ve never made a secret of my evaluation that refusal to bear arms is a form of moral cowardice masquerading as virtue. Real adults know how precious human life is, when they are ethically required to risk it on behalf of others, and when killing is both necessary and justified. Real adults know that there is no magic about wearing a police or military uniform; those decisions are just as hard, and just as necessary, when we deny we’re making them by delegating them to others. Real adults do not shirk the responsibility that this knowledge implies. And the wistful thought Rep Maslack’s proposal leaves me with is…maybe if moral cowardice cost money and humiliation, there would be less of it.

Nov 05

Structure Is Not Meaning

So, I announce ForgePlucker, and within a day I’ve got some guy from Y Combinator sneering at me for using regular expressions to parse HTML. Says it’s “crappy code”. The poor fool…he has fallen victim to a conceptual trap which I, fortunately, learned to avoid decades ago. I could spout a freshet of theory about it, but instead I’m just going to utter a maxim: Never confuse structure with meaning.


Nov 04

Announcing ForgePlucker

I’ve been strongly hinting in recent blog entries that I planned to do something concrete about the data-jail problems of present open-source hosting sites. Because I believe in underpromising and overperforming, I decided at the outset not to announce a project until I could not only show working code, but code with wide enough coverage to make it crystal-clear that the project goals are achievable with a relatively modest amount of effort.

That time has arrived. I am very pleased to announce ForgePlucker, a project aimed at developing project-state extractor software for backup, offline analysis, and (eventually) re-importation. The proof-of-concept code can extract complete issue-tracker state from Berlios, Gna!, or Savane — and issue trackers are probably the hardest part of the job. I expect extraction of repository histories and developer permissions tables to be easier. Extraction of mailing-list state is probably a bit trickier than either of those, but doable.
