OK, this is lovely. Remember Phil Jones of the CRU saying they had retained only “homogenized, value-added” data rather than raw measurements? It seems that well before the CRU leak there was strong circumstantial evidence that much (perhaps all) of the supposed global-warming signal is accounted for by “adjustments” made to the data.
Get a load of this graphic:
This is the NOAA (National Oceanic and Atmospheric Administration) telling us itself what the “adjustments” do to the U.S historical temperature record. If you look over at the scale on the left, you’ll see that these “adjustments” explain about 80% 50% of the supposed global-warming signal between 1900 and 2000.
Gee, does that shape look…familiar? Why, yes. Yes it does. The Climate Skeptic post I lifted this from reproduces my plot of the “VERY ARTIFICAL correction for decline!!”. It’s possible to make too much of the similarity, I think; the “decline” that VERY ARTIFICAL was “correcting” for was in tree-ring proxies for temperature, not measured ground temperature.
Still…isn’t it curious that every time we dig into the supposedly “value added, homogenized” data, we find a similar pattern of “adjustments” in that oh-so-familiar hockey-stick shape?
Why, it’s almost as if the people doing the “adjusting” imposed their preconceptions on the data, fixing it to conform to pet theories that just happen to be lucrative funding sources as well. But, no, that could never happen, could it?
UPDATE: Estimate of error changed from 80% to 50% because the scale is Fahrenheit. I waited to do this until I could get an AGW alarmist to commit to a specific correction, so I couldn’t be accused of shading the number to favor a skeptical position.
> If you look over at the scale on the left, you’ll see that these “adjustments†explain about 80% of the supposed global-warming signal between 1900 and 2000.
You mean the scale that reads Farenheit, meaning the “adjustments” shown account for roughly 0.5 degrees Farenheit?
If only we had satellite data to compare to the adjusted temps! Then we could determine whether the adjustments are necessary. On the other hand, we could always just engage in baseless speculation according to our own preconceptions.
The scale is in Fahrenheit. So the correction is just about 0.25C or 1/3rd of the published rise in the 20th century.
On the other hand: The curve should have a negative slope due to correction for urban heat island effect which would explain much or even all of the official “warming”.
Sure, we could put more weight on the satellite data, because it’s much, much more accurate to take temperature from 100+ miles up than it is to use on-site instruments.
This is anecdotal, but take a look at this interesting error in NZ temperatures: http://www.jgc.org/blog/2009/12/theres-something-seriously-odd-about.html
> If only we had satellite data to compare to the adjusted temps!
Knowing how much the homogenized data differs from the actual data in 2000 doesn’t tell us how the homogenized data differs from the actual data in 1960 or before.
And then there’s the question of how the satellite data is calibrated….
I’m assuming that “USHCN” refers to the U.S. data (country code 425) extracted from the GHCNv2 data, and represents the difference between ‘v2.mean.Z’ and ‘v2.mean_adj.Z’? If so, why is the chart shown in Fahrenheit, as the original data are all in Celcius?
(data can be obtained from ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2)
>If so, why is the chart shown in Fahrenheit, as the original data are all in Celsius?
Interesting question. The text of the Climate Skeptic article makes more sense if the F is simply a mislabeling of the chart.
@esr: doesn’t look like a simple chart mislabeling problem to me. The entire page on the NOAA web site is in Fahrenheit.
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html
ANOTHER DATA ADJUSTMENT EXAMPLE – Read how the professionals ‘scientifically’ adjust the weather data – A “weather geek” looks into how NASA Goddard Institute for Space Science (GISS) adjusted the Darwin Australia raw-data temperature record. Raw-data shows a DECLINE in temperature. NASA GISS ‘professionally’ adjusted data shows a RISE in temperature (“hide the decline”?) … http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/
ICE CORE DATA – Read what the Greenland Ice Core Temperature Proxy Data shows – whether or not you believe there is a problem all depends on your time-scale … http://www.foresight.org/nanodot/?p=3553 … just keep scrolling down the page. After looking at the ice core temperature proxy data you will conclude that:
(1) the Earth is warming – again; and that,
(2) the Earth is not yet as hot as it was previously (e.g. Medieval Warm Period, earlier periods of mass industrialization – grin); and that,
(3) the Earth will probably get wicked cold sometime in the future
You will be left wondering both what the big rush is _AND_ why the heck you haven’t seen this Greenland Ice Core Temperature Proxy data before
EMAIL: LEAKED or STOLEN? Read how an uber-unix mail admin geek figures out the Climategate emails are a “Whistle-blower” leak of a UK “Freedom of Information Act” (FOIA) info gathering activity … http://wattsupwiththat.com/2009/12/07/comprhensive-network-analysis-shows-climategate-likely-to-be-a-leak/#more-13821 … email really is forever.
Iowahawk begs forgiveness for the departure from the usual Iowahawk bill of fare.
He writes “Fables of the Reconstruction”, a detailed how-to-guide for replicating the climate reconstruction method used by the so-called “Climategate” scientists. Not a perfect replication, but a pretty faithful facsimile that you can do on your own computer, with some of the same data they used.
Got 30 to 60 minutes, a modest amount of math and computer skills, and curiosity? Read on here … http://iowahawk.typepad.com/iowahawk/2009/12/fables-of-the-reconstruction.html
What I am curious about is why the adjustments don’t have a downward slope. Aggregate adjustment has really trended upwards over the past forty years, despite urban expansion and population growth? I mean, I count a total of four downward adjustments since 1960.
I saw a front page above the fold story in the newspaper about how the Associated Press has analyzed the leak and they find that there’s no cause for concern and that Human-centric global warming is still real…
> I saw a front page above the fold story in the newspaper about how the Associated Press has analyzed the leak and they find that there’s no cause for concern and that Human-centric global warming is still real…
Well, that’s a relief!
:)
>The text of the Climate Skeptic article makes more sense if the F is simply a mislabeling of the chart.
Theory 1:
First NOAA make a labelling mistake, then Climate Skeptic fails to read the label, and the two errors cancel giving a serendipitous correct result.
Theory 2:
NOAA have drawn the graph in Fahrenheit for an American audience.
Which theory would a rational person prefer?
If you controlled the primary data set used by 90% of the world’s climate researchers (let’s call it, oh I don’t know – how about “GHCN”), and if you wanted to guarantee a consensus that the planet was warming, wouldn’t adjusting the data be precisely the approach to take?
All of the climate scientists would discover a warming signal. They’d all publish in the peer reviewed literature. in 15 years, it would be the accepted wisdom that temperatures were increasing suddenly and dramatically.
I mean, after all, everyone sees it. The science would indeed be settled.
You know in a recent thread on this matter, we discussed falsifiability, and the frustration many people have with the AGW folks unwillingness to give a falsifiable claim. Well here is the thing: Al Gore in Denmark made the following claim:
“This is the volume metric measure of the ice and some of the models suggest to Dr Maslowski that there is a 75 per cent chance that the entire north polar ice cap, during the summer months, could be completely ice-free within five to seven years.”
Link here
My congratulations to Mr. Gore. This is a falsifiable claim (though of course there is a little weasel in there too.) Anyone of the AGW supporters here willing to stand by that claim — that the entire north polar ice cap will be ice free during the summer months by 2014-2016?
>NOAA have drawn the graph in Fahrenheit for an American audience.
I don’t think anyone is disputing that this is a Fahrenheit measurement. Still, a suggestion that the gross adjustment when converted for Celsius (or the upward trend of adjustment) is statistically negligible would be a little disingenuous.
That said, this alone does not mean that the adjustments themselves are dubious. But it does beg the question of how much weight was given to population growth and UHIs over the same period. Urbanization and industrialization certainly wasn’t a “hockey stick.” More like a moderately steep ski slope.
As much as it pains me to say it, pete is probably right on this one. Hey, it had to happen sometime. The actual graphic he referenced at NOAA is here -http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif
The black line corresponds to the climate sceptic article fairly closely, so I’m going to assume that’s where it came from. So that one’s the adjustment for time of day of the observations. So in itself it may actually be ok, although it certainly bears scrutiny, and until they release their code I’m not willing to accept it at face value.
The others are somewhat more problematic. The red line is an adjustment for the switchover to MMTS, and it seems to me that this one is going the wrong direction, because at the time the switchover happened in many cases the weather stations were moved closer to buildings in order to be able to transmit their automated readings. Still, this one is a one-time deal and small.
The yellow line is the adjustment due to site moves, and this is definitely the most problematic. The premise that site moves predominantly have required an adjustment upwards is ludicrous on its face, and that’s another 2 tenths of a degree F trend, steadily rising. The logic and the code behind this step need a lot of scrutiny here.
The blue line is an adjustment due to filling in missing data – it adds about a tenth of a degree, but there’s not much trend there. The final purple line is the adjustment for urban heating, which is a total of about a tenth of a degree over the century downwards.
So on its face it looks like almost all of the adjustments that add trend go one way, with the exception of the UHI adjustment.
This page at NOAA explains some of the “opaque and largely undisclosed” reasons for the adjustments.
>The black line corresponds to the climate sceptic article fairly closely, so I’m going to assume that’s where it came from.
The graph esr’s posted is two down from the one you found. I prefer your one — for one thing it makes it clear that they have adjusted for the UHI effect.
>The yellow line is the adjustment due to site moves, and this is definitely the most problematic. The premise that site moves predominantly have required an adjustment upwards is ludicrous on its face, and that’s another 2 tenths of a degree F trend, steadily rising. The logic and the code behind this step need a lot of scrutiny here.
From NOAA:
>The yellow line is the adjustment due to site moves, and this is definitely the most problematic. The premise that site moves predominantly have required an adjustment upwards is ludicrous on its face, and that’s another 2 tenths of a degree F trend, steadily rising. The logic and the code behind this step need a lot of scrutiny here.
The differential between site moves and UHI is probably what deserves the most scrutiny here. Of course, it’s also a very old debate at this point: Have we underestimated the warming as it pertains UHI or overestimated cooling as it pertains to the site moves? Possibly one or both.
Yup. it does say that pete, it’s just that I don’t find it believable in the least that the trend would be all one way. As many, many stations, anecdotally, were moved closer in to buildings, had parking lots built next to them, had air conditioner exhausts put near them, etc. So as I said, this step bears great scrutiny, because it’s counterintuitive.
But I have to take exception that the page you linked makes them not ‘opaque and largely undisclosed’. Until they post the code, and the raw data, it’s exactly that. They’ve given a hundred thousand foot overview of a process that needs to be examined at the one foot level.
>Until they post the code, and the raw data, it’s exactly that.
The raw data’s here: http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/index.php
I agree that the code would be nice (I think the GISTEMP team have released their code).
Better grab that CRU data while you can…it seems to be disappearing:
http://wattsupwiththat.com/2009/12/14/whats-going-on-cru-takes-down-briffa-tree-ring-data-and-more/
>Better grab that CRU data while you can…it seems to be disappearing:
This is old news. They’ve been redirecting to that “news” page for quite some time now. I don’t think that there’s anything necessarily sinister about this. It could be that a spike in massive requests was blasting their bandwidth or crapping out the webserver. Also, they are clearly in “damage control” mode, so I’m sure they’d prefer to control the flow of information as tightly as possible.
I don’t think there’s any need to sensationalize every banal detail of this story. The leak materials themselves should remain the focus. This is a P.R. nightmare for UEA, and they will likely conduct themselves in many ways that may appear “sinister” but are actually just practical business decisions for them.
I have no doubt that the listed reasons for the corrections are all defendable and correct. The list may be incomplete, however, in that there may be corrections that should have been applied but were not.
It could be the well-known problem of applying corrections to your data until you get the right answer and then stopping. Nothing nefarious there, just human nature. This effect was seen in some famous measurements: the speed of light and the charge of the electron. For the electric charge, Milliken made a small error in his original paper, and the result he got was too small. In subsequent papers, you can see the results slowly rise until they reach the correct value. Nobody intentionally fudged their data, but they stopped looking for systematic effects when they got nominal agreement with the “correct” result.
If that kind of thing can happen for an easily-replicable experiment such as those described above, it is extremely likely to be a problem for the large corrections applied to the temperature data. I am not an expert in temperature measurements, but as an experimentalist I am quite disturbed to see corrections applied that are (a) the same order of magnitude as the claimed effect, and (b) the same shape as the claimed effect.
I don’t care how well the corrections are described or how well they are justified; the two properties above are cause for concern. I see no evidence for intentional fraud on the part of most climate scientists, but I am concerned that they do not seem to be aware of the capacity of the human mind to fool itself.
>I am quite disturbed to see corrections applied that are (a) the same order of magnitude as the claimed effect, and (b) the same shape as the claimed effect.
The shape of UHI is also a little disturbing to me, if only for its smoothness. It’s a pretty good indicator that UHI correction was, at the very least, grossly oversimplified and treated with far less value than the corrections for site moves. It seems to *generally* proceed in concert with the rate of population growth (although, even that is somewhat arguable), but is correcting for UHI as simple as that? For instance, while the population rate did increase somewhat steadily, the migrations of those populations and the growth rates of individual industrial and urban sectors surely were not so stable.
http://www.populartechnology.net/2009/10/peer-reviewed-papers-supporting.html
>This page at NOAA explains some of the “opaque and largely undisclosed†reasons for the adjustments.
But notably, it doesn’t have very much to say about how it arrived at its UHI adjustment… except “Adjustments to account for warming due to the effects of urbanization (purple line) cooled the time series an average of 0.1F throughout the period of record.” Well, okay.
It’s notable that NOAA’s homogenized data has always trended warmer than GISS, averaging +0.40C/century after adjustment. It seems curious to me that so much emphasis can be placed on mankind’s effect on nature via carbon emissions, while the potential effects of population, industrial and urban scaling are dismissed out of hand as adjustment indicators… when convenient, that is.
Even more strange is that the corrections applied for the station moves are framed in a way that *is dependent* on UHI (“During this time, many sites were relocated from city locations to airports and from roof tops to grassy areas. This often resulted in cooler readings than were observed at the previous sites.”), while the flat rate of decline applied for UHI doesn’t appear to be connected meaningfully to the site moves.
In other words, if that chart is showing us something meaningful, then we should be able to observe some meaningful relationship between the “yellow” line and the “purple” line. As it stands, it seems as though either one or perhaps both are fairly arbitrary.
Check out Richard North’s EUReferendum-
The IPCC head, Dr Pachauri, is set to get rich -like Al Gore- off the climate scare.
In fairness, Pachauri’s not as dumb as Al Gore.
@Jack Williams and @esr:
Yep. The USHCN looks like it is a different data set than the GHCN. I wonder what the differences are between USHCN data and the GHCN data? I guess we could look at that. So far the main differences are that the USHCN data is, in fact, in degrees Fahrenheit. (The data for GHCN and USHCN are actually represented in the raw files in tenths of a degree Celcius and Fahrenheit, respectively.)
Just like the GHCNv2 data, the USHCN has a raw set and an adjusted set.
Maybe we should do a similar graph for GHCNv2. The files are drop dead simple to parse: they’re old-fashioned fixed-width ASCII files. While I could whip up such a program in Python in day or two, I’ll bet ESR could do it 10 minutes… ;)
>Interesting question. The text of the Climate Skeptic article makes more sense if the F is simply a mislabeling of the chart.
Or, as pete suggested, this is another example of contrarians being overly generous in the interpretation of data apparently supporting a politically palatable position.
Not being a climatologist, paleoclimatoligist, meteorologist or statistician, I have only slightly more technical understanding of the science than an educated layman. But I like to think I do understand people and I can observe how they act and speak. Basically, I know bullshit when I hear it.
>I don’t think that there’s anything necessarily sinister about this. It could be that a spike in massive requests was blasting their bandwidth or crapping out the webserver. Also, they are clearly in “damage control†mode, so I’m sure they’d prefer to control the flow of information as tightly as possible.
I didn’t say there was anything sinister about it. Just said you better grab it while you can.
“Or, as pete suggested, this is another example of contrarians being overly generous in the interpretation of data apparently supporting a politically palatable position.”
Or simply missing the units on the graph. If you aren’t an American, you would EXPECT to see the graph in Celsius. Since NOAA is a United States agency, I imagine they provided the graph in the units used in the United States.
Then again, has anyone bothered to check if the values are *really* in degrees F? It’s also possible that the graph template they used defaults to degrees F in the legend.
It likely is. How is it that you can see that behavior in others but not yourself?
In any case, the difference is not particularly important; a factor of two doesn’t really change the argument much unless the claim was that the entire recent warming trend has been an instrumental artifact, which is a pretty untenable position.
By the way, I have been pretty critical of the climate science community since I started looking into these issues, but here is a great example of how scientists should behave. It looks as if Climategate is causing scientists to be more careful about public (mis-)statements about their work.
>Or, as pete suggested, this is another example of contrarians being overly generous in the interpretation of data apparently supporting a politically palatable position.
This is getting ridiculous. So now, we are fighting life-or-death over what is essentially 100 years of observable data (0.00000002% of the history of Earth “climate”)? Meanwhile, temp forecasts using the Hansen model have hobbled towards 50%, and that’s being generous, considering how many adjustments were made along the way (“Solar variance” for example is proffered as a explanation for poor forecasting, whenever they aren’t busy vilifying and destroying solar variance skepticism) and the virtual totality of the model that is based on proxy reconstructions. Mann and Hansen, apparently, are more intelligent and prescient than any scientific mind in history, even though an odds-on bet on their temp predictions would have you flying back from Vegas on an econo-flight.
The hubris is staggering… or would be had the history of scientific method not been littered with similar junk. Perhaps we are entering another “Medieval Period.” Maybe irrationality trends with surface warming. I wonder if anyone has plotted that relationship.
pete, if you have two independent data sets, and you adjust one of them because it has errors so that it matches the other, then you only have one independent data set. You can’t then go on to say “Look, we have two independent data sets both of which agree thus our proposition is proven!” I’m really really dubious of “correcting” raw data. If you have to correct it, it’s not data, it’s guesswork.
Jessica Boxer, don’t kid yourself. Gore is merely making a claim about what is possible. It’s also possible that he’ll jump up, grab three balls, and start juggling with no practice or training. It’s also possible that he’ll stand on his head while doing so. *Many* things are possible. A falsifiable claim and a probability of it happening are simply not compatible ideas. If it fails … well, OBVIOUSLY it was one of the 25% of the probability, thus, Gore wasn’t wrong, isn’t wrong, and won’t be wrong.
For morgan greywolf (assuming that the HTML pre tag works here):
def gen_list(fmt, ubound, count, plusChar, minusChar):
base = (ubound-(x*5) for x in range(count))
def fix(inp):
return fmt % (inp-5, inp, plusChar) if inp>0 else fmt % (abs(inp), abs(inp-5), minusChar)
return [ fix(x) for x in base ]
_lats = gen_list(“%2d-%2d%1s”, 90, 36, “N”, “S”)
_longs = gen_list(“%3d-%3d%1s”, 180, 72, “W”, “E”)
def line_slicer(line, chunks):
“””Return the input line in 6 character chunks.”””
offset = 0
for x in range(chunks):
yield line[offset:offset+6]
offset += 6
class MonthData (object):
“””
self.mapping gives you a dict of dicts. Outer key is latitude,
Inner key is longitude
“””
def __init__(self, date_list):
self._month = date_list[0]
self._year = date_list[1]
self._mapping = {}
@property
def month(self):
return self._month
@property
def year(self):
return self._year
@property
def mapping(self):
return self._mapping
def fill(self, fiter):
“””fiter is an open file iterator that is positioned to read the first
line of the month’s data set. That is the line *after* the month year
line!”””
for latitude in _lats:
currmap = {}
longitude = iter(_longs)
for x in range(6):
line = fiter.next()
for val in line_slicer(line, 12):
currmap[longitude.next()] = int(val)
self.mapping[latitude] = currmap
def __str__(self):
retval = “%d %d\n” % (self.month, self.year)
vals = (“%s %s %d” % (x, y, self.mapping[x][y]) for x in _lats for y in _longs)
return “\t”.join((retval, “\n\t”.join(vals)))
def toCsv(self):
“””Returns a value suitable for csvwriter.writerows() method.”””
return ((self.month, self.year, x, y, self.mapping[x][y]) for x in _lats for y in _longs)
class GridFile(object):
“”” Read in a NOAA grid file. “””
def __init__(self, filename):
“”” Read the file given by “filename” and create a list of MonthData objects.”””
self._mlist = []
fin = open(filename, “r”)
for line in fin:
curr = MonthData([ int(x) for x in line_slicer(line, 2) ])
curr.fill(fin)
self._mlist.append(curr)
fin.close()
@property
def months(self):
return self._mlist
# To dump out in csv….
# test = GridFile(“min_grid_1880_2008.dat”)
# import csv
# wfile = open(“some.csv”,”wb”)
# writer = csv.writer(wfile,quoting=csv.QUOTE_NONNUMERIC)
# for m in test.months:
# writer.writerows(m.toCsv())
# wfile.close()
# It would not be very difficult to dump the data into
# a MySql database, for that matter.
*bleep* It didn’t. Since it’s Python, the all-important indentation is missing. If you want a copy, mflacy at verizon dot net.
eric, you are absolutely convinced that AGW is false? If so, what is in your opinion, the clearest, strongest, argument against it?
ESR says: Predictive and retrodictive failure. There’s no greenhouse signal in tropical-atmosphere temperature profiles, and global temperature measures stopped rising any faster than the long-term postglacial trendline in 1998-1999. Retrodiction fails too; for the AGW models to look good, the Medieval Warm Period and the Roman Climactic Optimum have to be gaslighted out of existence.
Mark: s’okay. I pulled it out the page source. ;)
In case people are looking for the USHCN data, the URL is ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly — NOT the URL pete provided, which is the HTTP page for the GHCN data.
@esr: Don’t give them any ideas!
>My congratulations to Mr. Gore. This is a falsifiable claim (though of course there is a little weasel in there too.)
A little? I’ve yet to hear anyone… at NASA, at NSIDC, at NAS… step forward to affirm this strident, apocalyptic claim. Even the head researcher in the study Gore quoted shrunk in horror when asked to qualify this 75% probability, and instead said he would only vouch for up to a “50%” likelihood. Anybody have a coin?
But, of course that’s besides the point. The point was not to make a falsifiable claim. The point, as usual, was to cherrypick the most terrifying prediction available and shout it through a media megaphone. If he’s wrong… well perhaps he didn’t properly account for solar variance. More likely it will something along the lines of “See? My plan to reduce carbon emissions is working! But we have more work to do.” Then the goalposts will be moved again, most likely. The ice cap will melt in 2020, instead, then 2030, and so on.
This was the primary tactic of many a Doomsday cult throughout history. When the apocalypse, inevitably, does not happen, use the respite to consolidate your power, than reschedule, reschedule, reschedule.
All my experience has shown that this is the kind of thing repeated by environmental activists with fairy tramp stamps — the left-wing equivalent of sandwich-board-clad rapturist street preachers — not climatologists. It doesn’t invalidate the work climatologists are actually doing, or the real concerns they raise — not in the slightest.
>All my experience has shown that this is the kind of thing repeated by environmental activists with fairy tramp stamps…not climatologists.
Well the actual claim by NSIDC is ‘Ice free summers sometime in the next few decades’, which really isn’t that much more than the 5-7 years “predicted” by Gore, so it seems that he’s not far off of the mainstream here. See here for example. It’s definitely a falsifiable prediction, though.
Skip said:
>” Well the actual claim by NSIDC is ‘Ice free summers sometime in the next few decades’, which really isn’t that much more than the 5-7 years “predicted†by Gore, so it seems that he’s not far off of the mainstream here. See here for example. It’s definitely a falsifiable prediction, though.”
First of all, NSIDC sea ice assessment of had nothing to do with Mr. Gore’s outburst of Chicken Little fear-mongering. The Oscar winner was not quoting them, but rather a recent Naval PS study. Here’s a [url=”http://beyondzeroemissions.org/media/radio/dr-wieslaw-maslowski-predicted-2013-ice-free-summer-arctic-five-years-ago-now-he-says-ma”]link[/url] to an Barbara Walters-style interview with the lead researcher. I wouldn’t advise anyone to give it anymore then a cursory scan for falsifiable scientific claims. I assure you there are none at all, as has become the norm for these sorts of “evolving climate models.” He does name drop Gore though, and mentions his Nobel. Smart move. In this New Age of Pseudoscience, he is sure to go very, very far.
worthwhile reading for analyzing the data
http://iowahawk.typepad.com/iowahawk/2009/12/fables-of-the-reconstruction.html#more
Pete, that site provides monthly averages. By defintion, averages are raw, not derived. Although at least one site I’ve found did test a particular site to see if averaging highs and lows yieled a statistically significant difference from a 24 hour average, the GHCN-daily would probably be a better source. In the same thread at Megan McCardle’s blog where I dug up those links before, someone gave a link to a nice web app to list sites both adjusted and original.
Ugh.. averages are derived not raw… maybe ESR can fix my post for me.
As I understand it, the official line on Urban heating is that if it were a significant problem it should tend to be cooler on more-windy days than it is on less-windy days, more so than it actually is. Which is a weird and roundabout way of measuring which might easily not work depending on the local geography. For instance, suppose your thermometer is in the middle of New York’s Central Park, a local cool spot surrounded by warmer paved areas. It’s classified as “urban” but should be cooler on calm days than on somewhat windier days.
Incidentally, I have my own theory about global warming, which is in this Youtube video. :-)
Presumably you mean the Little Ice Age (which wasn’t technically a glacial period). On the thousands-of-years scale we should be on a downward slope towards the next glacial period. The warming trend out of the LIA was driven by solar variation, but temperature and solar variation have diverged in the last 30 years.
Ironically, you need to rely Phil Jones’ HadCRUT3 record to make any strong claim about a reduced warming trend. Other records don’t show a significant decrease in trend.
Skip said:
>” Well the actual claim by NSIDC is ‘Ice free summers sometime in the next few decades’, which really isn’t that much more than the 5-7 years “predicted†by Gore, so it seems that he’s not far off of the mainstream here. See here for example. It’s definitely a falsifiable prediction, though.”
Yes, but the NSIDC sea ice assessment of had nothing to do with Mr. Gore’s outburst of Chicken Little fear mongering. He wasn’t quoting them, but rather a recent Naval PS study, one that, as always, lacked falsifiable scientific claims. The head researcher, however, did “name-drop” Gore in a fluff interview he gave to last March (to a fluff advocacy group, no less), and even manages to make a mention of Gore’s Nobel. Smart career move.
As for “5-7 years” not being far off from several decades, I suppose that depends on what timescale you intend to use, and how much weight you lend to *reproducible,* falsifiable claims. Certainly, the difference between five years and thirty is negligible compared to, say, planetary age. However, so is the full body of observational weather data. To a paleontologist, thirty years is likely to be considered a neglible span (if not, that better have been a very interesting thrity years!) To the policymakers and businesspeople invested in climate alarmism, and to the people they are trying to frighten, thirty years might as well be an eternity.
Sigh, The problem is Information, nobody has enough of it. We are just now starting to launch satellites with the capabilities to make high resolution measurements not only for the surface but for the different levels of the atmosphere. As for the ocean, we know even less. Woods Hole and other oceanographic research centers have developed various autonomous surface and deep diving probes.
The problem is the governments of the world are willing to negotiate spending hundreds of billions to trillions of dollars on wild guesses based on limited and too often suspect data plus thin and somewhat (at best) shaky science. But our weather sats are old and dieing and few or no replacements in-line. For every observational satellite launched a half dozen don’t get built. Oceanographic science is even worse off. For 10 % of what the government will waste on pork and earmarks this year alone we could build enough satellites and ocean probes to keep our launch schedule full for the next ten years. Instead here we are arguing about thin science and crap data. How about instead of making the next post, sit down and compose a snail mail letter ( according to our company lobbyists almost all email from the public goes to file 13 where snail mail actually gets read by somebody) to your representatives politely demanding they actually get off their butts and start appropriating money for things like satellites.
P.S. I’m STILL pissed about the cancellation of the superconducting supercollider. LHC my butt!
Unfortunately, the climate scientists’ implicit acceptance of inaccurate information only from one side makes them appear biased, which raises questions about the quality of their science. After Gore’s film came out, I asked my my brother-in-law, who is an atmospheric scientist, about the manifold errors and scientific inaccuracies. His response to me was, basically, that although Gore had the science wrong, his message was the “right” one, so, like the vast majority of climate scientists, he turned a blind eye to them.
So, coupled with the prevalence of the use of pejorative terms such as “denialist” by supposedly responsible scientists (here is an example), I would say that these thing do raise substantial concerns about the validity of the science.
> The warming trend out of the LIA was driven by solar variation, but temperature and solar variation have diverged in the last 30 years.
I’d like you to source me on that claim, please.
Capt. Caveman said:
> Sigh, The problem is Information, nobody has enough of it.
I don’t think so. Amassing more data to torture doesn’t necessarily solve the problem of a drunken methodology, or biased homogenization. Another century or two of data might be useful in the hands of dispassionate observers who actually practice the scientific method. But as Mr. Gore noted, we don’t have nearly that much time to perform useful study and experimentation…. perhaps only 5-7 years. Next month, that number might be down to three weeks. Why dither about such frivolities as empirical data and the scientific method when the Doomsday clock is ticking?
http://www.spectator.co.uk/essays/all/5592863/the-inconvenient-truth-about-malaria.thtml
Jake Fischer:
http://www.spectator.co.uk/essays/all/5592863/the-inconvenient-truth-about-malaria.thtml
Well, this quote will suffice:
“I am a scientist, not a climatologist, so I don’t dabble in climatology.”
Isn’t it about time we all began framing the debate this way? “I am a scientist. I don’t dabble in climatology, astrology, witchcraft or politics.” I have already heard several mentions about G&T. G&T is certainly industry-funded dross, but it is industry-funded dross that assails pure vapor. When do the adults step in?
I had no idea there was a schedule. Can I put it on my Outlook calendar?
“After Gore’s film came out, I asked my my brother-in-law, who is an atmospheric scientist, about the manifold errors and scientific inaccuracies. His response to me was, basically, that although Gore had the science wrong, his message was the “right†one, so, like the vast majority of climate scientists, he turned a blind eye to them. Bad science, good answer.
Reminds me of “This paper contains much that is new and much that is true. Unfortunately, that which is true is not new and that which is new is not true.”
# jrok Says:
December 15th, 2009 at 11:31 pm
>Capt. Caveman said:
>> Sigh, The problem is Information, nobody has enough of it.
>I don’t think so. Amassing more data to torture doesn’t necessarily solve the problem of a drunken methodology, or biased >homogenization. Another century or two of data might be useful in the hands of dispassionate observers who actually practice the >scientific method. But as Mr. Gore noted, we don’t have nearly that much time to perform useful study and experimentation…. perhaps >only 5-7 years. Next month, that number might be down to three weeks. Why dither about such frivolities as empirical data and the >scientific method when the Doomsday clock is ticking?
Ah yes, the difference between science and scientists. We are, I think, going to see the serious beginnings of true open source science come out of this giant circus of the absurd. No longer will theories be put forward on the basis of “because I said so”, from now on I think we’ll see demands to actually see the data and models the theories will be based on.
The thing most people don’t realize is every scientific hoax of modern times has been peer reviewed, but until now, except for a few exceptions such as Piltdown man the embarrassing little disasters didn’t become well known because, lets face it, news people don’t read science journals. (far too many big words for their little minds to grasp) But nowadays we live in a heavily connected 24/7 world where people are much more likely to discuss and comment on the emperor’s apparent lack of clothing, especially after the present giant fireworks display of slanted science. The emails are not going away no matter how much some would wish.
Also, I know that the tinfoil hat brigades will be out in force, but so what? They’ve been around forever and they’ll never go away. Just remember their motto “the complete lack of proof is proof of a conspiracy/cover up”.
>We are, I think, going to see the serious beginnings of true open source science come out of this giant circus of the absurd.
It is to be devoutly hoped. And I think it’s actually a rather likely outcome.
My only quibble is that it won’t be the ‘beginning’ of open-source science, but a reassertion of the basics. What honest scientists have been doing all along. If you mean the beginning of a social demand that the ‘science’ cited in public-policy debates meet the highest standards of disclosure, then I agree with you.
> The thing most people don’t realize is every scientific hoax of modern times has been peer reviewed, but until now, except for a few exceptions such as Piltdown man the embarrassing little disasters didn’t become well known because, lets face it, news people don’t read science journals.
Well, that may be true. But the soupy, wild eyed claims these frauds have been engaging reminds me more of Soviet vernalization (and, I guess, Lamarckism in general) than of Piltdown man. Both dovetailed with the interests of power mad statists and relied on absurd propaganda and the “cleansing” of vocal critics to thrive. And just like this current group of GCM grifters, pawns and gamesters, Lysenkoism wasn’t killed by a single silver bullet. The wounded AGW monster will limp on for many years in one form or another, until the average person finally grows weary of their bogus forecasts and their unearned acclaim. The AGW cultural celebrities like Gore will be more easily disposed of. They will stupidly and arrogantly destroy himself at some point in the next 5 to 7 years… I’m about 75% sure of that.
Pete, What is your source for the claim that, “…temperature and solar variation have diverged in the last 30 years.”
Also, how do you define “solar variation” and what are its presumed impacts on temperature?
> Pete, What is your source for the claim that, “…temperature and solar variation have diverged in the last 30 years.â€
I’ll assume he is referring to Lean, and the “perceived” divergence in the early 80’s. Of course this claim goes straight back into the crock of bull with the rest of it. The presumption that these models can predict decadal shifts in “GAT” is debunked nonsense, and their decadal retrodiction is based on utter garbage as well. Frankly, anyone who claims that they can forecast climate shifts in ten year intervals is no better than a Tarot card reader, as we’ve already seen. These crooks smooth all of MWP (in some cases, as flat as a pancake!) and then they want to claim a 25 year solar “divergence” prove their “science.” Put your money in your shoe before you hang around this crowd.
My personal favorite doublethink I’ve noticed: CO2 is not sufficient to directly increase temperatures as predicted or observed, but do to feedback effects(increase water vapor being the most commonly mentioned) can drive much larger changes in temperature. Solar variations is not sufficient to directly increase temperatures as predicted or observed, and can thus be discarded.
It’s a form of group doublethink that reminds me of the voter paradox. Individuals accept the claims without understanding the dependence on feedback, or only dig into one of the two. The most common of course is to not realize that the disimissal of solar variation depends on only examining direct effects while holding atmospheric content constant. This is based on the comments i’ve seen. Any papers may be better reasoned, I haven’t seen them linked.
>> “…and then they want to claim a 25 year solar “divergence†prove their “science.†Put your money in your shoe before you hang around this crowd.”
Pete has a problem with definition here. He says “Solar VARIANCE” has diverged from temperature. Well WTF is solar variance and how does it have anything to do with the sun’s total impact on earth climate? I have not heard of any paper or textbook that claims to understand the whole influence of the space environment on earths temperature, and yet somehow we know for certain that all solar influence has diverged in the past few decades. That’s a very startling claim, one that frankly needs a lot of evidence to back it up before it should ever make it into mainstream acceptance.
Jeremy, see
http://www.skepticalscience.com/solar-activity-sunspots-global-warming.htm
ESR says: ‘Skeptical Science” ignores the hypothesis that now seems most plausible, which is that the correlation disappeared because the recent temperature records are wrong – corrupted by instrumental error or fraud.
Eric, if “error” or “fraud” are the sole reasons why the temperature record is what it is, then are you willing to accept that every scientist, software engineer, analyst and so on at NASA, NCDC, the Japanese Meteorological Agency and the UAH (for starters) are in on it?
That’s the only way you can accept that CRU’s analyses are not significantly different than those of the other groups. If CRU was manipulating the data, then the work of those other groups wouldn’t be very much like CRU’s. Since they are, what’s your explanation?
>Since they are, what’s your explanation?
Not enough “independent” datasets. They’re all working from the same “homogenized, value-added” data that’s been screwed with by the “team” – and they’re also all caught up in the same theoretical error cascade. It’s just like Millikan and the electron-mass error all over again, but on a larger scale and with trillion-dollar consequences.
How do you know that every group is starting with CRU’s data, Eric? You’re claiming that Spencer, Christy et.al. at UAH and elsewhere are starting with CRU’s data? That’s odd, because UAH’s data is satellite-based.
What of the JMA? Where’s their data coming from?
It’s easy to make claims of global fraud. It’s not at easy to substantiate them.
>How do you know that every group is starting with CRU’s data, Eric?
Because the literature says so. However bogus the general run of pro-AGW papers may be, they do in general cite their data sources, and the same handful of acronyms keeps cropping up.
Is that “same handful” controlled, maintained and distributed by CRU and CRU alone? Show your evidence.
Like I said, if you’re going to accuse fraud, you need to back it up other than vague handwaving assertions without proof.
>Is that “same handful†controlled, maintained and distributed by CRU and CRU alone?
No.. Hansen’s been screwing with the GISTEMP datasets too; McIntyre has repeatedly caught him at it.
GISTEMP is open source – get it and fix it, then. Show up Hansen et.al.
What of NCDC? UAH? JMA?
Are they all in on it too? Or merely unwitting dupes of the frauds of Jones and Hansen?
>Are they all in on it too? Or merely unwitting dupes of the frauds of Jones and Hansen?
Unwitting dupes, I think. But that may just mean I’m not cynical enough yet; it seems that every time I learn more about what going on, I keep having to revise my estimates of the degree of corruption upwards.
How would Jones’ and Hansen’s “frauds” penetrate Spencer’s work with satellite data at UAH? Or the Japanese at JMA?
Instead of just “think”, how about *prove*?
>How would Jones’ and Hansen’s “frauds†penetrate Spencer’s work with satellite data at UAH? Or the Japanese at JMA?
I don”t know yet. But I expect we’re going to find out as the dominoes falling from the CRU flap forces, er, agonizing reappraisals.
Conceding “I don’t know” is a step in the right direction. That’s a big backpedal from “most plausible […] error or fraud” comment you made earlier.
>Conceding “I don’t know†is a step in the right direction. That’s a big backpedal from “most plausible […] error or fraud†comment you made earlier.
It is still the case that the most plausible hypothesis is error and fraud, as I have been maintaining since well before the CRU flap – far more likely than that the AGW models are true. But “most plausible” does not equal “certain”, and there are things I would have to know that I don’t to be near certainty.
Sticking with “error and fraud” as “most plausible” means that you’ve completely ignored the independent nature of the analyses, or, *must* accept that many scientists are incredibly incompetent (“error”) or are all in on the conspiracy (“fraud”).
Now, why would you prefer a belief that’s quite in-credible, and call it “plausible”?
>Now, why would you prefer a belief that’s quite in-credible, and call it “plausible�
Because it’s more plausible than the AGW models. And I know what an error cascade looks like. And some of the central players have already been caught committing fraud.
There’s no such thing as “the AGW models”. Like I pointed out to jrok, the work of CRU, NASA GISS, NCDC, JMA, and UAH aren’t climate models, but analyses of observed data. Two different things.
Who are the “central players”, and what “fraud” did they commit? Do you include the folks at NASA GISS, NCDC, JMA and UAH? You must, because their analyses match up very well with CRU. Ergo, if CRU has committed “fraud”, and those other groups’ work is very close to CRU’s, then they must have committed “fraud”, in the same way, as CRU.
This conspiracy of yours is spiraling across the planet. Makes it much less plausible.
>And I know what an error cascade looks like.
You wouldn’t know an error cascade if you were caught in the middle of one.
>They’re all working from the same “homogenized, value-added†data that’s been screwed with by the “teamâ€
This is simply false; HadCRUT and GISTEMP for example perform different homogenisations, based on the unhomogenised data from the GHCN. RSS and UAH use satellites, HadAT uses radiosondes.
Also the “team” refers to Holocene palaeoclimatologists, not to the entire field of climate science.
>Because the literature says so. However bogus the general run of pro-AGW papers may be, they do in general cite their data sources, and the same handful of acronyms keeps cropping up.
Here you casually jump from studies of the instrumental record to climate science in general. Are you even paying attention to what you’re debating, or are these just Pavlovian trained reflexes from years of arguing on the internet? Are you trying to learn or trying to win a rhetorical argument?
>it seems that every time I learn more about what going on, I keep having to revise my estimates of the degree of corruption upwards.
How can you have any faith in your own estimates when you keep making so many basic mistakes?
>How can you have any faith in your own estimates when you keep making so many basic mistakes?
Sorry, but jumping up and down all day yelling “Lalalala! You’re wrongwrongwrong!” is not going to magically create any “basic mistakes” on my part into existence. You, on the other hand, actually are making basic mistakes – like supposing that “the team” only includes paleoclimatologists when Hansen (the same Hansen who’s been repeatedly caught screwing with the GISTEMP data) is a central member. We already know (we don’t have to guess) that the “team’s” ability to fuck with datasets that are inconvenient to them extends beyond the CRU data, we just don’t know how far yet.
You are not going to wish away the fact that these people have already been caught lying to the public, committing financial fraud against their grant sponsors, conspiring to violate FOIA, cherry-picking and distorting the data, plain making shit up, and subverting the peer-review process to suppress valid criticisms of their work. They are frauds and criminals, and by defending them you are an accessory after the fact.
>Sorry, but jumping up and down all day yelling “Lalalala! You’re wrongwrongwrong!†is not going to magically create any “basic mistakes†on my part into existence.
I notice you still haven’t corrected the Fahrenheit / Celsius mistake you’re propagating in this post.
>like supposing that “the team†only includes paleoclimatologists
Do you even know where the epithet “Hockey Team” comes from?
> You wouldn’t know an error cascade if you were caught in the middle of one.
Well, this statement alone proves that you don’t even know what an error cascade is. When someone is “caught in the middle of one”, by definition they don’t know it. That’s what makes it an error cascade. Talk about basic mistakes.
>Well, this statement alone proves that you don’t even know what an error cascade is. When someone is “caught in the middle of oneâ€, by definition they don’t know it. That’s what makes it an error cascade. Talk about basic mistakes.
If what I said was a tautology, then “by definition” it can’t be a mistake now can it?
> If what I said was a tautology, then “by definition†it can’t be a mistake now can it?
Perhaps not. But if you are using “tautology” to lawyer your way out of a gaffe, then it’s more likely that anything you say is just as unfalsifiable as one of Hansen’s dire predictions: You weren’t wrong, aren’t wrong and can never be wrong.
Welcome to “the new science” I suppose.
jrok, Hansen isn’t as bad as you think…
http://rabett.blogspot.com/2006/09/well-lookee-that.html
Boy, Strand, you really dug for that one. A blog entry from 2006? Don’t hurt yourself.
jrok, what do you mean?
The impression I get is that you’re not interested in anything that upsets your skeptic apple-cart.
A more recent comparison shows that scenario C predicted more warming than found, but is relatively close until the middle of this decade. Hansen scenario C assumed a drastic reduction in CO2 emissions, which didn’t actually happen, so the fact that it’s predicted temperatures are close is irrelevant, and his overall predictions have been off for a decade. The fact that it’s been over a decade is important, because climate models are only supposedly good on decadal time frame.
I didn’t say Hansen’s model was perfect – but it’s quite commendable, especially for 20 years old.
Certainly far better than jrok gives it credit for.
> Certainly far better than jrok gives it credit for.
What counts as “better?” Better relative to what? Surface and satellite instrumentals are in virtual agreement over the past twenty, and they both show readings below Hansen’s “best case scenario” prediction in which he assumed CO2 forcings would have been restricted over that time period.
What the heck is so difficult to comprehend about this? Hansen was wrong. GISS and RSS show him to be wrong… Not “a little wrong” or “somewhat wrong”, but very, very wrong. Are these the sort of predictions that say it’s time to go back and twiddle the knobs and spin the dials? No, no, no. They indicate the hypothesis itself is not only unproven, but deeply flawed. To say otherwise would be to toss the scientific method in the garbage. You don’t say “my predictions were wrong… but my model is perfect… so the data must be wrong.” That’s not science.
Of course this is precisely what Trenberth says to Mann: “The data is surely wrong.” That phrase should ring a ten alarm fire in the brains of anyone with the slightest bit of scientific training. We KNOW what that means.
Given that we don’t have a good model of the sun (to predict solar output), a good model for volcanoes, and a good economic model, and many other good models that are of relevance to the boundary conditions needed for climate modeling, I give Hansen a lot of credit for being far more right than wrong.
Apparently you have a very fine tolerance for error, jrok. I doubt any climate model, ever, will meet your requirements for “right”. That’s too bad, because climate models don’t have to be “perfect” to be useful, a la Newton relative to Einstein.
As for the snark against Trenberth, read his paper, to which I’ve pointed you before. Instead of handwaving away that it’s junk, point to something specific that’s wrong. And don’t pull out your canard of “I don’t have do anything but snipe”, because that’s intellectually lazy, and cowardly to boot.
> Apparently you have a very fine tolerance for error, jrok. I doubt any climate model, ever, will meet your requirements for “rightâ€.
I do? You mean Hansen was “kinda-sorta” right? I recall that, back in the early 90’s, Hansen made the sober prediction that by 2008 that FDR Drive would be submerged under the East River. Well, I live on the FDR Drive, quite literally on the waterfront. I don’t recall swimming to work this morning. I will try to reproduce this experiment tomorrow, though… I wouldn’t want to be too hasty.
But hey, so what? So, Hansen was a teensy little bit off there too. It’s not like we are talking about a volatile system based on virtually 100% proxy data, and the margin for error would need to be vanishingly small to prove *anything*. We are talking about the rigorous, reductive science of modeling planetary climate.
>I do? You mean Hansen was “kinda-sorta†right? I recall that, back in the early 90’s, Hansen made the sober prediction that by 2008 that FDR Drive would be submerged under the East River.
While ridiculing Hansen on this score is easy and justified, focusing on outright nutball predictions actually obscures the more general failure of AGW projections. Trenberth: “The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.” Global average temperature measures failed to track CO2 levels after 1998-1999, and the greenhouse signature has never actually shown up in tropical-atmosphere temperature profiles.
There’s Hansen’s toss-out remark to a journalist, then there’s his work. I wouldn’t confuse the two, any more than I would confuse your musings here with your papers.
Trenberth is attempting to close the Earth’s energy budget, and his frustration comes from a lack of good observations. Not anything to do with climate modeling. See
http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/EnergyDiagnostics09final2.pdf
>Not anything to do with climate modeling.
Preposterous. That’s the weakest bullshit you’ve come up with yet. It’s both specifically refuted by the context of the quote and nonsense in general.
If you can’t close the energy budget, no climate modeling you attempt can be anything better than voodoo. Oh, yeah…we don’t know where the heat inputs come or where they go, but we can predict warming all right. We might as well be waving a bunch of bloody chicken feathers over a gris-gris.
> There’s Hansen’s toss-out remark to a journalist, then there’s his work. I wouldn’t confuse the two, any more than I would confuse your musings here with your papers.
Ah, but you see, my musings here aren’t aiming to TERRORIZE THE LIVING HELL out of people, nor DEFRAUD grant institutions and EXTORT governments. Are you saying that social responsibility and cautious science go out the window when you’re talking to a member of the press? Sex it up a bit for the hotsheets? That sounds okay to you?
Shouting apocalyptic claims through the mainstream press organs was exactly what caused this whole fiasco in the first place. But for the dire predictions they handed to media agents, it’s likely these crooks wouldn’t have become trapped inside their own error cascade. With the journos and politicos involved, there is suddenly far more on the line than simply being proven wrong in the peer literature. It’s why these emails show them trying to “get rid of MWP” rather than investigate it, actively suppressing, altering, and destroying data, and even shutting down legitimate inquiries within their own ranks in order to meet U.N. timetables.
> While ridiculing Hansen on this score is easy and justified, focusing on outright nutball predictions actually obscures the more general failure of AGW projections.
You are right. The more important failures aren’t nearly as titillating as “You are going to drown in ten years.” But the point I’m trying to make is that these Armageddon sound bites are still a significant part of the fraud. That is the element of the fraud that the public lapped up, and that the politicians rode hell-for-leather on.
>Global average temperature measures failed to track CO2 levels after 1998-1999,
Only if you rely on Phil Jones’s HadCRUT product. The others (e.g. GISTEMP, RSS, UAH) do a better job.
>and the greenhouse signature has never actually shown up in tropical-atmosphere temperature profiles.
The greenhouse signature is stratospheric cooling, which has in fact shown up in measured temperature profiles.
It’s possible you’ve been misled about the tropical ‘hot spot’ — this is an equilibrium outcome for both solar-forced and greenhouse-forced warming (and it’s not observed because we’re out of equilibrium). There is a picture in the IPCC showing a hot-spot for greenhouse forcing and no hot-spot for solar forcing, but this was because solar forcing was not causing enough warming to generate a hot-spot.
>Only if you rely on Phil Jones’s HadCRUT product. The others (e.g. GISTEMP, RSS, UAH) do a better job
All four measures were essentially flat for a decade, until all four abruptly crashed in January 2008. Go on, show me an AGW model that predicted that. And the crash.
In March 2008 I looked at the data and predicted that temperature trends would continue to track insolation during the solar minimum rather than CO2 levels, offering a bet for any reasonable stakes to some AGW-believer friends of mine. They were wise not to take it, because my prediction was correct.
>The greenhouse signature is stratospheric cooling, which has in fact shown up in measured temperature profiles.
This is one of several recent reports that say otherwise. I used to have a copy of Evans’s PDF graphing the atmospheric temperature profile, with sources, so I have some confidence he’s not just pulling numbers out of his butt in the interview.
> Only if you rely on Phil Jones’s HadCRUT product. The others (e.g. GISTEMP, RSS, UAH) do a better job.
Is this the same GISTEMP that erroneously compiled all the Russian station data from September 2008 twice (once for September and once for October), based on data handed off from NOAA?
Hrmmm… a significant error escapes the notice of one party. The error is passed to another party, where it also escapes notice. You know, I believe there is a term that describes this phenomenon…
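For what it’s worth, the kind of sanity check that would have caught that sort of carried-over month is trivial to write. Here is a minimal sketch in Python, assuming a made-up record layout (station id mapped to twelve monthly means) and invented station ids; it is not the actual NOAA/GISTEMP processing code, just an illustration of flagging stations whose October value is identical to their September value:

# Minimal sketch of a duplicate-month sanity check.
# The record layout and station ids are made up for illustration; this is
# not the actual NOAA/GISTEMP ingest code.
def find_carried_over_months(records, month_index):
    """records maps station id -> list of 12 monthly mean temperatures.
    Returns station ids whose month_index value repeats the prior month."""
    suspects = []
    for station, months in records.items():
        if months[month_index] is not None and months[month_index] == months[month_index - 1]:
            suspects.append(station)
    return suspects

# Hypothetical example: October (index 9) repeating September (index 8) at
# two of three stations is an implausible coincidence worth a closer look.
records = {
    "RS000001": [-8.1, -6.0, 0.2, 6.5, 12.1, 17.0, 19.2, 16.8, 10.4, 10.4, -1.3, -7.0],
    "RS000002": [-11.4, -9.2, -1.1, 5.0, 11.3, 16.2, 18.5, 15.9, 9.1, 9.1, -2.8, -9.5],
    "RS000003": [-7.8, -5.5, 0.9, 7.1, 12.8, 17.5, 19.8, 17.2, 11.0, 3.2, -0.9, -6.4],
}
print(find_carried_over_months(records, 9))   # -> ['RS000001', 'RS000002']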
A decade’s not enough data to distinguish flat from not-flat. GISTEMP, RSS, and UAH records are consistent with a continued long-term warming trend.
You’re still getting mixed up with timescales. At the sort of timescales you’re looking at, temperature records are going to be dominated by ENSO and similar effects. At a slightly longer timescale it will be solar variation. At the multidecadal scale it’s going to be anthropogenic greenhouse forcing. Here is a rough calculation as to how long you need to pick up the long-term trend.
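The linked calculation isn’t reproduced here, but the gist can be sketched in a few lines of Monte Carlo. The trend, noise, and persistence numbers below are illustrative assumptions (roughly 0.017 °C/yr trend, 0.1 °C interannual noise, 0.5 year-to-year autocorrelation), not measured values; with those assumptions, a ten-to-fifteen-year window leaves the trend inside the noise, while twenty or more years pulls it out at roughly the two-sigma level.

# Rough Monte Carlo sketch of how record length affects trend detection.
# The trend, noise, and persistence figures are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

TREND = 0.017     # assumed underlying trend, deg C per year (~0.17 C/decade)
NOISE_SD = 0.10   # assumed interannual (ENSO-scale) noise std dev, deg C
AR1 = 0.5         # assumed year-to-year autocorrelation of that noise
TRIALS = 2000

def trend_standard_error(n_years):
    """Std dev of fitted linear trends over n_years of trendless AR(1) noise."""
    years = np.arange(n_years)
    slopes = np.empty(TRIALS)
    for i in range(TRIALS):
        noise = np.empty(n_years)
        noise[0] = rng.normal(0.0, NOISE_SD)
        for t in range(1, n_years):
            noise[t] = AR1 * noise[t - 1] + rng.normal(0.0, NOISE_SD * np.sqrt(1 - AR1 ** 2))
        slopes[i] = np.polyfit(years, noise, 1)[0]
    return slopes.std()

for n in (10, 15, 20, 30):
    se = trend_standard_error(n)
    verdict = "detectable" if TREND > 2 * se else "lost in the noise"
    print(f"{n:2d} years: trend noise ~ {se:.4f} C/yr -> assumed {TREND} C/yr trend is {verdict}")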
This is like betting that this summer will be warmer than winter. It’s obviously correct to both sides and says nothing about AGW.
There’s a response to that article here.
The usual source for this meme is Figure 9.1 in IPCC AR4 WG1 (pdf).
The ‘hot-spot’ is a result of increased temperature; it’s not a ‘fingerprint’ of greenhouse forcing. It looks that way from Figure 9.1 because the greenhouse forcing is the only one that’s large enough to generate a hot-spot, but equivalent warming from increased solar output will give the same sort of hot-spot.
>A decade’s not enough data to distinguish flat from not-flat.
If a decade isn’t long enough to disconfirm the models, it’s not long enough to confirm them either. The obvious implication of your argument is that we should gather data for fifty years or so before we even think about policy responses.
>This is like betting that this summer will be warmer than winter.
Oh? It’s certainly not what James Hansen and other alarmists were predicting. My track record is better than theirs, not that this is any great achievement; I suspect a drug-addled rhesus monkey throwing darts could do better than they have.
>There’s a response to that article here.
At this point, I consider RealClimate a poisoned well. We *know* – we don’t have to guess, we *know* – that it operated as a false front for the CRU gang. I’m not going to trust any graphs or reconstructions published there, period, unless I know for certain they have a provenance independent of CRU.
No, if we need 50 years of data then we have the last 50 years of data.
“Certainly”? Have you got some evidence that scientists made short term predictions saying greenhouse will dominate solar/ENSO/etc. on that timescale?
>At this point, I consider RealClimate a poisoned well. We *know* – we don’t have to guess, we *know* – that it operated as a false front for the CRU gang. I’m not going to trust any graphs or reconstructions published there, period, unless I know for certain they have a provenance independent of CRU.
This conveniently lets you ignore whatever evidence you want, simply by positing some shadowy connection to CRU. Are you going to apply the same standard to evidence linked to ExxonMobil?
>This conveniently lets you ignore whatever evidence you want, simply by positing some shadowy connection to CRU.
I don’t have to posit any shadowy connection. Lambert says straight up that’s where his “refutation” comes from – and at this point, if RealClimate tried to tell me the sky was blue I’d assume they were lying. That’s no more skepticism than they deserve for secretly volunteering to serve as the team’s propaganda arm while pretending to be neutral and disinterested.
>No, if we need 50 years of data then we have the last 50 years of data
At this point I think we don’t have *any* data we can completely trust hasn’t been fucked with, with the possible (but only possible) exception of the ERBE satellite measurements. Again, the “team” has only themselves to blame for creating this doubt.
>Have you got some evidence that scientists made short term predictions saying greenhouse will dominate solar/ENSO/etc. on that timescale?
The last decade of screaming that CO2 is the primary forcer certainly ought to count.
>The last decade of screaming that CO2 is the primary forcer certainly ought to count.
I’ll take that as a ‘no’. Still having trouble with the difference between short-term and long-term variation?
Pete said:
> This conveniently lets you ignore whatever evidence you want, simply by positing some shadowy connection to CRU. Are you going to apply the same standard to evidence linked to ExxonMobil?
Michael Mann said:
“From: “Michael E. Mann”
To: Tim Osborn, Keith Briffa
Cc: Gavin Schmidt
Subject:
Date: Thu, 09 Feb 2006 16:51:53 -0500
guys, I see that Science has already gone online w/ the new issue, so we put up the RC post. By now, you’ve probably read that nasty McIntyre thing. Apparently, he violated the embargo on his website (I don’t go there personally, but so I’m informed). Anyway, I wanted you guys to know that you’re free to use RC in any way you think would be helpful. Gavin and I are going to be careful about what comments we screen through, and
we’ll be very careful to answer any questions that come up to any extent we can. On the other hand, you might want to visit the thread and post replies yourself. We can hold comments up in the queue and contact you about whether or not you think they should be screened through or not, and if so, any comments you’d like us to include. You’re also welcome to do a followup guest post, etc. think of RC as a resource that is at your disposal to combat any disinformation put forward by the McIntyres of the world. Just let us know. We’ll use our
best discretion to make sure the skeptics dont’ get to use the RC comments as a megaphone…”
> No, if we need 50 years of data then we have the last 50 years of data
No, no, no.
We do not have an unchanged 50-year-old climate model and an unchanged 50 years of data which it predicted. A model built using past data which correctly predicts that past data has not been scientifically validated. To be validated, the model must predict the future. And you are claiming we need another thirty years of data to validate Hansen’s twenty-year-old model.
Yours,
Tom DeGisi
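A minimal illustration of the in-sample vs. out-of-sample distinction being drawn here: a flexible model fitted to past data can retrodict that data almost perfectly and still show no predictive skill, because skill only shows up on data the model never saw. The series and the polynomial “model” below are synthetic, invented purely for illustration; they are not Hansen’s model or any real temperature record.

# Synthetic illustration of why retrodicting training data is not validation.
# Nothing here is a real climate model or real temperature data.
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(1950, 2010)
t = (years - 1980.0) / 10.0                       # centered/scaled time, for a stable fit
obs = 0.01 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)  # fake anomalies

train = years < 1990                              # data the "model" was built on
test = ~train                                     # data the "model" never saw

# "Model": a high-order polynomial fit to the training period. It hugs
# 1950-1989 closely, which demonstrates nothing; skill only shows up (or,
# here, conspicuously fails to) on the held-out 1990-2009 years.
coeffs = np.polyfit(t[train], obs[train], 8)
pred = np.polyval(coeffs, t)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("in-sample RMSE (1950-1989):    ", round(rmse(pred[train], obs[train]), 3))
print("out-of-sample RMSE (1990-2009):", round(rmse(pred[test], obs[test]), 3))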