Study Shows Global Warming Data Skewed by Bad Monitoring

Over two and a half years after the Climategate scandal fundamentally undermined public confidence in the theory of manmade climate change, questions continue to be raised about the methods used to collect global warming data and about the peer-review process that evaluates climate studies.

The latest challenge confronting advocates of the theory of global warming is a study coauthored by Anthony Watts, a former television meteorologist, president of IntelliWeather, and a “convert” to the ranks of the skeptics of manmade global warming. In 2007, concerned about the accuracy of the data, Watts founded SurfaceStations.org, a site that evaluates the weather stations whose measurements are used to model changes in global temperatures.

Why would the location of the stations matter? Because the growth and spread of the population of the United States could cause localized changes in temperature without having any larger, much less global, effect. For example, measurements from a location that was once in the middle of a field might now be surrounded by blacktop; in such a situation, the world has not necessarily gotten warmer, but the area around the monitoring equipment certainly has.
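To make the concern concrete, here is a minimal sketch with invented numbers (not data from the study or from any real station network) of how a purely local warm bias at one station can show up as an apparent trend in a naive regional average:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2009)

# Hypothetical stations (illustrative values only): the true regional
# trend is flat, but one station picks up a local warm bias of about
# 0.5 degrees C per decade after 1990 (e.g., new blacktop nearby).
n_stations = 5
temps = rng.normal(0.0, 0.2, size=(n_stations, years.size))
local_bias = np.where(years >= 1990, 0.05 * (years - 1990), 0.0)
temps[0] += local_bias  # only station 0 is poorly sited

# A naive network average blends the local bias into the "regional" trend.
network_mean = temps.mean(axis=0)
slope_per_decade = np.polyfit(years, network_mean, 1)[0] * 10
print(f"apparent regional trend: {slope_per_decade:+.3f} C/decade")
```

Even though four of the five hypothetical stations record no warming at all, the averaged series acquires a nonzero trend; nothing about the planet changed, only the surroundings of one thermometer.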

The existence of such poorly placed monitoring equipment is far from hypothetical. An article for FoxNews.com cited several examples:

That problem of poorly sited stations thanks to “encroaching urbanity” — locations near asphalt, air conditioning and airports — is well established. A sensor in Marysville, Calif., sits in a parking lot at a fire station next to an air conditioner exhaust and a cell tower. One in Redding, Calif., is housed in a box that also contains a halogen light bulb, which could emit warmth directly onto the gauge.

The study conducted by Watts and his colleagues (“An area and distance weighted analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends”) draws on the SurfaceStations data to reach several significant conclusions, including the following points:

• The analysis demonstrates clearly that siting quality matters. Well sited stations consistently show a significantly cooler trend than poorly sited stations, no matter which class of station is used for a baseline, and also when using no baseline at all. …
• It is demonstrated that stations with poor microsite (Class 3, 4, 5) ratings have significantly higher warming trends than well sited stations (Class 1, 2): This is true for all nine geographical areas of all five data samples. The odds of this result having occurred randomly are quite small. …
• Not only does the NOAA USHCNv2 adjustment process fail to adjust poorly sited stations downward to match the well sited stations, but it actually adjusts the well sited stations upward to match the poorly sited stations.
• In addition to this, it is demonstrated that urban sites warm more rapidly than semi-urban sites, which in turn warm more rapidly than rural sites. Since a disproportionate percentage of stations are urban (10%) and semi-urban (25%) when compared with the actual topography of the U.S., this further exaggerates Tmean trends.
• The NOAA adjustment procedure fails to address these issues. Instead, poorly sited station trends are adjusted sharply upward (not downward), and well sited stations are adjusted upward to match the already-adjusted poor stations. Well sited rural stations show a warming nearly three times greater after NOAA adjustment is applied.

In other words, the study determined that not only are many monitoring stations poorly placed, but the erroneous data generated by the poorly placed urban sites are actually being used to adjust the data gathered at better-situated rural sites. What is the result? “The new analysis demonstrates that reported 1979-2008 U.S. temperature trends are spuriously doubled, with 92% of that over-estimation resulting from erroneous NOAA adjustments of well-sited stations upward.”
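The core of the comparison the study describes is simple to state: compute a least-squares temperature trend for each station, then compare the average trend of well sited (Class 1-2) stations against poorly sited (Class 3-5) stations. The sketch below illustrates that comparison with invented records; the trend values, station counts, and noise levels are assumptions for illustration, not figures from Watts et al.:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2009)

def decadal_trend(series: np.ndarray) -> float:
    """Least-squares slope of a temperature series, in degrees C per decade."""
    return np.polyfit(years, series, 1)[0] * 10

# Invented station records: both groups share a modest underlying trend,
# but the poorly sited group carries an extra siting-related warm bias.
well_sited = [0.015 * (years - years[0]) + rng.normal(0, 0.15, years.size)
              for _ in range(20)]
poorly_sited = [0.030 * (years - years[0]) + rng.normal(0, 0.15, years.size)
                for _ in range(20)]

well_trends = [decadal_trend(s) for s in well_sited]
poor_trends = [decadal_trend(s) for s in poorly_sited]

print(f"mean trend, well sited (Class 1-2):   {np.mean(well_trends):+.2f} C/decade")
print(f"mean trend, poorly sited (Class 3-5): {np.mean(poor_trends):+.2f} C/decade")
# The study's claim is that the second figure runs significantly higher in
# the real network, and that NOAA adjustments pull the first figure up
# toward it rather than correcting the second figure downward.
```

Under these assumed numbers the poorly sited group shows roughly double the trend of the well sited group, which mirrors the “spuriously doubled” claim quoted above.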

Undoubtedly the new study will draw criticism from advocates of the theory of manmade climate change, because it calls into question the reliability of the data upon which the theory has purportedly been based. Consider, for example, one of the critics of “climate change deniers”: Richard Muller, a professor of physics at the University of California at Berkeley who was himself until quite recently among those “deniers.” In a recent opinion article for the New York Times (“The Conversion of a Climate Change Skeptic”), Muller cites the increase in surface temperatures as the reason for his “conversion”:

Last year, following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I’m now going a step further: Humans are almost entirely the cause.

My total turnaround, in such a short time, is the result of careful and objective analysis by the Berkeley Earth Surface Temperature project, which I founded with my daughter Elizabeth. Our results show that the average temperature of the earth’s land has risen by two and a half degrees Fahrenheit over the past 250 years, including an increase of one and a half degrees over the most recent 50 years. Moreover, it appears likely that essentially all of this increase results from the human emission of greenhouse gases.

In a sense, there is a point of agreement between the two studies: there has been an increase in temperature at many of the monitoring stations, and that increase has been caused by humans. But there is reason to believe that the temperature change is extremely localized, and that the poor placement of monitoring equipment has proven a very poor guide to worldwide trends, spuriously doubling the measured temperature change if Watts et al. are correct.

The study that Muller cites as the cause of his “conversion” is drawing criticism even from others who would normally be critical of “deniers.” For example, Judith Curry, a climatologist and chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology, is quite critical of Muller’s findings:

Judged by standards set by the IPCC and the best of recent observation-based attribution analyses, in my opinion the Rohde, Muller et al. attribution analysis falls way short. … Looking at regional variations provides substantial insights into the attribution.

No one that I listen to questions that adding CO2 to the atmosphere will warm the earth’s surface, all other things being equal. The issue is whether anthropogenic activities or natural variability is dominating the climate variability. If the climate shifts hypothesis is correct (this is where I am placing my money), then this is a very difficult thing to untangle, and we will go through periods of rapid warming that are followed by a stagnant or even cooling period, and there are multiple time scales involved for both the external forcing and natural internal variability that conspire to produce unpredictable shifts.

The SurfaceStations data raise fundamental questions about the existence of much of the purported warming, let alone the source of any global warming. Challenges to the science behind global warming have multiplied since the Climategate revelations; certainly the conclusions of the Intergovernmental Panel on Climate Change (IPCC) have been undermined as the IPCC’s methodology has been subjected to outside scrutiny. The time seems near when the “global warming” of the past 20 years will go the way of the “new ice age” of the 1970s.

Photo: a weather monitoring station in an open field at the Bloom Dairy Farm near Coldwater, Mich.: AP Images