NOAA Quietly Tweaking Climate Data

If it weren’t already obvious to you, I’ll let you in on a secret: scientific hypotheses regularly shape the selection and organization (and therefore the content) of what passes for scientific data. In other words, scientists regularly use the “evidence” to confirm only what they already believe to be true and are aiming to “prove.” This is especially the case in arenas where science weighs heavily on politics (and vice versa). The most recent revelation of the inner workings of the NOAA (National Oceanic and Atmospheric Administration) only confirms the reality of this circular process.

It has recently come to light that the NOAA has been tweaking climate data from the past in order to better fit this data into its most current models of global climate change. For some time, the NOAA pegged July 2012 as the hottest month on record. But recently this “honor” reverted to July 1936 when the NOAA tweaked its climate model. In other words, the NOAA “interprets” the data of the past according to its most recent methods for collecting climate data, and, as the methods change, so does the data. At any given time, purportedly “hard” data from the past can magically change based on the model du jour at the NOAA.
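The re-ranking described above can be shown with a toy calculation. Every number below is invented for the sake of illustration; none of these are NOAA’s actual figures or adjustments:

```python
# Hypothetical illustration: how adjusting a historical record can
# change which entry ranks "hottest." All values are invented.

raw_anomalies = {
    "July 1936": 1.10,   # hypothetical raw reading (degrees above baseline)
    "July 2012": 1.12,   # hypothetical raw reading
}

# A hypothetical "correction" applied to the older data, of the kind
# that might follow from a change in collection methods or models.
adjustment = {
    "July 1936": 0.05,   # invented adjustment
    "July 2012": 0.00,
}

adjusted = {month: t + adjustment[month] for month, t in raw_anomalies.items()}

hottest_raw = max(raw_anomalies, key=raw_anomalies.get)
hottest_adj = max(adjusted, key=adjusted.get)

print(hottest_raw)  # July 2012 holds the record under the raw figures
print(hottest_adj)  # July 1936 takes the record once the adjustment lands
```

The point is not the particular values but the mechanism: change the adjustment, and the “record” changes with it.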

However you view this information, it should trouble you. The NOAA defends its decision to tweak old data by saying that, as data-collection methods have become more sophisticated, it has become clear that some of the data that had been collected in the past needed to be re-evaluated.

But you can’t have it both ways. Think about it. How were the most up-to-date models derived? I thought these models were supposed to be based on data collected over the last century. So it stands to reason that, if the data of the last century is being called into question, then the model based on that data is also dubious. That is, if the model is based on the data. But, if you’re paying attention, you’ll notice that this most recent revelation means that the model has not been based on the data. The data is being fitted to the model.

And that isn’t real science. Either the data from the past is solid, and the hypotheses and models based on it are equally solid, or past data is suspect, and scientists must scrap any hypotheses that depend on it. When you are talking about reading variable trends over the course of hundreds of years, the science cannot help but be questionable. Think about the vast number of contributing factors involved in the weather. How do you determine whether certain weather events are part of a trend or just freak occurrences? You would need to have reliable data, collected using the same reliable and consistent methods, stretching back over hundreds of years. And much of the problem is that we don’t even know whether we are collecting all the right data. And even if we could be sure that we had been collecting data in the right way over the course of a thousand years (which of course is not the case), it could be that the cycles and regularities we think we detect over the long haul are caused by things we haven’t even been measuring—and have no way of measuring.

There is no way around it: from within our huge and inchoate pool of data, we must select the (almost certainly incomplete) data that appears to be forming into trends (removing outliers and disqualifying data collected from unreliable sources). And, in a pool of data as vast as the weather’s, how do we know what data to keep and what data to ignore? By fitting it into a model, of course. But, unfortunately, the model then dictates what data we’ll be collecting. So, if the model proves wrong, as so many scientific models have before, we must start again from scratch or try to salvage what we can from our original pool of data, assuming we even collected what we’re now looking for. This is not what the NOAA is doing. Instead, it is clinging to its original model even to the point of changing the data.
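The circularity described above can be sketched in a few lines: if a model decides which observations count as “outliers,” the surviving data will tend to agree with the model, whatever the full record said. The readings, the model’s expectation, and the tolerance here are all hypothetical:

```python
# Hypothetical sketch of model-driven data selection. All values invented.
import statistics

observations = [10.0, 10.2, 9.9, 14.5, 10.1, 3.2, 10.3]  # invented readings

model_expectation = 10.0  # the model says values should cluster near 10
tolerance = 1.0           # anything further off is deemed "unreliable"

# Keep only the observations the model approves of.
kept = [x for x in observations if abs(x - model_expectation) <= tolerance]

full_mean = statistics.mean(observations)  # mean of the whole record
kept_mean = statistics.mean(kept)          # mean of the model-approved subset

# The filtered mean lands right where the model expected, because the
# filter was built from the model's expectation in the first place.
print(full_mean, kept_mean)
```

The filtered average “confirms” the model by construction; the discarded readings never get the chance to contradict it.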

It is foolish to think that science is just looking at the facts and then forming conclusions. Possible conclusions must be formed first so that we know what evidence to look for. And, sometimes, we are fooled for years by confirmation bias into thinking our model is correct merely because we’ve been trashing or ignoring disagreeable data. It’s not just that we don’t know everything. We don’t even know what it is we don’t know. And that should make all of us approach science with a little more humility and open-mindedness.

In other words, we should refrain from certainty on scientific issues pretty much indefinitely, especially when the science is as complicated as reconstructing all the weather from ancient times until now. But the NOAA, largely because it is organized and funded by the civil government, faces enormous pressure to adopt a posture of certainty. The civil government wants results. And not just any results: the ones that would be most helpful in the current political climate.

So instead of humbly recognizing that the vagaries of environmental science are too nuanced and too complicated to produce incontrovertible conclusions, the NOAA has been imposing its own current models onto past data, as if it already knew for certain what’s going on. In its mind, if the data doesn’t reflect its concrete conclusions, something must be wrong with the data. That may make for “good” politics. But it is positively bad science.