Trending: Transparency in Climate Models
It’s a little late to begin a new climate of transparency among climatologists. What does that imply about the past?
It happened by accident, Paul Voosen reports in his article for Science Magazine, “Climate scientists open up their black boxes to scrutiny.”
It began with an unplanned leave of absence. But it has blossomed into a full-fledged transparency movement for climate science.
In 2010, Erich Roeckner, a longtime guru behind the global climate model at the Max Planck Institute for Meteorology (MPIM) in Hamburg, Germany, was unable to work. The timing was inopportune: Deadlines loomed for an international project that would compare the major climate models with one another, and MPIM’s had a bug….
With Roeckner out of commission, a team of six people spent several months tuning the MPIM model to match the climate and eliminate the glitch. Their work, though laborious, was fairly routine. What was unusual was their decision, in 2012, to publish a detailed accounting of it. Roeckner’s absence was random. But in hindsight, it was the butterfly flapping that has now led climate modelers to openly discuss and document tuning in ways that they had long avoided, fearing criticism by climate skeptics.
This revelation should strike readers as disturbing on several levels. That the details of such a politically fraught subject have been concealed from the public in a “black box” seems contrary to the very spirit of science, where transparency in methods should be paramount. Voosen has just let the cat out of the bag: “fearing criticism by climate skeptics,” climate modelers have “long avoided” letting the public look inside the box. Why? If their data are incontrovertible—as all the big science institutions constantly assure the public—why the fear?
Also disturbing is that modelers “tune” their inputs to the climate in clunky ways. Does the following sound like the classical scientific method? Count the ways things could go wrong as you listen to Voosen describe the sausage-making in the modeling rooms:
At their core, climate models are about energy balance. They divide Earth up into boxes, and then, applying fundamental laws of physics, follow the sun’s energy as it drives phenomena like winds and ocean currents. Their resolution has grown over the years, allowing current models to render Earth in boxes down to 25 kilometers a side. They take weeks of supercomputer time for a full run, simulating how the climate evolves over centuries.
When the models can’t physically resolve certain processes, the parameters take over—though they are still informed by observations. For example, modelers tune for cloud formation based on temperature, atmospheric stability, humidity, and the presence of mountains. Parameters are also used to describe the spread of heat into the deep ocean, the reflectivity of Arctic sea ice, and the way that aerosols, small particles in the atmosphere, reflect or trap sunlight.
It’s impossible to get parameters right on the first try. And so scientists adjust these equations to make sure certain constraints are met, like the total energy entering and leaving the planet, the path of the jet stream, or the formation of low marine clouds off the California coast. Modelers try to restrict their tuning to as few knobs as possible, but it’s never as few as they’d like. It’s an art and a science. “It’s like reshaping an instrument to compensate for bad sound,” Stevens says.
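Voosen’s talk of “knobs” and constraints can be made concrete with a toy, zero-dimensional energy-balance model. To be clear, this is a back-of-envelope sketch, nothing like an actual climate model; the “knob” here (an effective emissivity standing in for greenhouse trapping) and the tuning target are illustrative choices, not anything from the MPIM code:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2

def equilibrium_temp(albedo, emissivity):
    """Zero-dimensional energy balance: absorbed solar = emitted infrared.
    S0/4 * (1 - albedo) = emissivity * sigma * T^4  ->  solve for T."""
    absorbed = S0 / 4.0 * (1.0 - albedo)
    return (absorbed / (emissivity * SIGMA)) ** 0.25

def tune_emissivity(target_temp, albedo=0.30, lo=0.4, hi=1.0, tol=1e-6):
    """Turn one 'knob' (effective emissivity) until the model reproduces
    the target global mean temperature. Bisection works because
    equilibrium temperature falls monotonically as emissivity rises."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if equilibrium_temp(albedo, mid) > target_temp:
            lo = mid   # too warm -> need more outgoing emission
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Tune to the observed ~288 K global mean surface temperature.
eps = tune_emissivity(288.0)
print(round(eps, 3))
```

Even in this one-knob toy, the tuned value is whatever makes the constraint come out right; a real model has many such knobs, adjusted against many constraints at once, which is exactly the “art” Stevens describes.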
Wait a minute: who decides what is a “bad sound”? There seems to be a lot of wiggle room in this “art” of modeling, enough to get a politically motivated result by turning enough knobs. This is definitely not a case of following the evidence where it leads. It’s more like Finagle’s Rule #3: “Draw your curves first, then plot your data.” If the funding sources, the politically powerful, and the UN want a result they can promote, such as “Man-caused global warming will raise global temperatures by 2 degrees in 100 years,” then who is a lowly modeler to get a contrary result from his black box, especially if he fears climate skeptics? Voosen says this is exactly what has been going on all along.
For years, climate scientists had been mum in public about their “secret sauce”: What happened in the models stayed in the models. The taboo reflected fears that climate contrarians would use the practice of tuning to seed doubt about models—and, by extension, the reality of human-driven warming. “The community became defensive,” Stevens says. “It was afraid of talking about things that they thought could be unfairly used against them.” Proprietary concerns also get in the way. For example, the United Kingdom’s Met Office sells weather forecasts driven by its climate model. Disclosing too much about its code could encourage copycats and jeopardize its business.
One can see plenty of room for corruption here: profit motives, reputations, an us-vs.-them mentality. Secret sauce? Taboos? This is not Las Vegas, where what happens there stays there. It looks for all the world like political parties or competing corporations using dirty tricks, not like scientists seeking to understand the real world. Voosen’s terminology of secrecy and fear should alarm a wary public that respects science but worries about the economic costs of draconian climate mitigation policies, such as carbon taxes and the elimination of fossil-fuel jobs, which politicians say, based on these models, must be imposed for the good of the planet.
Voosen’s article doesn’t give much hope that climate science will improve with the new transparency fad. The following episode most likely never made it into the Paris accords or the latest IPCC report:
Recently, while preparing for the new model comparisons, MPIM modelers got another chance to demonstrate their commitment to transparency. They knew that the latest version of their model had bugs that meant too much energy was leaking into space. After a year spent plugging holes and fixing it, the modelers ran a test and discovered something disturbing: The model was now overheating. Its climate sensitivity—the amount the world will warm under an immediate doubling of carbon dioxide concentrations from preindustrial levels—had shot up from 3.5°C in the old version to 7°C, an implausibly high jump.
MPIM hadn’t tuned for sensitivity before—it was a point of pride—but they had to get that number down. Thorsten Mauritsen, who helps lead their tuning work, says he tried tinkering with the parameter that controlled how fast fresh air mixes into clouds. Increasing it began to ratchet the sensitivity back down. “The model we produced with 7° was a damn good model,” Mauritsen says. But it was not the team’s best representation of the climate as they knew it.
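The “climate sensitivity” numbers in that passage can be related through a standard back-of-envelope formula: equilibrium warming under doubled CO2 is the forcing divided by the net feedback parameter. The forcing value below is a common textbook approximation, not a figure from the MPIM model, so treat this only as a rough illustration of what bringing 7°C down to 3.5°C implies:

```python
F_2X = 3.7   # approximate radiative forcing from doubled CO2, W m^-2

def sensitivity(feedback):
    """Equilibrium climate sensitivity (K) for a net feedback
    parameter lambda (W m^-2 K^-1): Delta T = F_2x / lambda."""
    return F_2X / feedback

def feedback_for(target_sensitivity):
    """Invert the relation: the net feedback strength a tuner must
    reach to bring sensitivity down to the target value."""
    return F_2X / target_sensitivity

# The article's numbers: halving the sensitivity from 7 degC to 3.5 degC
# amounts to doubling the net stabilizing feedback.
lam_hot = feedback_for(7.0)    # implied by the "damn good" 7 degC model
lam_cool = feedback_for(3.5)   # implied by the retuned 3.5 degC model
print(round(lam_cool / lam_hot, 2))
```

On this crude accounting, Mauritsen’s cloud-mixing knob had to roughly double the model’s net stabilizing feedback, which gives some sense of how consequential one parameter can be.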
Voosen undoubtedly believes in anthropogenic global warming, as do the editors of Science. But if they thought this article was going to make the public feel better about climate experts, they must be kidding themselves. Bugs, leaks, plumbers: what’s going on here? And look at this photo caption: “Storm clouds are too small for climate models to render directly, and so modelers must tune for them.” Think about that. Surely clouds must be one of the most important factors in any climate theory, yet this says the models cannot resolve real clouds directly. They have to fudge the model. They have to tinker with the numbers to get the result they want.
If modelers were afraid of revealing their secret sauce, what will they do now that the window is open? Published in Science, this exposé of how international climate policy has been shaped by a group of inept tinkerers in back rooms will give the skeptics a field day, like the re-opened FBI investigation into Clinton’s emails. But perhaps that’s just dandy. After all, every cloud has a silver lining, and sunshine is the best disinfectant.
If this goes on in climate science, given all the funding and political pressure involved, you can be sure similar tinkering goes on in models of Darwinian evolution. The DODOs and DOPEs must keep the Darwin skeptics at bay at all costs. Don’t count on transparency there.