Climate Scientists Don’t Know What They Claim to Know
The edifice of climate science that makes the world cower in fear of catastrophe is a house of cards. Listen to the scientists themselves describe how shaky its joists are.
Biden’s climate czar John Kerry had some explaining to do yesterday: why does his work on climate change require him to fly around the world in a private jet, emitting far more carbon than whole African villages do? People like him, he said, have no choice but to fly, as when he flew his private jet to Iceland to receive an award for his climate efforts. The secular media let him get away with that excuse, but many in the public see such behavior as elitist and hypocritical.
Kerry’s jet flights, and the Biden administration’s anticipated push to make climate change a top US priority at huge cost to the economy, ought to prompt critical thinkers to examine the scientific justifications for climate alarmism. After all, we are five years past Al Gore’s prediction that the earth would pass a point of no return by 2016. Periodically CEH likes to look into the literature for evidence of solid science, meaning actual measurements, not just confident assertions. Here are a few recent papers that make the conclusions look less than definitive.
Evidence for Clear‐sky Dimming and Brightening in Central Europe (Geophysical Research Letters). What could be more empirical than automatic measurements of sunlight at a given place over decades? At Potsdam, Germany, scientists have been gathering radiation measurements for 70 years; it is rare to have such a continuous record. Unexpectedly, the record from 1947 to 2017 shows periods of “clear-sky dimming and brightening.” The abstract of this paper reveals that the data don’t provide an objective readout; the data must be “interpreted.” By massaging the data, subtracting out some portions and attributing others to cloud cover and other factors, the four authors arrived at their interpretation. They decided that the periods they focused on showed variations that were “anthropogenically forced rather than of natural origin, with aerosol pollutants as likely major drivers.”
We filter out the effects of clouds on solar radiation in this prominent record, to be able to study the variations in sunlight both under cloudy and cloud‐free conditions. Our analysis shows, that strong decadal variations (dimming and brightening) not only appear when clouds are considered, but also remain evident under cloud‐free conditions when cloud effects are eliminated. This implies that aerosol pollutants play a crucial role in these variations and points to a discernible human influence on the vital level of sunlight required for sustainable living conditions.
Judging from the prior statements, though, this hardly seems like the only plausible interpretation. The authors chose which data to eliminate: the “effects of clouds” in certain periods. How much cloudiness qualified for a data toss? Cloudiness covers a wide range of conditions: fog, haze, distant clouds, patchy clouds, full overcast, and subjective categories in between. When clouds pass by for a few hours on a given day, does that justify filtering out that day’s data? Surely the interpretation of the bottom-line numbers called for some human decision making. That leaves room for cherry-picking the inputs that yield the outputs desired by the funding organization, which would likely be hoping for an anthropogenic result.
Whether or not that is true, the authors admit to fuzziness in the models right up front: “the relative importance of clouds and the cloud‐free atmosphere (particularly aerosols) is currently disputed.” That’s huge. Clouds and aerosols have a major impact on temperatures and climates. One can measure watts per square meter hitting the Earth at a spot, but what other variables, like wind, might cause scatter in the data when the scientist classifies a day as “cloud-free” or not?
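To see how much hangs on that filtering decision, consider a toy sketch with entirely hypothetical numbers (not the Potsdam data): a simulated record with a built-in dimming trend, where the trend a researcher reports shifts with the arbitrary "cloud-free" threshold chosen.

```python
import numpy as np

# Entirely hypothetical numbers, not the Potsdam data: a record with a
# built-in dimming trend, where clouds attenuate the measured sunlight.
rng = np.random.default_rng(0)
days = np.arange(71 * 365)                    # daily record, 1947-2017
years = 1947.0 + days / 365.0

cloud_fraction = rng.uniform(0.0, 1.0, days.size)     # daily cloud-cover proxy
clear_sky = 1000.0 - 0.5 * (years - 1947.0)           # true trend: -0.5 W/m^2 per yr
observed = clear_sky * (1.0 - 0.6 * cloud_fraction)   # clouds block some sunlight

def trend(threshold):
    """Fitted trend (W/m^2 per year) using only days deemed 'cloud-free'."""
    keep = cloud_fraction < threshold
    slope, _ = np.polyfit(years[keep], observed[keep], 1)
    return slope

for thr in (0.2, 0.5, 1.0):
    print(thr, round(trend(thr), 3))   # the reported trend moves with the threshold
```

The point is not that the authors did anything like this; it is that a subjective threshold sits upstream of the headline number, so two honest analysts can report different trends from the same instrument.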
A team of climatologists is studying how to minimize errors in observed climate trend (Phys.org). Thank you, climatologists, for taking care of this. Very nice of you. Pray tell, though, why this late into the game you are trying to get the data right? Can’t you just hold a thermometer in the air each day? Oh, no; it’s much more complicated than that.
Climate observations can often be traced back more than a century, even before there were cars and electricity. These long periods of time mean that it is practically impossible to maintain the same measuring conditions over the years. The most common problem is the growth of cities around urban weather stations. We know that cities are getting warmer and warmer because of the thermal properties of urban surfaces and the reduction of evapotranspiration surfaces. To verify this, it is sufficient to compare urban stations with nearby rural stations. Although less known, similar problems are caused by the expansion of irrigated crops around observatories.
The background of the measurements has been changing. That’s not all: some weather stations have been relocated over the years, making their measurements incommensurable. But nobody need worry. The climatologists are working on “homogenisation methods” to get the numbers right. They can now identify spurious readings to prevent “systemic errors.” In fact, operators of stations can even choose the homogenization method they like. Everybody is happy that way.
Previous studies of a similar kind have shown that the homogenisation methods that were designed to detect multiple biases simultaneously were clearly better than those that identify artificial spurious changes one by one. “Curiously, our study did not confirm this. It may be more an issue of using methods that have been accurately fitted and tested,” says Victor Venema from the University of Bonn.
The experts are sure that the accuracy of the homogenisation methods will improve even more. “Nevertheless, we must not forget that climate observations that are spatially more dense and of high quality are the cornerstone of what we know about climate variability,” concludes Peter Domonkos.
Who decides what data are dense enough and high quality enough? Is it the data that yield the politically correct answers? With so much government funding at issue, one should at least be asking such questions.
Arctic Ocean was once a tub of fresh water covered with a half-mile of ice (Live Science); The Arctic Ocean might have been filled with freshwater during ice ages (Nature). This surprising conclusion of a team of geophysicists from Germany is not “true” scientifically; it is just a possibility that they are considering. What is the implication of having a basin of fresh water as deep as the Grand Canyon under the North Pole for unknown periods of time? For one thing, it means scientists don’t know as much about paleoclimate as they thought.
The Arctic region is undergoing rapid climatic and environmental change1, so knowledge of its past variability is crucial for understanding modern trends and predicting future ones. Ancient climate conditions and ocean behaviour are often reconstructed by analysing marine sediments. But Arctic sediments can be difficult to interpret, and much is still unknown about how the Arctic Ocean changed during specific glacial and interglacial periods over the past few million years2,3. Writing in Nature, Geibert et al.4 report analyses of an isotope of the element thorium in sea-floor sediments, which suggest that the Arctic Ocean swung between being filled with salt water and fresh water during periods of the two most recent glacials. (Nature)
“These results mean a real change to our understanding of the Arctic Ocean in glacial climates,” first study author Walter Geibert, a geochemist at the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research, said in a statement. “To our knowledge, this is the first time that a complete freshening of the Arctic Ocean and the Nordic Seas has been considered — happening not just once, but twice.” (Live Science)
Cancellation of the Precessional Cycle in δ18O Records During the Early Pleistocene (Geophysical Research Letters). Remember when Bill Nye hammered Ken Ham with core records at the 2014 creation-evolution debate? Nye was confident that he had proven the Earth was old. Well, that was then. This new paper says the deep-sea δ18O records used to date and interpret such cores are not complete; some signals have been “canceled” by mixing of ocean waters at depth. The delta-18-oxygen proxy does not correlate with what scientists believe happened. The so-called Milankovitch cycles (see 2 June 2009 and 22 June 2018) also don’t correlate with the records as expected. It looks like a big mess, with enough scatter in the data to come up with any interpretation a scientist wants.
A central conclusion based on these δ18O records is that glacial‐interglacial cycles considerably changed their rhythm during the Mid‐Pleistocene. Curiously, the ∼23,000‐year (precessional) cycle of insolation is absent in Early Pleistocene δ18O records—despite its presence in insolation forcing to the ice sheets. Climate feedbacks involving (sea) ice, geological processes and carbon cycling may have contributed to the MPT [mid-Pleistocene transition]. We, however, show that the absence of an Early Pleistocene precession signal in deep‐sea δ18O records could be the result of destructive interference in the deep ocean, caused by the antiphasing of the precessional cycle between the North Atlantic and Southern Ocean deep‐water sources. We explore the potential for cancellation with an ocean model and show that interference can indeed cause widespread cancellation, particularly in the Early Pleistocene. We, therefore, conclude that the δ18O incompletely archives climatic cycles, challenging our understanding of long‐term climate variability.
So geochemists, what do you know? Apparently not much. It could be this; it could be that. What we were leaning on is not reliable. More work is needed, so please keep that funding flowing!
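The cancellation mechanism the paper proposes is at least easy to illustrate: if two deep-water sources carry the 23,000-year cycle in antiphase, that cycle vanishes from their mixture while an in-phase cycle survives. A toy signal (not the authors' ocean model) shows the effect:

```python
import numpy as np

# Toy signals, not the authors' ocean model: two deep-water sources whose
# 23-kyr (precession) components are in antiphase, while their 41-kyr
# (obliquity) components are in phase.
t = np.arange(0.0, 820.0)                 # time in kyr (820 = 20 x 41)
prec = 2 * np.pi * t / 23.0
obl = 2 * np.pi * t / 41.0

north = np.sin(prec) + 0.5 * np.sin(obl)            # e.g. North Atlantic source
south = np.sin(prec + np.pi) + 0.5 * np.sin(obl)    # antiphased precession

mixed = 0.5 * (north + south)   # the blend archived in deep-sea d18O

def amplitude(signal, period):
    """Fourier amplitude of `signal` at the given period (kyr)."""
    freqs = np.fft.rfftfreq(t.size, d=1.0)
    spec = 2.0 * np.abs(np.fft.rfft(signal)) / t.size
    return spec[np.argmin(np.abs(freqs - 1.0 / period))]

print(amplitude(mixed, 23))   # ~0: the precessional cycle cancels
print(amplitude(mixed, 41))   # ~0.5: the obliquity cycle survives
```

In other words, a cycle that was driving the climate the whole time can be entirely invisible in the archive, which is precisely why the authors say δ18O “incompletely archives climatic cycles.”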
Disproportionate control on aerosol burden by light rain (Nature Geoscience). Nothing should be more measurable and observable in climate science than rain. Weather stations can measure this automatically. So what’s the problem? These geoscientists admit that interpreting the differences in climate effects between light rain and heavy rain is an inexact science.
Atmospheric aerosols are of great climatic and environmental importance due to their effects on the Earth’s radiative energy balance and air quality. Aerosol concentrations are strongly influenced by rainfall via wet removal. Global climate models have been used to quantify their climate and health effects. However, they commonly suffer from a well-known problem of ‘too much light rain and too little heavy rain’. The impact of simulated rainfall intensities on aerosol burden at the global scale is still unclear. Here we show that rainfall intensity has profound impacts on aerosol burden, and light rain has a disproportionate control on it. … The implication of these findings is that understanding the nature of aerosol scavenging by rainfall is critical to aerosol–climate interaction and its impact on climate.
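The "disproportionate control" claim is easy to see in a toy decay model. Assume, with invented coefficients rather than anything from the paper, that the scavenging rate grows sub-linearly with rain intensity; then the same total rainfall removes more aerosol as drizzle than as a downpour:

```python
import math

# Toy removal model with invented coefficients: aerosol burden decays as
# dB/dt = -lam * B while it rains, and the scavenging coefficient grows
# sub-linearly with rain rate: lam = a * R**b with b < 1.
a, b = 0.1, 0.6   # hypothetical constants (per hour, and per (mm/h)**b)

def remaining_fraction(rate_mm_per_h, hours):
    """Fraction of the aerosol burden left after rain of a given rate and duration."""
    lam = a * rate_mm_per_h ** b
    return math.exp(-lam * hours)

# The same 10 mm of total rainfall, delivered two ways:
light = remaining_fraction(1.0, 10.0)    # drizzle: 1 mm/h for 10 h
heavy = remaining_fraction(10.0, 1.0)    # downpour: 10 mm/h for 1 h

print(round(1 - light, 2))   # fraction removed by drizzle: 0.63
print(round(1 - heavy, 2))   # fraction removed by downpour: 0.33
```

That is why a model that rains too often and too lightly can scrub far too much aerosol out of its atmosphere, even if its total precipitation is correct.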
The previous two papers are open-access for those who wish to read further. Before swallowing conclusions as affirmations that “now we know” what is going on, read critically. How do the new findings change what climate scientists thought they knew previously? Remember, all the predictions of an Earth on the tipping point of climate catastrophe were made long before these new findings called the models into question. And papers like these have appeared frequently over the years that CEH has been examining the published evidence.
We think people should be aware of how science sausage is made: it’s not always simple and pretty. Scientific consensus is often less a summary of settled facts than a group effort at stirring a witches’ brew that can be sold to the public to induce groupthink.