July 24, 2024 | David F. Coppedge

Models Are Not Facts

Scientists rely on computer models,
but are some of them creating
illusions of reality?


Computer models are useful tools in many branches of science. Through models, astronomers envision the nature of the interiors of stars and planets, meteorologists predict the paths of hurricanes, and geophysicists predict the impacts of changing ocean currents.

But models are not the same as realities: they are only simulations of realities. The complexity of many phenomena, such as historical and future climate, challenges simulation. The reliability of a computer model depends on the accuracy and completeness of the empirical data fed into it, on the applicability of the equations it uses, and on its track record of congruence with observational data.
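
How sensitive is a model to the accuracy of its inputs? Consider a toy illustration, a textbook chaos demonstration in Python rather than any published model's code: iterate the logistic map from two starting values that differ by one part in a million, and within fifty steps the two runs bear no resemblance to each other.

```python
# Toy illustration (not any real model's code): the logistic map,
# a textbook chaotic system, shows how a tiny error in the input
# data can swamp a simulation's output after a few dozen steps.

def logistic(x0, r=3.9, steps=50):
    """Iterate x -> r*x*(1-x) from starting value x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

print(logistic(0.500000))  # the "measured" input
print(logistic(0.500001))  # same input with a one-in-a-million error
# The two outputs land in completely different places.
```

If a one-line toy equation behaves this way, a model built from thousands of coupled equations shows its inputs no more mercy.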

Increasingly, scientists rely on software written by non-scientists. These programs often make simplifying assumptions to save processing power. How can scientists be sure that all the factors were considered and balanced? What unknowns were acknowledged? What “unknown unknowns” could increase the error in conclusions? It is risky to use off-the-shelf computer models without knowing the assumptions used by the programmers.
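
To make the danger concrete, here is a hypothetical sketch of a hidden simplifying assumption; it is our invented example, not drawn from any scientific package. Forward Euler integration is a classic shortcut for saving processing power. Applied to a frictionless pendulum, it quietly injects energy at every step, so the simulated pendulum swings higher forever. A user who trusted the screen without knowing the method would blame the physics rather than the program.

```python
import math

# Hypothetical example of a hidden simplifying assumption: forward
# Euler integration of a frictionless pendulum (simple harmonic
# oscillator). The cheap method quietly adds energy every step --
# an artifact of the shortcut, not of the physics being modeled.

def euler_amplitude(dt=0.1, steps=1000):
    x, v = 1.0, 0.0                    # released from rest at amplitude 1
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x  # forward Euler update
    return math.hypot(x, v)            # true physics keeps this at 1.0

print(euler_amplitude())  # roughly 145: the "pendulum" has gained energy
```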

While computer models can be useful as heuristic tools, they cannot replace observation. Observation is one of the pillars of science that presumably separates it from other forms of knowledge: observe a phenomenon, make a hypothesis to explain the phenomenon, test the hypothesis with measurements. Models are one step removed from this classical Baconian method, leaning on someone else’s understanding of how things work.

The field of philosophy of science has a long history of debates about ontology (the nature of reality) and epistemology (how we know reality). Philosophers have had many deep discussions about what constitutes evidence, explanation, and justification, and about Baconian empiricism (experimental science) versus Cartesian rationalism (logical deduction). A typical scientist travels the path from high school to PhD with very little exposure to these important discussions about the nature of scientific truth. Instead, they learn “how to do science” from professors, advisors, and peers. They learn what software to use and what inputs to give it, rarely questioning the reliability of the results, and conclude that the output on the screen represents reality.

For models that can be compared with observations (such as a hurricane’s path), confidence in a model can be justified. But what about models that describe the unobservable past and future? What about models that deal with phenomena that cannot be directly observed even in principle? Think about these questions as we examine recent news about scientific findings that relied on computer models.

Scientists assess how large dinosaurs could really get (24 July 2024, Queen Mary University of London). Using computer models, scientists concluded that “The maximum size of T. rex is estimated to be 70% larger than current values.” Artists went forth to draw a dinosaur 70% bigger than the largest known fossil, even though no scientist has ever witnessed a living dinosaur. Notice the unknowns and assumptions (a toy sketch of this kind of model follows the excerpt):

Mallon and Hone computationally modelled a population of Tyrannosaurus rex. They factored in variables like population size, growth rate, lifespan, preservation biases in the fossil record, and more. Body size variance at adulthood, which is still poorly known in T. rex, was modelled with and without sex differences, based on living alligators. Tyrannosaurus rex was chosen as a model because it is a well-studied dinosaur with much of this information known or with good estimates….

The values are estimates based on the model, but patterns of discovery of giants of modern species tell us there must have been larger dinosaurs out there that we have not yet found.
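
Mallon and Hone's actual code is not in the press release, but the general approach can be sketched. In the toy Monte Carlo below, every number is invented for illustration: assume a body-size distribution, simulate an enormous population, and ask how much bigger the largest “unfound” individual is than the largest of a handful of “fossils.” The answer is manufactured entirely by the assumptions.

```python
import random

# Toy Monte Carlo sketch of this *kind* of model -- NOT Mallon and
# Hone's code. Every parameter below is invented for illustration;
# change any assumption and the "maximum size" changes with it.

random.seed(42)
POPULATION   = 1_000_000  # assumed adults alive across the species' span
MEAN_MASS_KG = 7_000      # assumed mean adult body mass
SD_MASS_KG   = 1_000      # assumed size variance at adulthood (poorly known)
FOSSILS      = 30         # roughly the number of good T. rex skeletons known

def largest_of(n):
    """Mass of the biggest individual in a simulated sample of n adults."""
    return max(random.gauss(MEAN_MASS_KG, SD_MASS_KG) for _ in range(n))

print(f"biggest 'fossil' found: {largest_of(FOSSILS):,.0f} kg")
print(f"biggest that 'existed': {largest_of(POPULATION):,.0f} kg")
```

The gap between the two printed numbers is real arithmetic, but it says nothing about T. rex unless the assumed distribution, population size, and preservation rate are right, and none of those was observed.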

Where do the moon’s weird swirls come from? Scientists are trying to find out (24 July 2024, Space.com). Keith Cooper writes about attempts to explain unusual formations on the moon, our nearest neighbor in the solar system. His report illustrates how models based on competing hypotheses cannot always resolve the best explanation. After looking into four hypotheses, he says, “But rather than ruling any of these explanations out, the new results actually seem to support multiple models.”

Mercury has a layer of diamond 10 miles thick, NASA spacecraft finds (24 July 2024, Space.com). Obviously no one has drilled down to the outer core of the planet Mercury. Scientists used computer modeling to settle on a picture of “unobserved reality” deep under its observable surface. A layer of diamond ten miles thick, they say, must have formed around the planet’s core. The model, however, makes assumptions about the thermal history of Mercury and periods of volcanism billions of unobserved years ago. To what extent does the clickbait headline correspond to scientific facts rather than to imaginary realities glistening on a computer screen?

Early humans began wiping out elephant relatives 1.8 million years ago (24 July 2024, New Scientist). Reporter Michael Le Page doesn’t question the use of computer models by European Darwinists who claim that human ancestors began killing off dozens of species of proboscideans (elephant-like mammals) 1.8 million years ago. The new flawed computer model replaces an old flawed computer model.

Previous models of this kind have been limited to looking at the effect of just one factor, such as climate, but by taking advantage of AI, the team’s model can estimate the relative contribution of numerous factors, says Hauffe. “We combined everything in a single analysis.”

But they weren’t there, and neither was the reporter.

Southern Ocean absorbing more CO2 than previously thought (24 July 2024, University of East Anglia). This story illustrates the problem of upsets: previous models relied on possibly incorrect assumptions. Researchers at this university decided that “current models and float data do not account for small, intense CO2 uptake events.” They ran computer models with revised inputs and concluded that the oceans absorb more CO2 than previously thought. But are their models now an unassailable standard? They acknowledge that direct measurements of such events are hard to come by. What other unknowns could revise the model in the future? Climate models, everyone knows, have huge ripple effects on government policies and on popular culture.
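
The sampling problem the researchers describe is easy to picture with a toy calculation, using invented numbers rather than the UEA model: if a year of daily CO2 uptake includes a few short, intense events, an instrument that samples only every ten days can miss all of them and understate the true average.

```python
# Toy sketch of the sampling problem (invented numbers, not the
# UEA model): brief intense uptake events slip between the cracks
# of sparse sampling, biasing the average low.

daily = [1.0] * 365                   # baseline uptake, arbitrary units
for day in (33, 101, 187, 254, 322):  # five brief intense uptake events
    daily[day] = 21.0

true_mean    = sum(daily) / len(daily)
sampled      = daily[::10]            # one measurement every 10 days
sampled_mean = sum(sampled) / len(sampled)

print(f"true mean uptake:    {true_mean:.2f}")     # about 1.27
print(f"sampled mean uptake: {sampled_mean:.2f}")  # 1.00 -- events missed
```

Revising the inputs fixes this particular blind spot, but the exercise cannot reveal which blind spots remain.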

Trees reveal climate surprise – bark removes methane from the atmosphere (24 July 2024, University of Birmingham). Here is an example of how observations can undermine models. Much of the climate scare behind international conferences and draconian measures comes from models. Researchers at the University of Birmingham uncovered another “unknown unknown” that was never used as input: “Microbes hidden within tree bark can absorb methane – a powerful greenhouse gas – from the atmosphere.” This was found by measurement, not by models. “Soil had been thought of as the only terrestrial sink for methane, but the researchers now show that trees may be as important, perhaps more so.”

Our periodic reporting on climate science has shown many other factors that were not considered in the leading climate models (e.g., 22 Sept 2020). And yet the climate scare continues, as if the political tail were wagging the empirical dog.

The point of this article is not to dispute the value of models. In many contexts they are highly useful. James Clerk Maxwell made a model of Saturn’s rings to deduce that the rings must be made of separately orbiting particles, a deduction confirmed long after his death. Lord Kelvin taught his students to represent their hypotheses with models. Models are good when they can be corroborated by evidence. But when misused, they can distort reality, replacing knowledge with useful fictions to support an ideology. Scientists need to keep models in their place: they are not scientific facts until corroborated by observational evidence.

Evolutionary theory is an extreme example of model misuse. Darwinians cannot observe macroevolution occurring by natural selection under their noses, and they cannot experience millions of Darwin Years, so they build models assuming natural selection and Deep Time to “visualize” how dogs became whales and apes became people.* That is not science. It is sophisticated storytelling masquerading as science. Their computer screens portray fictional scenarios as real events. Scriptwriters and animators looking for work are only too happy to transform those imaginary visions into “reality” shows. The public watches these fictions, thinking they understand dinosaurs after watching Jurassic Lark (lark, v.: “to behave mischievously; play pranks”).

*For example, see this paper in Science last May, “Evolvability predicts macroevolution under fluctuating selection.” A look at their methods shows that the conclusions are built on computer modeling. But they never observed macroevolution or fluctuating selection; they only assumed it. Inputting fossils into their model did not add reliability, because they assumed macroevolution and Deep Time in the fossil dates and in the organisms’ presumed evolutionary lineages.

