Scientific Uncertainty Is Cosmic in Scope
Different researchers can use the same data to reach opposite
conclusions. This contributes to a crisis of confidence in science.
— Data Does Not Interpret Itself. It Must Pass Through the Mental Digestive System of Humans. —
Scientism is the belief that the scientific method is the best (if not the only) reliable way to find truth. In defense of scientism, the logical positivists argued that scientific conclusions can be replicated by others. Unfortunately, several meta-studies of scientific publications “have repeatedly failed to replicate a significant proportion of previously published results.” So says a European team seeking to understand why this is so.
The replication crisis: Researchers reveal a hidden universe of uncertainty (University of Luxembourg via Phys.org, 29 Nov 2022).
Note that ominous phraseology: “a hidden universe of uncertainty.” It does not bode well for logical positivists and purveyors of scientism.
The University of Luxembourg’s Department of Social Sciences contributed to a large-scale replication study that aimed to understand the role of decisions that scientists make during the research process. Published in the Proceedings of the National Academy of Sciences, the study highlights the importance of open science and collaboration among scientists.
The title of the paper is equally cosmic in scope:
Breznau et al., Observing many researchers using the same data and hypothesis reveals a hidden universe of uncertainty. PNAS, 28 Oct 2022, 119 (44) e2203150119. DOI: 10.1073/pnas.2203150119.
The press release mentions a loss of confidence in science during the Covid-19 pandemic, not just among conspiracy theorists or one political party. “It has also become apparent that both the public and experts have difficulty dealing with scientific uncertainty.” The “replication crisis” is part of the mix, they say. But is the uncertainty apparent or real?
Even if this can be explained by errors, publication pressure and bias or even questionable research practices, the failure of replications undermines the role of science as a reliable producer of knowledge.
So can it be explained by a few human errors or dubious practices? Or is there some inherent flaw in the generation of scientific knowledge?
A Meta-Study Uncovers Rot in the Root
The project involved 163 researchers in 73 teams around the world. They set out to test a single social-science hypothesis against the same data: “that more immigration reduces public support for government provision of social measures.”
Interestingly, the 1,253 statistical models contributed by the research teams produced very different results—even though they used the same data. The analytical choices, the expertise of the researchers and their expectations could not explain these huge differences in the results.
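To make “same data, different results” concrete, here is a minimal sketch in Python (a toy simulation on synthetic data; it is not the study’s dataset, and the variable names are hypothetical stand-ins). Each simulated “team” fits a defensible regression to identical data but chooses a different set of control variables, and the estimates for the focal coefficient spread widely:

```python
# Toy "many analysts" simulation: defensible regression specifications,
# all fit to the SAME synthetic data, yield a wide spread of estimates.
# (Hypothetical illustration only; not the Breznau et al. data or models.)
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 500

# One shared dataset: a focal predictor x, controls correlated with it, outcome y.
x = rng.normal(size=n)
z1 = 0.6 * x + rng.normal(size=n)
z2 = -0.4 * x + rng.normal(size=n)
z3 = rng.normal(size=n)
y = 0.2 * x + 0.5 * z1 - 0.3 * z2 + rng.normal(size=n)

controls = {"z1": z1, "z2": z2, "z3": z3}
estimates = []

# Each "team" picks its own subset of controls; every choice is defensible.
for r in range(len(controls) + 1):
    for subset in itertools.combinations(controls, r):
        X = np.column_stack([np.ones(n), x] + [controls[c] for c in subset])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        estimates.append(beta[1])  # coefficient on the focal predictor x

print(f"{len(estimates)} specifications; estimates for x range "
      f"from {min(estimates):+.3f} to {max(estimates):+.3f}")
```

Even in this tiny example the eight specifications diverge through omitted-variable effects alone; multiply that by thousands of real modeling decisions (measures, estimators, subsamples, weights) and the spread only grows.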
How, exactly, does this make science different from politics? Gather 1,253 politicians and have them give their opinions on the same hypothesis. Would anyone expect them to all agree, even if they looked at the same evidence?
The authors concluded that a “hidden universe of uncertainty remains.” Consequently, “scientists, especially those working with the complexities of human societies and behavior, should exercise humility and strive to better account for the uncertainty in their work.” The authors further highlight the potential and importance of transparent and collaborative research.
Humility 101
Scientists do not generally take classes in humility in their training. And are they immune from the desire for their conclusions to be seen as confident, certain, and trustworthy? What is the public to think if Scientist A uses a data set to arrive at one conclusion, but Scientist B uses the same data set to come to an opposite conclusion?
The paper has a long list of authors. But when they compared the 1,253 models, there was no convergence toward a “true” and reliable answer. Moreover, the vast differences in conclusions could not be explained by simplistic appeals to political bias or to the researchers’ level of expertise. The uncertainty is inherent in the science.
In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers’ expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team’s workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.
This particular meta-study was about the social sciences (e.g., sociology, psychology, anthropology, economics, political science). The authors did not comment on whether similar uncertainties plague the “hard science” categories: physics, chemistry, and engineering. Physicists are taught about sources of error; good physicists know that they must propagate error through their calculations (the “plus or minus” figures attached to measurements, or error bars on graphs) and must account for error accumulation in derived quantities. Reducing error and increasing precision is a typical goal in the hard sciences.
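As a standard textbook illustration of that discipline (our example, not one from the meta-study): when a quantity is computed as a product of measurements, the relative uncertainties combine in quadrature,

$$A = LW, \qquad \frac{\sigma_A}{A} = \sqrt{\left(\frac{\sigma_L}{L}\right)^2 + \left(\frac{\sigma_W}{W}\right)^2}.$$

For instance, L = 10.0 ± 0.1 m and W = 5.0 ± 0.1 m give A = 50.0 m² with relative uncertainty √(0.01² + 0.02²) ≈ 2.2%, so the honest report is A = 50.0 ± 1.1 m². Hard-science training drills this bookkeeping; the meta-study suggests that the modeling choices in social science introduce uncertainty with no such standard accounting.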
Even then, not all claims of precision are free of worldview bias. Evolutionists and geologists typically speak of dates for unobserved events millions or billions of years ago with stated accuracies of 3-4 significant figures, without revealing the assumptions inherent in their dating methods and their unquestioned acceptance of deep time (24 March 2022).
In engineering, failures to meet requirements (a rocket that blows up, a bridge that collapses) are more likely to humble scientists who reached wrong conclusions. Nevertheless, this meta-study did not distinguish between the sciences. The authors’ clause, “scientists, especially those working with the complexities of human societies and behavior,” implies that every scientific field is prone to uncertainty to some extent. The call for humility, transparency, and clarity is just as important in science as in all other scholarly endeavors.
Social Science Marries Hard Science
Even in the hard sciences, political biases or overconfidence can taint research and lead to opposite conclusions, resulting in breakdowns of public confidence. Since the press release mentioned the pandemic, consider what happened then. In matters of vaccine safety, alternative medicines, mask wearing, and lockdowns, Democrats were using “science” to shout one thing, and Republicans were using “science” to shout the opposite. One study would use peer-reviewed research to conclude that Ivermectin was completely ineffective against the coronavirus, and another would claim it was highly effective if administered early. The same thing happened with all the other controversies about masks, lockdowns, and vaccine mandates. Did “science” speak with disinterested clarity in those cases? Each political party had champions preaching from soapboxes, in documentaries, commercials, and bumper stickers. Into this toxic social milieu came potential conflicts of interest, hidden agendas, and selective reporting of results. Can anyone claim to have the truth, the whole truth, and nothing but the truth in such matters?
These are important questions. Right now, millions of Chinese citizens have taken to the streets to protest months of lockdowns by the communist government, lockdowns that are resulting in mass starvation and loss of human rights. President Xi’s totalitarian regime uses “science” as a weapon for its zero-Covid policy. Sadly, some “western” countries (Australia, New Zealand, and others) followed similar policies, relying on certain “experts” who claimed to know how to stop a virus. Within the USA, some states were rigid about lockdowns and some were lax, yet governors on both sides often claimed success.
Regardless of where you stand on these issues, the point is that uncertainty and bias are inherent in science. When uncertain science drives government policy, danger lurks. Yes, we can show that a scientific model is flawed when a bridge falls into the river, but much of science is far less certain. This is especially true of human health issues, in part because there is so much variance between individuals and groups. Do you need vitamin supplements or not? Does carrying a cell phone increase your risk of cancer? Are genetically modified foods safe? A report on Nov 24 from the University of Wisconsin overturns the long-held expert opinion that you should drink 8 cups of water a day. “Stop counting cups,” the headline says; “There’s an ocean of difference in our water needs. Some people need a little; others need a lot.” Such blanket generalities do not help people. Solid proof is hard to come by, and opinions vary all over the map.
Never build a firm opinion on one study, because there are probably other papers saying the opposite. Many times, too, we have reported on scientists rethinking older “solid science” and demonstrating it was not so solid after all. Common sense is needed when considering scientific claims. What were the potential biases of the researchers? Did they have conflicts of interest? Who funded the work? How big were the error bars? Were the scientists humble, clear, transparent, and tentative about their conclusions? Is one side trying to censor the findings of the other? Science is not like individual justice: it should be considered guilty till proven innocent.
Biology lies in the never-never land between hard and soft sciences. Drop a mouse, and it will fall under gravity (hard science). But did mice evolve from some reptilian ancestor millions of years ago? That is soft storytelling. We saw Darwin in yesterday’s post trying to associate himself with Isaac Newton to lend legitimacy to his Stuff Happens Law.
A data set does not interpret itself. Conclusions must first pass through the mental digestive system of flawed humans. Some of it ends up building healthy tissues. Some of it ends up in the toilet because it has no nutritional value. We think Darwinism is junk food for flabby thinkers. See DIDO in the Darwin Dictionary: Darwin In, Darwin Out. Corollaries include DIGO (Darwin In, Garbage Out) and GIDO (Garbage In, Darwin Out).
All science is social science. Never has there been a more urgent need for rational citizens to use critical thinking about the claims of science.