August 31, 2023 | David F. Coppedge

Big Science Airs Its Dirty Laundry

All is not well in scientism. Big Science is as fallible as other human institutions.


For all the talk on social media about “trust the science” and “the science is settled!”, few readers hear from the top guns in Big Science (journals, lobbyists, academic deans) about their deep concerns over scientific reliability and reputation.

A True Story Replayed

Perhaps a lesson from history would humble establishment science a bit. Science Magazine today (31 Aug) reviewed a new play starring Mark Rylance about “Ignaz Semmelweis, a 19th-century Hungarian physician whose simple discovery would eventually change medicine forever.” His discovery? If doctors simply washed their hands between performing autopsies and delivering babies, maternal mortality would drop by 90%. Incidentally, he owed the discovery to a lowly janitor, who told Semmelweis that the bleach in his mop bucket helped cut the smell from the autopsies. Semmelweis washed his hands with the bleach solution and noticed that the odor on his own hands faded. When he worked the maternity ward with his new habit, maternal fatalities dropped dramatically.

Playing Semmelweis, Mark Rylance is astonished at his clean hands after washing with bleach. From the new play Dr Semmelweis, reviewed in Science Aug 31, 2023.

It’s a great story, and the reviewer gave it good marks, calling it “an excellent play about science.” But the lessons go deeper than science. The scientific establishment of the day would not believe Semmelweis. They were too proud. They were the experts, and he was the weird doctor with the Eastern European accent and a slight stutter. Nor were they convinced by the evidence that deliveries by midwives produced one-third the maternal fatalities of those performed by the “expert” doctors.

“Compelling as the data are, the medical establishment is not convinced,” the reviewer says. “Physicians scoff at the idea that such a simple practice could make a difference, and they resent the implication that they are in some way contaminated.” It drove Semmelweis to madness. “Semmelweis is ultimately committed to an insane asylum, driven there by an overwhelming sense of the tragedy of a woman giving birth and dying soon after,” the review ends. “Ironically, he dies during his confinement from sepsis due to an unwashed wound.” The lesson from Semmelweis is not about the scientific method. It is about character—humility, integrity, critical thinking—qualities incumbent upon all humans in all professions. Big Science ignores these values at its peril.

Big Bad Science

Here are a few recent examples from the news illustrating the high error rate in Big Science today.

‘Major errors’ alleged in landmark study that used microbes to identify cancers (Science Magazine, 2 Aug 2023). Hundreds of citations. 10 new studies. A commercial venture. These are some of the fruits of a flawed study published in the world’s leading Big Science journal, Nature, in 2020. Now look what critics are saying:

a group of researchers claims to have found “major data analysis errors” that undermine the paper’s conclusions. According to a manuscript the critics posted this week on the preprint server bioRxiv, the Nature authors failed to properly filter out human DNA from a database of sequenced cancer tissues. This led to millions of human sequences being wrongly classified as microbial—perhaps explaining why the study found improbable microbes such as a seaweed bacterium associated with bladder cancer, for example.

A separate, computational error related to the team’s analysis inadvertently generated cancer-specific patterns where there weren’t any, the preprint also contends. The paper’s “major conclusions are completely wrong,” says one of the preprint’s authors, Johns Hopkins University computational biologist Steven Salzberg.

To err is human, but scientism is the belief that ‘science’ is exceptional because the scientific method and peer review weed out most mistakes. Here was a whopper that fooled the world of Big Science for three years. It makes one ask which bombshell discoveries currently in vogue will be debunked three years from now, or later, or perhaps never caught at all.
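To see how a mistake like this can happen, here is a minimal sketch of the kind of host-DNA filtering step the critics say was botched. It is purely illustrative: the toy k-mer screen stands in for a real aligner, and none of the names or data come from the actual study.

```python
# Hypothetical sketch: reads matching the human reference must be removed
# BEFORE microbial classification. Skip or botch this step, and human DNA
# masquerades as "microbes" downstream.

def split_reads(reads, human_kmers, k=21):
    """Separate human-derived reads from candidate-microbial reads.

    A read is flagged as human if any of its k-mers appears in a set
    built from the human reference genome (a toy stand-in for a real
    alignment screen such as BWA or Bowtie2).
    """
    human, microbial = [], []
    for read in reads:
        kmers = {read[i:i + k] for i in range(len(read) - k + 1)}
        if kmers & human_kmers:
            human.append(read)       # discard: host contamination
        else:
            microbial.append(read)   # keep for microbial classification
    return human, microbial

# Toy data: a 30-base "human" sequence seeds the k-mer index.
human_ref = "ACGTACGTACGTACGTACGTACGTACGTAC"
human_kmers = {human_ref[i:i + 21] for i in range(len(human_ref) - 20)}

reads = [
    "ACGTACGTACGTACGTACGTACGTACGTAC",  # human-derived read
    "TTGACCTTGACCTTGACCTTGACCTTGACC",  # plausibly microbial read
]
human, microbial = split_reads(reads, human_kmers)
print(f"{len(human)} human read(s) removed, {len(microbial)} kept")
```

If the filter is too permissive, the “microbial” pile fills with human sequences, and downstream statistics will happily find patterns in the contamination.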

Medicine is plagued by untrustworthy clinical trials. How many studies are faked or flawed? (Nature, 25 Aug 2023). Clinical trials are supposed to represent the cream of the crop in scientific methods. They use controlled experiments, often with double-blind testing (where neither patient nor doctor knows who is getting the medicine). Van Noorden and Thompson say in this article, “Teams of scientists, physicians and data sleuths argue that in some fields unreliable or fabricated trials are widespread.”
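For readers unfamiliar with the design, here is a minimal sketch of double-blind assignment. It illustrates the general idea only, not any particular trial’s protocol; the kit codes and group names are made up.

```python
import random

# Hypothetical sketch of double-blinding: each patient receives an opaque
# kit code. The key mapping codes to "drug" or "placebo" is sealed with a
# third party until the trial ends, so neither patient nor physician knows
# who is getting the medicine.

def randomize(patients, seed=42):
    rng = random.Random(seed)
    key = {}      # sealed: code -> arm (held by the trial statistician)
    blinded = {}  # visible: patient -> code (all anyone at the bedside sees)
    for i, patient in enumerate(patients):
        code = f"KIT-{i:03d}"
        key[code] = rng.choice(["drug", "placebo"])
        blinded[patient] = code
    return blinded, key

blinded, key = randomize(["P001", "P002", "P003", "P004"])
print(blinded)  # doctors and patients see only kit codes
# The key is unsealed only at analysis time:
# print({p: key[c] for p, c in blinded.items()})
```

The design works only if the sealed key stays sealed and honest, which is one reason data sleuths pay such close attention to randomization records in suspect trials.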

Australia grapples with how to investigate scientific misconduct (Nature, 24 Aug 2023). The foxes are guarding the henhouse in Australia.

Australia’s academics are grappling with how to handle investigations into scientific misconduct. Unlike many other countries, the nation does not have an independent body to oversee such probes; instead, universities and research institutes carry them out themselves. Several high-profile misconduct cases are bolstering criticisms of the current system, and momentum is building to set up an independent research-integrity body — but university leaders are divided over whether one is needed.

The decision will not be made by the scientific method. It will be made by fallible humans with biases and political motives.

Relationship between journal impact factor and the thoroughness and helpfulness of peer reviews (PLoS Biology, 29 Aug 2023). Like people in most other lines of work, scientists like to look at what’s hot and what’s not. “Journal impact factor” pretends to be a measure of scientific value, but these authors expose it as a false prophet.

The Journal Impact Factor is often used as a proxy measure for journal quality, but the empirical evidence is scarce. In particular, it is unclear how peer review characteristics for a journal relate to its impact factor. We analysed 10,000 peer review reports submitted to 1,644 biomedical journals with impact factors ranging from 0.21 to 74.7….  In conclusion, peer review in journals with higher impact factors tends to be more thorough, particularly in addressing study methods while giving relatively less emphasis to presentation or suggesting solutions. Differences were modest and variability high, indicating that the Journal Impact Factor is a bad predictor of the quality of peer review of an individual manuscript.

Learning from failures: Support for scientific research needs to include when things don’t work out (The Conversation, 27 Aug 2023). It’s only human to want to report dramatic results. Journal editors like it, universities like it, and scientists like basking in the fame of a clickbait-worthy discovery. When an experiment fails, it’s tempting for a researcher to move on to something else.

But science should be impartial. If the null hypothesis is right (“the treatment does nothing”), then that fact should be published; a sketch after this item shows what such a null result looks like. The two authors of this piece point out that biology is too complex to fit the simplistic beliefs of the textbooks.

Geneticists have been taught that chromosomes are independent, don’t modify each other’s expression and that gene expression is similar between individuals. Except they aren’t, they do and it isn’t.

They share from their own experience how a failed experiment set them on a path to paradigm-breaking research.

Breakthroughs in understanding require dynamic science and scientists who are supported to explore, ask unusual questions and, occasionally, fail in the lab. Sometimes the most important results from an experiment are the questions it forces us to ask.

How often does this happen, though? How many scientists move on, never facing the questions from a failed experiment?
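To make the null-result idea concrete, here is a minimal sketch with made-up data in which the “treatment” truly does nothing, tested with a simple permutation test. All numbers are invented for illustration.

```python
import random

# Hypothetical sketch of a null result: both groups are drawn from the SAME
# distribution, so the honest finding is "no effect." That finding deserves
# publication as much as a positive one.

random.seed(0)
treated = [random.gauss(100, 15) for _ in range(30)]
control = [random.gauss(100, 15) for _ in range(30)]

def mean(xs):
    return sum(xs) / len(xs)

observed = abs(mean(treated) - mean(control))

# Permutation test: shuffle the group labels and count how often a
# difference at least this large arises by chance alone.
pooled = treated + control
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if abs(mean(pooled[:30]) - mean(pooled[30:])) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}, p = {p_value:.3f}")
# With identical groups, a large p-value is the expected outcome: the null
# hypothesis stands, and that is a result, not a failure.
```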

Want to speed up scientific progress? First understand how science policy works (Nature, 21 Aug 2023). This commentary piece by six authors with science and policy experience describes how “Researchers and policymakers often exist in different worlds and speak different languages.” Their stated goal of bridging that divide implies that Big Science and Big Government remain poorly connected.

Science is a key driver of economic growth and social progress. If science can be accelerated — such as by increasing the efficiency with which research dollars translate into discoveries and commercialized inventions — so can growth. Metascience researchers, like us, can generate evidence on the best way to accelerate science. Much is being learnt, but closer partnerships between researchers and policymakers could allow scientists to do much more.

But should scientists seek to influence government by spending time in government positions, as they suggest? The authors say it would be a good thing, but mixing two special-interest perspectives could make matters worse. By analogy, mixing the Executive and Legislative functions of government would weaken the principle of checks and balances and could lead to tyranny. The authors encourage “use-based” science, but useful to whom? Hidden in the article are some good ideas, but much of it has an undertone of scientism, the myth of progress, and misplaced trust in the integrity of both Big institutions.

Our priorities are all wrong when it comes to new technologies (New Scientist, 30 Aug 2023). “We can’t get life-saving drugs, but we can get dubious self-driving taxis, says Annalee Newitz” in this complaint. She recounts being unable to get Paxlovid during the pandemic while San Francisco was testing self-driving taxis. “I can’t get a widely-available drug that can mitigate a life-threatening illness without a fight, but I can easily hail a robo-taxi that may cause mayhem on the streets. The future is here, but it’s absurd.”

This leads to an obvious question: Who sets priorities? The scientific method? Peer review? No: priorities are set by fallible humans. This includes priorities about what scientific questions are worth pursuing. What perverse incentives are possible there? Money. Power. Fame.

The Philosophy of Error

Perspectives on scientific error (Royal Society Open Science, 31 July 2023). Big Science’s dirty laundry is on full display in this paper by 15 authors. Scientism is dead. Peer review is no guarantee of accuracy. Scientific publishing is full of misinformation.

Theoretical arguments suggest that many published findings are false, and empirical reports across fields show that many published findings do not replicate. Spurious or non-replicable research findings suggest a high prevalence of scientific errors in the literature.

The failures can be chalked up to bias or to carelessness.

In this paper, we categorize scientific error as belonging to one of two types. One type of error results from bias and influences scientific output through factors not related to scientific content, but through extraneous factors such as career prospects, funding opportunities and the peer-review process. The other type of error results from mistakes and influences scientific output through inaccuracies and mistakes in the research process itself.

The authors are not talking about ‘error’ the way physicists do. There is inherent error in any measurement. Good scientific practice requires noting error bars on graphs and stating the plus-or-minus values around a measurement. It also requires propagating the error as it accumulates through a calculation to a conclusion. Like entropy, error tends to increase when measurements are combined.
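For readers who want the textbook version, the standard first-order propagation rules (a physics-course identity, not anything from the paper under review) show how uncertainties compound:

```latex
% Standard first-order error propagation for independent measurements.
% Sum: absolute uncertainties add in quadrature.
z = x + y \quad\Longrightarrow\quad \sigma_z = \sqrt{\sigma_x^2 + \sigma_y^2}
% Product: relative uncertainties add in quadrature.
z = xy \quad\Longrightarrow\quad
\frac{\sigma_z}{z} = \sqrt{\left(\frac{\sigma_x}{x}\right)^{2}
                         + \left(\frac{\sigma_y}{y}\right)^{2}}
% Worked example: x = 10.0 \pm 0.5 and y = 20.0 \pm 0.5 give
% z = x + y = 30.0 \pm 0.7, since \sqrt{0.5^2 + 0.5^2} \approx 0.71.
```

Every added measurement contributes its own term under the square root, which is why a conclusion resting on a long chain of measurements carries more uncertainty than any single link.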

This kind of error is different. It is error due to bias and carelessness. It is error due to wrong assumptions about what science can do. And it can mislead leaders into misplaced confidence in the trustworthiness of science.

The existence of errors in science highlights important practical, philosophical and societal issues. From a practical perspective, errors mislead and slow down research projects. From a philosophical perspective, errors raise questions about the norms of scientific inference and about the reliability of science as a process for gathering knowledge. From a societal perspective, errors undermine the authority and relevance of science in public discussions, and the degree to which policy-makers can trust scientific experts.

The authors use terms like “methodological myopia” and “cargo cult inference” to unmask the pretensions of scientism. They expose how research is really done by fallible humans subject to perverse incentives. One of their preferred solutions is to stop publishing science papers in scientific journals! Why? Journal editors act as gatekeepers, they say, and peer review is often biased, making peer reviewers partners in the crime of gatekeeping.

As proponents of Open Science, which in recent years has become a movement in Europe and is coming to America, they think that major changes in how science is done are both “necessary and inevitable.”

Now watch. Some Twitter/X blowtorch will accuse us of being “science deniers” or simpletons who “don’t understand science.” Hey, we say. Read the quotes from these articles. Read that last paper especially. We didn’t point out these weaknesses. They did.

Another way to know that Big Science and Big Media have lost the trust of the public is to watch their political bias. Surveys have shown that university faculty and deans are overwhelmingly Democrats with TDS (Trump Derangement Syndrome), leftist in their politics and cultural values. As we have shown repeatedly (14 Oct 2020, 3 Aug 2023), the “science news” wires almost unanimously endorse every leftist position on everything (abortion, guns, climate change, gender ideology, socialism, communism—you name it). And above all, they are anti-creationist, anti-Christian Darwin bigots. Why do you think that is? The scientific method? Get real.

Big Science and Big Media have become propaganda arms for the Left. That’s why there is a crisis of trust.
