Scientists Blind to Their Failings
Scientism sounds appealing in theory. In practice, human scientists fall short of its ideals of enlightenment, progress and understanding.
The new Ioannidis study. John P. A. Ioannidis has made waves with his studies of scientific bias (see 1/11/17). In the Proceedings of the National Academy of Sciences (March 20), his team published results after they “probed for multiple bias-related patterns in a large random sample of meta-analyses taken from all disciplines,” in order to address the widely reported ‘reproducibility crisis’ in science. Their findings are partly encouraging, but they point to factors that contribute to a lack of trust:
The magnitude of these biases varied widely across fields and was on average relatively small. However, we consistently observed that small, early, highly cited studies published in peer-reviewed journals were likely to overestimate effects. We found little evidence that these biases were related to scientific productivity, and we found no difference between biases in male and female researchers. However, a scientist’s early-career status, isolation, and lack of scientific integrity might be significant risk factors for producing unreliable results.
The team found “Systematic differences in the risk of bias between physical, biological, and social sciences,” with the latter being worse. But it’s not clear this “bird’s-eye view” found all the bias that exists; “future research will need to determine whether and to what extent these trends might reflect changes in meta-analytical methods, rather than an actual worsening of research practices.”
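The finding that “small, early, highly cited studies … were likely to overestimate effects” has a well-known statistical mechanism: when only studies that cross a significance threshold get attention, the surviving estimates are inflated (the so-called winner’s curse). The sketch below is illustrative only, not from the Ioannidis paper; the effect size, sample size, and threshold are assumed values chosen to make the pattern visible.

```python
import random
import statistics

random.seed(1)

# Illustrative assumptions (not from the Ioannidis study):
true_effect = 0.2      # true mean difference, in standard-deviation units
n_per_study = 20       # a deliberately small study
num_studies = 2000     # many independent small studies

significant_estimates = []
for _ in range(num_studies):
    sample = [random.gauss(true_effect, 1.0) for _ in range(n_per_study)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / n_per_study ** 0.5
    if mean / se > 1.96:  # crude one-sided "significance" filter
        significant_estimates.append(mean)

avg_sig = statistics.fmean(significant_estimates)
print(f"True effect: {true_effect}")
print(f"Average effect among 'significant' small studies: {avg_sig:.2f}")
```

Averaging only the studies that cleared the filter yields an estimate well above the true effect, which is one reason small, early, highly cited studies look stronger than later replications.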
Reproducibility crisis redux. In plain English, three analysts discuss “The science ‘reproducibility crisis’ – and what can be done about it” at The Conversation. Danny Kingsley thinks that the move toward ‘open research’ will help reduce some of the personality factors that rush scientists to publish, such as the desire to have priority. Jim Grange, a psychologist, says “To me, it is clear that there is a reproducibility crisis in psychological science, and across all sciences.” He thinks his field is getting better at removing bias, but is not ‘out of the woods’ yet. Ottoline Leyser thinks that publication practices should take a lot of the blame for the “current destructive culture” that rushes bad science to print.
Cancer care. “Remember why we work on cancer,” Levi Garraway pleads in Nature. He knows from experience how the motivation to publish “high-impact papers” can go awry if a researcher does not consider whether the results are reproducible. Often, when they are not, other factors come to light, to the embarrassment of the researcher, the publisher, and the reputation of science. He wouldn’t be writing about the need to follow the 3 R’s, “Rigor, Reproducibility and Robustness,” if there weren’t a problem.
Opening the Gates. Speaking of open research, the Gates Foundation, the global health charity founded by Bill and Melinda Gates (Microsoft), has announced its own open-access publishing venture. Unable to get their thousands of research papers published in conventional channels because of the Gates Foundation’s stringent open-access policy, they are going to publish their own. They’ll be going about peer review in a different way: “Papers are peer-reviewed after publication, and the reviews and the names of their authors are published alongside.” Nature doesn’t seem to have a problem with this, showing that scientific practices are not set in stone. Indeed, fossilized tradition is blameworthy. “We believe that published research resulting from our funding should be promptly and broadly disseminated,” says Callahan. “Our research saves lives.” That says something disturbing about conventional practices up till now.
Open science revolution. When institutions are pushing for ‘open science’, is that not presupposing that science has been closed for decades? In Nature’s comment article, “Five ways consortia can catalyse open science,” 19 academicians make the case for disinfecting scientific practice with the transparency of sunshine. To do this, they will have to break open encrusted habits about ownership, and get into sharing mode. But it won’t be easy. Believers in scientism need to read this: “As philosopher of science Thomas Kuhn documented more than 50 years ago, the scientific community resists challenges to its orthodoxy.” And you thought only religious institutions used that word. It’s time to expand the role of stakeholders in science, they say, and – imagine this – get the public involved. “Conduct outreach so stakeholders explicitly voice goals and identities,” they advise.
Political cluelessness. Polls supposedly use ‘scientific’ methods to assess the state of the country, but the 2016 election proved they were way off. Why? One statistician, according to Phys.org, faults “conventional wisdom, not data,” for the mistake. The experts in Big Media were simply out of touch with the mood of the country they were measuring. “If you look at public opinion, people weren’t actually all that confident in Clinton’s chances,” Nate Silver said in an interview. “It was the media who were very confident in Clinton’s chances.” Even his polling site, FiveThirtyEight.com, “gave Donald Trump a less than 1 in 3 chance of winning.” News sites don’t understand the relationship between polls and probability, he said, and so they relaxed into non-rigorous, ad hoc reasoning to reinforce their own biases that Clinton would be a shoo-in for election. The data weren’t dead; the fault was in conventional wisdom that was not so wise.
The second part is that there is a certain amount of groupthink. People looking at the polls are mostly in newsrooms in Washington and Boston and New York. These are liberal cities, and so people tend to see evidence (in our view, it was kind of conflicting polling data) as pointing toward a certain thing. People have trouble taking different information about, for example, signs of decline in African-American turnout and reconciling that against supposedly good numbers among Hispanic turnout for Clinton. People weren’t using the more thoughtful sides of their brains; they were using the more emotional sides of their brains.
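Silver’s point about the relationship between polls and probability is easy to demonstrate. The sketch below is illustrative only: it assumes a stand-in upset probability of 0.29 (the article says only “less than 1 in 3,” so the exact figure is an assumption) and simulates many elections to show how often such an event occurs.

```python
import random

random.seed(42)

# Assumed stand-in probability; the article says only "less than 1 in 3".
p_upset = 0.29
trials = 100_000

# Count how often an event with this probability actually happens.
upsets = sum(random.random() < p_upset for _ in range(trials))
frequency = upsets / trials

print(f"Upset frequency over {trials} simulated elections: {frequency:.3f}")
```

An event with a roughly 1-in-3 probability occurs nearly a third of the time; treating such a forecast as a guarantee of the opposite outcome is a misreading of the numbers, not a failure of the data.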
Leftist science. Speaking of liberal locations, academic institutions are known to be hothouses for liberal bias. Gavin Bailey and Chris O’Leary admit it, but then say it’s not necessarily a bad thing. Their headline in The Conversation states, “Yes, academics tend to be left wing – but let’s not exaggerate it.” Nobody is concerned that bankers tend to be right-wing, they argue, but that’s different. Bankers aren’t teaching science. They are not deciding what science is. Aren’t Bailey and O’Leary concerned that the ‘conventional wisdom’ in academia can lead to the same non-rigorous, ad hoc reasoning that shamed the pundits on the election results? Apparently not. They deny that academia falls along deep political divides; “it is unlikely that most academics are extremists, and many won’t be all that politically minded; much like the rest of society,” they conclude. They’re basically rationalizing a very lopsided situation within the ivied walls. Maybe they need to get out of the echo chamber and meet some real folks. The worst bias is not recognizing one’s own bias.
Fake news hall of mirrors. A week prior to April Fool’s Day, National Geographic posted some examples of how gullible people can have their brains tricked by fake news. First example: “How many animals of each kind did Moses put on the ark?” Obviously it was Noah, not Moses. People often accept the first answer that comes to mind, Alexandra Petri writes. But does this kind of gullibility affect scientists? Why would Petri jump to a conclusion about what presidential advisor Kellyanne Conway meant in a widely misinterpreted quote?
We live in a world with many “alternative facts,” which means verifying and fact-checking ourselves and those in our community plays an important role in determining what is real and what is fake.
Petri relies on the reputations of psychologists and sociologists, who, Ioannidis reported, are often the most guilty of scientific bias. She even exonerates them for running a study in which they lied to participants (see 3/15/17). Nowhere in this article does National Geographic look in the mirror and say, “Are we perhaps purveyors of fake news ourselves?”
Offended humans in the Petri dish. Evolutionary anthropologists sometimes think they can just move into a tribal community and treat the people like lab rats, writing up their behaviors as evolutionary adaptations. But all people have human rights and deserve respect. Can the tribespeople reverse roles? Nature says that the San people of South Africa, speakers of a ‘click language,’ decided they’ve had enough of researchers coming in and running roughshod over their feelings and traditions: taking their genomes, calling them ‘Bushmen’ (an offensive name in their culture), and the like. They are the first tribe to draw up a code of ethics for researchers. One can imagine some researchers being shocked at finding out they have been offenders, despite their beliefs about ‘social justice.’ If they really believed in social justice, they would allow the San people to conduct research on scientists, wouldn’t they? One can imagine a possible research paper: “A study on manifestations of the Yoda Complex among western sociologists.”
Lies, damned lies, and statistics. Winnifred Lewis and Cassandra Chapman inform the rest of us how to avoid the “seven deadly sins of statistical misinterpretation” in The Conversation. It’s a good piece with good advice. But nowhere do they indicate that scientists commit these sins, too. Confusing correlation with causation, giving outliers undue weight, exaggerating small differences, neglecting outside factors — these are not unknown problems in published science papers. Just look at the typical paper on phylogeny (example: 130 years of error).
Sorry science. We end with some quotes from Nature’s review of Richard Harris’s new book, Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. Just the title should jolt those who love science and trust its credibility. Reviewer Marcus Munafò begins,
As scientists, we are supposed to be objective and disinterested, careful sifters of evidence. The reality is messier. Our training can give us only so much protection from natural tendencies to see patterns in randomness, respond unconsciously to incentives, and argue forcefully in defence of our own positions, even in the face of mounting contrary evidence. In the competitive crucible of modern science, various perverse incentives conspire to undermine the scientific method, leading to a literature littered with unreliable findings.
This is the conclusion of Rigor Mortis, a wide-ranging critique of the modern biomedical research ecosystem by science journalist Richard Harris. He describes how a growing number of claims over the past decade that many published research findings are false, or at least not as robust as they should be, has led to calls for change, and the birth of a new discipline of metascience.
Metascience is “the scientific study of science itself,” or just philosophy of science. Though Harris focuses on biomedical research, the problems he reports should concern all science. Even if there is a ‘scientific method’, which some philosophers of science doubt, it does no good unless it is followed honestly. So unless and until scientists clean up their act, why should the public listen to the proponents of scientism who exalt science as the most reliable path to enlightenment? Scientists are only human, and humans are biased. Overcoming bias is not a matter of science. It’s a matter of character.
Character requires a moral foundation. A moral foundation must be solid; it cannot evolve. Scientists: you need a solid moral foundation. You need an eternal, unchanging, righteous, just, holy God. There’s only one of those.