Big Science Struggling to Regain Credibility
Peer review is under attack, with new moves to combat fraud and special interests through integrity and transparency. But where do those virtues come from?
Big Science remains in crisis. Phys.org reports on a study that found “More than a quarter of biomedical scientific papers may utilise practices that distort the interpretation of results or mislead readers so that results are viewed more favourably.” That has certainly been our experience at CEH, daily watching the press releases emanating from university PR departments, where the name of the game is to make your scientist look good no matter how questionable the findings. Public acceptance of scientific claims tracks political party affiliation to a remarkable degree. Allegations of conflict of interest, peer pressure and funding bias are rife. What has happened to the presumptive authority of science, which claims to seek objective knowledge for its own sake?
The situation recalls the words of Lincoln as he chastised Congress about a union falling apart:
The dogmas of the quiet past are inadequate to the stormy present. The occasion is piled high with difficulty, and we must rise with the occasion. As our case is new, so we must think anew and act anew.
In science, one of the dogmas is the notion that peer review somehow guarantees objectivity. And yet Phil Hurst, writing at Phys.org, portrays modern peer review as a domain of darkness. These factors are corrupting this pillar of scientific authority:
- Secrecy corrupts
- Politicization of science destroys objectivity
- Journal paywalls bar stakeholders from access to the product
- Research misconduct escapes reviewer scrutiny
After recounting the history of peer review back to the days of the Royal Society in 1832, when members moved from publishing minutes of their meetings to having reviewers write reports about what should be published, Hurst echoes Lincoln that the dogmas of the quiet past are inadequate to the stormy present. The occasion is piled high with difficulty. We must think anew and act anew.
It’s time to disinfect Big Science with sunshine, Hurst argues. Transparency is the new buzzword. Transparency, implying open peer review, opens the windows on secret cabals of reviewers and lets everyone see what is going on in the sausage-making called science.
In 2014, the Royal Society launched the journal Royal Society Open Science, which offers optional open peer review in which the reports are published alongside the articles. This has proved popular, with the majority of authors opting for publication of peer review reports and half of reviewers signing their reports. The uptake varies by scientific discipline.
Hurst lists four benefits of open peer review:
- Readers can see the comments by reviewers and reach their own conclusions about the rigour and fairness of the process.
- Reviewers’ suggestions to improve the paper are available to everyone as examples of what makes a good review.
- Reviewers tend to write better and more balanced reviews if they know they will be made public.
- By signing their reports, reviewers can get recognition for this vital contribution to the research process.
Overall, he says, “the whole peer review process gains more trust and accountability when everything is transparent.” But will open peer review be a passing fad? As we shall see, transparency alone cannot guarantee objectivity.
Sheryl P. Denker also calls for transparency in a PLoS Blog entry. She says there is “community and public skepticism regarding the quality, trustworthiness and authenticity of the review process, from the initial stage of evaluation before reviewer assignment to the final editorial decision. Making peer review more transparent, at any stage, has the potential to revitalize the process and restore trust in the system.” She lists practical steps that journals and reviewers can take to increase transparency. Reviewers, for instance, can agree to sign their reviews (a radical change from the secrecy of old). But will this create other problems? Denker and Hurst seem to see transparency through rose-colored glasses; knowing human nature, though, we expect every solution to breed new problems.
Measures of Significance
Another dogma of the quiet past is the P-value, a traditional measure of statistical significance. By habit, scientists require a P-value of .05 (5%) or lower to judge a result as statistically significant against the null hypothesis. But why? What is sacred about that tradition? Nothing, it turns out, and scientists have been known to keep testing an experiment until they get the P-value they want to confirm their hunch. Nature writes about an effort to raise the bar, but reports that scientists are fighting back. Some object that the “one-size-fits-all” measure fails to take into account differences between the sciences. At this moment, sacred P-values are falling faster than statues of Confederate generals. A majority of scientists think the bar needs to be more stringent.
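To see why “testing until significant” is a problem, here is a minimal simulation sketch. It is our own illustration, not taken from the Nature report, and the batch sizes and number of re-tests are hypothetical choices. Even when no real effect exists, peeking at the data after every batch and stopping at the first P-value below .05 yields “significant” results far more often than the nominal 5%.

```python
# A minimal sketch (hypothetical parameters) of how optional stopping inflates
# false positives. Both groups are drawn from the SAME distribution, so any
# "significant" difference is a false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ALPHA = 0.05      # the traditional significance threshold
BATCH = 10        # observations added per group before each new "peek"
MAX_PEEKS = 10    # how many times the experimenter re-tests the same study
TRIALS = 5000     # number of simulated studies

false_positives = 0
for _ in range(TRIALS):
    a, b = [], []
    for _ in range(MAX_PEEKS):
        a.extend(rng.normal(0, 1, BATCH))   # group A: no real effect
        b.extend(rng.normal(0, 1, BATCH))   # group B: no real effect
        _, p = stats.ttest_ind(a, b)
        if p < ALPHA:                       # stop as soon as the result "looks good"
            false_positives += 1
            break

print(f"A single pre-planned test is wrong about {ALPHA:.0%} of the time.")
print(f"Test-until-significant reached 'significance' in "
      f"{false_positives / TRIALS:.0%} of studies with no real effect.")
```

That is the statistical case behind calls for a stricter threshold: the advertised 5% error rate only holds when the test is run once, as planned, not re-run until the desired answer appears.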
Politicization of Science
By popular misconception, Republicans are the science deniers. Not so, says Phys.org; science denial is not limited to the political right. According to a study at the University of Illinois at Chicago, people of all political backgrounds are equally tempted:
“Not only were both sides equally likely to seek out attitude confirming scientific conclusions, both were also willing to work harder and longer when doing so got them to a conclusion that fit with their existing attitudes,” says Washburn, the lead author of the study. “And when the correct interpretation of the results did not confirm participants’ attitudes, they were more likely to view the researchers involved with the study as less trustworthy, less knowledgeable, and disagreed with their conclusions more.”
By extension, this propensity afflicts scientists themselves. This explains why academia, so lopsided toward the Democrat party, produces members of scientific institutions whose own confirmation bias propels them to affirm the consensus of their peers. Their work can be motivated by feelings that have nothing to do with science. “Rather than strictly a conservative phenomenon, science denial may be a result of a more basic desire of people wanting to see the world in ways that fit with their personal preferences, political or otherwise, according to the researchers.” That’s a human foible against which every person must struggle, scientist or not.
Drummond and Fischhoff, writing in PNAS, claim that the polarization over science is not a matter of scientific knowledge. In fact, “Individuals with greater science literacy and education have more polarized beliefs on controversial science topics,” they say. Now isn’t that counter-intuitive! “These patterns suggest that scientific knowledge may facilitate defending positions motivated by nonscientific concerns.”
Integrity Is Not a Scientific Question
Three scientists writing in to Science Magazine make the preposterous suggestion of “Addressing scientific integrity scientifically.” Preposterous, we say, because it leads to an infinite regress. What about the integrity of the researchers testing integrity? What about the reviewers checking their work? Who watches the watchers? Who watches the watcher-watchers? etc. Watch it here: “The premise behind this effort is that universities should practice what they preach by supporting the development and adoption of evidence-based policies aimed at improving integrity in research.” Once universities can fake that, they’ve got it made.
Escape to Reality
Science Magazine printed testimonials of three scientists who searched for “Sunshine outside the ivory tower.” They now call themselves “Recovering Academics” and shared similar emotional challenges. “Over the past few years, all three of us have left academia,” they agree, before describing their individual situations. “It was the right decision for each of us, but we still struggled with uncertainty and a feeling of failure, and we could find little community support.” One felt like “I had lost my tribe” but, after a while, she acclimated. Their descriptions mirror the experiences of ex-cult members and drug-rehab patients, suggesting that the culture of science gets a grip on people’s minds. Each one struggled with depression, a sense of failure, and a loss of community.
Science is not an abstract, objective thing. It is always mediated through humans. People come into science with biases, expectations, and preferences. Hang out with liberal academics, and you will want to be like them. Hang out with superiors who cheat, and you will tend to excuse misconduct. Feel the allure of funding, and you will be tempted to bend your convictions to get that lifeblood of job security. It takes firm self-control and independence of mind to fight those tendencies.
There are many good individual scientists who have integrity; we don’t tar them all with a broad brush. However, it is scientists themselves who are pointing out these issues from the inside. We dare not assume a simplistic, 1950s-era mindset about scientific objectivity, gazing at Big Science like a Disneyland of wonders. Inside that white lab coat is a person with feelings, dreams, biases and a human soul. Maybe the best scientists are those who, like James Joule, are independently wealthy, work alone, and experiment for the sheer satisfaction of their curiosity about how the world works. Unfortunately, you can’t build a Large Hadron Collider or a spacecraft that way. So while admiring good science, we must always be cautious about bad science.
The best way to get scientists of integrity is to build the fear of God into them when they are young, teaching them the Ten Commandments. Even better is to mature them into those with the love of God, who, with the law of God written on their hearts, pursue truth and righteousness because they love those virtues.