February 8, 2016 | David F. Coppedge

If You Can't Trust Scientists, You Can't Trust Science

Science may be “out there” in the world, but its discoveries are mediated by fallible scientists.

“We can’t trust common sense but we can trust science.” That’s Peter Ellerton’s message on The Conversation. Ellerton, a lecturer in critical thinking at the University of Queensland, relays the typical triumphalist view of science as the rational alternative to intuition: “science is not about common sense,” he intones. Our intuitions don’t apply in quantum mechanics, nor in what “feels right” about reality. It’s instructive that two examples he gives of “common sense” being wrong are opposition to gay marriage and unbelief in man-caused climate change. So while he warns of “cognitive biases,” did he warn himself?

Ellerton finds strength in numbers. “In science, the highest unit of cognition is not the individual, it is the community of scientific enquiry…. We are smarter together than we are individually, and perhaps that’s just common sense.” Let’s see if that holds up under scrutiny, based on some recent headlines. Does the scientific community deserve our unqualified trust?

“How did that make it through peer review?” In this PLoS Blog, vertebrate paleontologist Andrew Farke is slightly more cynical (realistic?) about collective wisdom in science. From his years of experience as a researcher, reviewer and editor, Farke exposes the sausage-making that goes on in the back rooms of journal companies over peer review, that assumed gold standard of scientific self-correction and trustworthiness. Among the real-world foibles he discusses are shortcomings and bad tendencies all human beings know too well:

  1. Editorial incompetence
  2. Editor’s lack of expertise in the subject
  3. Reviewer lack of expertise in the subject
  4. Reviewer laziness or sloppiness
  5. Reviewer haste
  6. Authorial shenanigans
  7. Author ignoring the reviewers’ comments
  8. Reviewer a personal friend of an author
  9. Wide differences in responses from different reviewers
  10. Egregious errors missed by reviewers
  11. Complexity of the subject matter that few reviewers understand
  12. Multi-disciplinary nature of the subject that no one reviewer can fathom
  13. Much peer review involves one editor and 2 to 4 reviewers
  14. Distracting figures used just to break up long blocks of text
  15. Inability to check original materials, such as fossils in a distant museum
  16. Some journals more lax in editorial policies than others
  17. Taxonomic scoring of characters in phylogenetic analyses is rarely checked (too complex)
  18. Reviewer doesn’t care about review work

Farke’s bottom line is, “Reviewers and editors are human. Peer review isn’t perfect. Mistakes will make it into the permanent literature, even under the best of circumstances. A more open peer review process is one way forward.” He advocates more transparency using social media, although he still believes pre-publication peer review has value.

“Reproducibility: A tragedy of errors.” That’s the title of an article in Nature, and Andrew Farke takes some satisfaction that the leading journal in the world pretty much agrees with his analysis: “Today Nature published a piece that touches on many of the same issues. It’s well worth checking out!” he ends. In this article, Allison, Brown, George and Kaiser ask, “Just how error-prone and self-correcting is science? We have spent the past 18 months getting a sense of that.” They start with a good example: independent reviewers whose analysis led an author to retract a paper. “Sadly,” they say, “in our experience, the case is not representative.” They found many papers with “substantial or invalidating errors.” When they took it upon themselves to search for big errors, human weakness reared its lazy head:

After attempting to address more than 25 of these errors with letters to authors or journals, and identifying at least a dozen more, we had to stop — the work took too much of our time. Our efforts revealed invalidating practices that occur repeatedly … and showed how journals and authors react when faced with mistakes that need correction.

To Farke’s list of peer-review foibles, these four authors add more:

  1. Authors who use inappropriate or non-randomization methods despite claiming experiments were randomized
  2. Authors claiming “mathematically or physiologically impossible results”
  3. Mistaken design or analysis of cluster-randomized trials
  4. Miscalculation in meta-analyses
  5. Inappropriate baseline comparisons
  6. Inconsistent post-publication peer review that doesn’t catch errors
  7. Thinking sincerity is enough
  8. Journals that disincentivize self-correction, like charging an author $10,000 to initiate a retraction
  9. Editors who are often unable or reluctant to take speedy and appropriate action
  10. Journals that don’t make it clear where to send expressions of concern
  11. Journals that acknowledge invalidating errors but are unwilling to issue retractions
  12. Policies that expect one author to correct another co-author’s mistakes
  13. No standard mechanism to request raw data on which a paper is based
  14. Editors that overlook informal expressions of concern
  15. Editors that delay responding to expressions of concern

“The scientific community must improve,” the authors warn. One commenter probably didn’t make Nature happy with this embarrassing anecdote:

Ironically, Nature itself is not immune to this phenomenon. I tried to get recognition for the following obvious mathematical flaws in a 2009 Nature article: [citation]. After years, I have given up.

“Make journals report clinical trials properly.” In an editorial in Nature, Ben Goldacre is fed up with “troubling” trends in science. “There is no excuse for the shoddy practice of allowing researchers to change outcomes and goals without saying so,” this member of the Centre for Evidence-Based Medicine says. Having a Scientific Method is not enough, he argues, nor are Codes of Conduct. There needs to be integrity at both the personal and institutional level.

Science is in flux. The basics of a rigorous scientific method were worked out many years ago, but there is now growing concern about systematic structural flaws that undermine the integrity of published data: selective publication, inadequate descriptions of study methods that block efforts at replication, and data dredging through undisclosed use of multiple analytical strategies. Problems such as these undermine the integrity of published data and increase the risk of exaggerated or even false-positive findings, leading collectively to the ‘replication crisis’….

You might think that this problem is so obvious that it would already be competently managed by researchers and journals. But that is not the case. Repeatedly, academic papers have been published showing that outcome-switching is highly prevalent, and that such switches often lead to more favourable statistically significant results being reported instead. This is despite numerous codes of conduct set up to prevent such switching, most notably the widely respected CONSORT guidelines, which require reporting of all pre-specified outcomes and an explanation for any changes. Almost all major medical journals supposedly endorse these guidelines, and yet we know that undisclosed outcome-switching persists.

“Strength in members.” With these concerns aired vociferously by those who compare science ideals with actual practice, what’s the mood at the Editor level? How are things over at the AAAS, publisher of Science? After reading all the above, it’s a little disconcerting to hear Editor Rush Holt speak triumphantly about how well his organization is doing, especially after his controversial appointment by Obama led to charges of polarizing science politically (see 2/22/15). The staunch Democrat doesn’t help matters when he includes, among the AAAS’s list of great achievements, the following: “from funding the legal defense in the 1925 Scopes trial on teaching evolution, to challenging today’s politically motivated interference in research in climate change and in social and behavioral sciences, and denouncing legislated restrictions on the study of gun violence as a public health issue.” Incidentally, regarding climate change, Science Daily just pointed out that assumptions about tree-ring dating to infer past climate failed to take uncertainties into account. “This suggests that there is less certainty than implied by a reconstruction developed using any one set of assumptions.”

“False positives are statistically inevitable.” In a letter to Science, a statistician from Virginia Polytechnic hails the trend toward reproducibility, but cautions that “false positives are statistically inevitable” given the number of papers published each year. Even in the best papers, written with the best of intentions, following all the proper protocols, R. D. Fricker, Jr. says it’s a mathematical certainty that some papers will show statistically significant results, yet be false – even though the results are reproduced by others.

Reproducibility is clearly important, and we should support and encourage those who promote it—across all fields, not just psychology—as a crucial part of the scientific enterprise. In particular, moving away from publication standards based solely on the statistical significance of a single experiment or a single set of observed data to those based on evidence that observed results can be reproduced is a critical change that we must make in the academic publishing culture. However, we must also recognize that, even within the most careful and rigorous experimental framework, erroneous conclusions are always possible. We should thus always maintain a healthy skepticism when assessing study results.

He calculates that 3,370 papers a year could have “confirmed” spurious results.
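Fricker’s own inputs aren’t given here, but the arithmetic behind this kind of estimate is straightforward. The sketch below uses purely hypothetical numbers (the study counts and significance threshold are illustrative assumptions, not figures from his letter) to show why some false positives, and even some chance “replications” of them, are a mathematical certainty:

```python
# Hedged sketch: why false positives are statistically inevitable.
# All inputs are hypothetical illustrations, not Fricker's actual figures.

def expected_false_positives(n_null_studies: int, alpha: float) -> float:
    """Studies testing a true null hypothesis still reach significance
    by chance at rate alpha, so the expected count is simply n * alpha."""
    return n_null_studies * alpha

def chance_replication_prob(alpha: float, n_replications: int) -> float:
    """Probability that independent replications of a spurious finding
    all reach significance by chance alone: alpha ** n_replications."""
    return alpha ** n_replications

# Hypothetical: 100,000 published studies whose tested effect is in fact
# absent, each using the conventional significance threshold of 0.05.
fp = expected_false_positives(100_000, 0.05)
print(f"Expected spurious 'significant' results: {fp:.0f}")

# Even a successful replication can be coincidence:
p = chance_replication_prob(0.05, 2)
print(f"Chance a false positive is 'confirmed' twice by luck: {p:.4f}")
```

The point of the second function is the one Fricker presses: reproduction shrinks the odds of a fluke but never drives them to zero, so with enough papers published each year, some reproduced-yet-false results are guaranteed.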

For you atheists who read CEH, who worship at the halls of science and wrongly think we hate science, did you notice who wrote these warnings? They come from some of the leading science journals in the world. They’re not basing their concerns on religion, but they are truly worried about bad behavior by scientists, and even bad results from good scientists. The bad practices are widespread, they say; “the scientific community must improve.”

Without doubt there are many excellent, honest scientists doing great work from the purest motives, who are careful, self-critical, and as unbiased as any human can be. But these concerns about the “scientific community” are non-trivial. Pressures on even the honest scientists to fudge or be careless, whether from rivalry, career pressure, desire for funding, or simple fatigue, are real. Errors creeping into published “confirmed” results cannot be eliminated.

We see here again, too, as we have documented often, that Big Science is strongly biased to the political left (10/14/10, 7/16/12, 3/22/13, 6/26/13, 12/10/13, 2/28/14, 5/31/14, 12/07/14), and their enablers in Big Media deliver that ideology uncritically to the public (8/16/15, 1/11/16). Ellerton showed it in his bias toward gay marriage and climate change. Rush Holt showed it in his praise of funding the Scopes Trial and other leftist causes. Is he unaware that the Scopes Trial was one of the most egregiously misreported judicial events of the 20th century? The behavior of Darrow, his science advisers and pro-Scopes reporters was shameful. Holt should be embarrassed, but instead he’s proud that the AAAS funded that circus! If he really believed in academic freedom in science education, he would support honest teaching about Darwinism in this topsy-turvy world where the plaintiffs have become the bigots, refusing to permit critics of Darwinism to have a voice, forcing taxpayers to be indoctrinated with one-sided, sanitized propaganda.

We cannot stress enough that science cannot function without morality. Science is not “out there” for anyone to latch onto using some kind of reliable method. Brute facts are facts, but humans are only human; they can only access facts through fallible senses and fallen natures. No amount of policy can overcome evil hearts. If anyone thinks peer review protects against flawed findings, who watches the watchers? They’re “only human” too. All intellectual endeavor breaks down without integrity.

Peer review can help, both pre-publication and post-publication. Transparency and social media are beginning to open up the sausage factory to the light. But these are mere sieves. The best hope is changed hearts: scientists who love truth and live with internalized self-control through divine power and enablement. That comes about by repentance and trust in the only way our Creator provided for righteousness.

