December 29, 2011 | David F. Coppedge

Chinks in the Scientific Method

V & V.  That’s shorthand in project design for “validation and verification.”  Does the scientific method provide V & V?  We are all taught to think that peer review, publication and replication help science to be self-checking, so as to avoid error.  Some recent articles show that ain’t necessarily so.  It may sound good in theory, but in practice, the ideal doesn’t always match the real.

Publish and perish:  In Nature (480, 22 December 2011, pp. 449-450, doi:10.1038/480449a), Adam Marcus and Ivan Oransky reminded readers of the world’s premier science journal that in science publishing, “The paper is not sacred.”  Peer review needs to continue long after a paper appears in print, they argued.  Their concern was prompted by a 15-fold increase in the number of retractions over the last decade, a period during which the number of papers published grew by only 50%.  This is not necessarily bad, Marcus and Oransky continued, because it indicates corrections are being made.  But what about bad papers that don’t get retracted?  They pointed to disturbing cases in which journal editors failed to check peer review properly, sometimes letting papers with “massive” numbers of errors into print, under the excuse that peer review is supposed to be secretive.  Often readers are given no explanation for a retraction other than, “This paper has been withdrawn by the authors.”  Notice how extensive the problem is in their words:

Editors have many reasons to pay more attention to retraction and correction notices. For one, scientists often cite papers after they’ve been retracted, and a clear, unambiguous note explaining why the findings are no longer valid might help to reduce that. But, more importantly, a vaguely worded note that includes further claims from researchers whose work has been seriously questioned, in turn raises questions about the integrity of the journal itself, and about the overall scientific record.

Marcus and Oransky pointed to new online methods that might reduce the number of mistakes making their way into the corpus of “scientific knowledge”—even the radical idea that the new methods may reduce the publication of scientific papers in journals.  But their article raises other serious questions.  Since World War II we have been led to believe that peer review provided the V & V science needed.  How do we know that new, untested methods will do better?  To what extent are mistakes entering the corpus because of peer pressure instead of peer review  – the demands of universities to measure a scientist’s performance by how much he or she publishes?  How can scientists keep up with the growing volume of publications?  They raised additional questions:

There are other hurdles. How should scientists treat papers that are hardly read, so are never evaluated post-publication? Does a lack of comment mean that the findings and conclusions are extremely robust, or that no one has cared enough to check? Including readership metrics alongside comments should help here.

The authors could only hope that additional scrutiny and new methods will “make the scientific record more self-correcting.”  That implies that the self-correcting nature of science we have been trusting is not doing a very good job.

Replicate and perish:  In theory, scientific errors are caught because other scientists try to replicate the experiment.  This may have worked for high-profile claims like cold fusion, but how would someone replicate a discovery of the Higgs boson without a second Large Hadron Collider?  Earlier this month, Science Magazine printed a special series on replication.  In the introductory article, “Again and Again, and Again” (Science, 2 December 2011: Vol. 334 no. 6060 p. 1225, doi: 10.1126/science.334.6060.1225), Jasny, Chin, Chong and Vignieri began, “Replication—The confirmation of results and conclusions from one study obtained independently in another—is considered the scientific gold standard.”  That’s the theory.  In practice, they found enough dross in the crucible to be worried: “New tools and technologies, massive amounts of data, long-term studies, interdisciplinary approaches, and the complexity of the questions being asked are complicating replication efforts, as are increased pressures on scientists to advance their research.”

The series of articles that followed showed why replication is often unreachable in the real world.  How do you get a rare animal, say an ivory-billed woodpecker (or a Loch Ness monster, for that matter), to appear on cue, so that an observation can be replicated?  Unique experiences in the field challenge the gold standard: “although laboratory research allows for the specification of experimental conditions, the conclusions may not apply to the real world,” they said.  Consider, also, the difficulty of replicating medical tests, which might involve thousands of patients in longitudinal studies lasting years. 

Other questions the authors did not mention could be asked.  To what extent does a shared paradigm, or shared beliefs, decrease the motivation to attempt to replicate a popular result?  Remember the recent decade-long fraud by psychology superstar Diederik Stapel (11/16/2011, 11/05/2011).  More significantly, if science cannot live up to its own ideals of peer review and replication, what right does it have to claim epistemic superiority over other departments in the academy?

Reduce and perish:  How big does a sample have to be to arrive at a sound conclusion?  That’s what Medical Xpress asked in an article, “The perils of bite-size science.”  Two psychologists are worried about a trend toward shorter papers and smaller samples (a concern applicable to any scientific field, not just psychology).  Yes, people may enjoy reading shorter papers—but now there are more of them, and publishers have to do more work, contrary to their hope that word limits would simplify things.  Worse, since small sample sizes can lead to false positives and wrong conclusions, “two short papers do not equal twice the scientific value of a longer one,” the researchers argued.  “Indeed, they might add up to less.”
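To see why small samples are such a trap, here is a toy simulation (ours, not the psychologists’; it assumes Python with NumPy and SciPy, and the sample size and effect size are purely illustrative).  It runs thousands of small two-group studies of a genuinely tiny effect and keeps only those that clear the conventional p < 0.05 bar, as a journal hungry for positive results would.  The surviving studies overstate the true effect several-fold:

```python
# Toy sketch (not from the paper under discussion): small samples plus
# selective reporting of "significant" results inflate published effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.2        # small true group difference, in standard-deviation units
n_per_group = 15         # a "bite-size" sample
n_studies = 10_000

published_effects = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:                      # only "significant" studies get reported
        published_effects.append(treated.mean() - control.mean())

print(f"true effect:              {true_effect}")
print(f"studies reaching p < .05: {len(published_effects) / n_studies:.1%}")
print(f"mean published effect:    {np.mean(published_effects):.2f}")  # several times the truth
```

Nothing in the sketch is specific to psychology; whenever underpowered studies meet a publication filter, the record fills with exaggerated or outright false positives, which is exactly the danger the two psychologists describe.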

Yet the psychologists’ implicit contention that longer, more detailed papers are more reliable may not be true.  In fact, they pointed to other factors that can undermine the credibility of any paper, short or long.  Consider these three steps to misinformation: (1) “surprising, ‘novel’ results are exactly what editors find exciting and newsworthy and what even the best journals seek to publish”; (2) “The mainstream media pick up the ‘hot’ stories”; (3) “And the wrong results proliferate.”  The trend toward bite-size science is leading scientists away from the healthy skepticism on which science depends, the authors believe.

Form a consensus and perish:  Scientists like to be objective, not subjective.  But Andrew Curtis (U. of Edinburgh) argues that science cannot rid itself of subjectivity.  In his essay “The Science of Subjectivity,” published in the journal Geology (open access, Geology v. 40, no. 1, pp. 95-96, doi: 10.1130/focus012012.1), he reminded geologists that subjectivity is built into the scientific method:

While the evidence-based approach of science is lauded for introducing objectivity to processes of investigation, the role of subjectivity in science is less often highlighted in scientific literature. Nevertheless, the scientific method comprises at least two components: forming hypotheses, and collecting data to substantiate or refute each hypothesis (Descartes’ 1637 discourse [Olscamp, 1965]). A hypothesis is a conjecture of a new theory that derives from, but by definition is unproven by, known laws, rules, or existing observations. Hypotheses are always made by one individual or by a limited group of scientists, and are therefore subjective—based on the prior experience and processes of reason employed by those individuals, rather than solely on objective external process. Such subjectivity and concomitant uncertainty lead to competing theories that are subsequently pared down as some are proved to be incompatible with new observations.

Curtis presented a fairly positivist view that science will guide itself from the subjective to the objective.  Subjectivity can even be good for science.  “Allowing subjectivity is a positive aspect of the scientific method: it allows for leaps of faith which occasionally lead to spell-binding proposals that prove to be valid,” for instance.  (He did not provide statistics of valid vs. nutty spell-binding proposals).  But he cautioned readers to realize that even quasi-objective methods, like the popular Bayesian analysis, have built-in subjective aspects.
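To make the point about Bayesian analysis concrete (the worked example below is a standard textbook illustration of our own choosing, not Curtis’s): the analyst must choose a prior distribution before seeing the data, and that choice is a subjective input the data never fully erase.  For a proportion θ with a Beta(α, β) prior and k successes observed in n trials, the posterior mean is

$$\mathbb{E}[\theta \mid k, n] \;=\; \frac{\alpha + k}{\alpha + \beta + n}.$$

With k = 7 successes in n = 10 trials, an analyst using a flat Beta(1, 1) prior reports a posterior mean of 8/12 ≈ 0.67, while a more sceptical colleague using a Beta(10, 10) prior reports 17/30 ≈ 0.57.  Same data, different conclusions; the difference comes entirely from the subjective choice of prior.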

A study of how geologists arrived at a consensus pointed to the influence of group dynamics: geologists were swayed into changing previously firm opinions by interacting with colleagues, and one participant changed his mind twice because of what the group did.  Curtis cited several other studies illustrating similar group dynamics at work.  What is the upshot?

The above studies significantly influence the way one should interpret consensus-driven results. Consensus positions clearly may only represent the group opinion at one instant in time, and may not represent the true range of uncertainty about the issue at hand (e.g., Fig. 1C). This is disturbing because consensus is often used in the geosciences.

As an example, he pointed to climate change: “IPCC conclusions are all consensus driven—positions agreed between groups of scientists.”  While consensus formation may soften the bias of the overconfident, “the group consensus approach may also introduce dynamic biases … which are more difficult to detect without tracking the dynamics of opinion.”  What this means is that the herd mentality operates even in scientific meetings.  It takes courage to be a lone ranger, but the maverick might be right.

Better late than never?  Sigmund Freud is a fallen superstar, once exalted within the triumvirate of modern movers along with Marx and Darwin.  He has even been compared to Copernicus.  His theory of psychoanalysis spawned a whole industry of couch-side therapists, using Freud’s new vocabulary that lent scientific credibility to his ideas.  Guess what: psychoanalysis never existed.  That’s what New Scientist reported, based on new revelations that have come to light in The Freud Files:

The Freud Archives, a collection of letters and papers, were deposited at the US Library of Congress by Freud’s daughter, Anna, to put them out of reach of unofficial biographers. This move also locked away Freud’s patients’ versions of their own problems.

But now, as primary material is made public, parts of the archive are declassified and his letters re-edited without censorship, the legend is “fraying from all sides”.

Freud was a legend in his time, and apparently a legend in his own mind.  This should sound alarm bells.  How could a large portion of academia be duped for so long?  What legends are we following today that will be exposed as tomorrow’s frauds?

Science for dummies:  In a strange paper that sounds like a script for Revenge of the Nincompoops,  Peter Fiske invited the scientific community to “Unleash Your Inner Dummy.”  That’s right; in Nature itself (Nature 480, 7 December 2011, p. 281, doi:10.1038/nj7376-281a), he argued that “There is something to be said for letting go of the mantle of expert.”  Intelligence, intellect, and prestige are valued in academia, but nincompoops have all the fun:

Ironically, always playing the expert can be limiting, in terms of both contributions to science and career options. Sometimes, playing the dummy can be liberating and help to reveal opportunities that would otherwise have been overlooked. Dummies ask questions that experts assume were answered long ago. Dummies explore subject areas in which they lack knowledge. Dummies listen more and talk less.

The mantle of expertise, in other words, can be a choke rag.  Loosen up, he says, and ask the dumb questions.  It’s OK to kick a sleeping dogma:

Becoming a dummy frees you from dogma. Developing expertise can often mean ingesting unquestioned assumptions and accepted facts. Such received beliefs can lead to unchallenged group decision-making and prevent a community from recognizing a path-breaking discovery — especially when it comes from someone outside the discipline.

What a radical concept.  Could it be that the next great idea will come from a dummy, someone not tied to the paradigm?  It’s happened.  Moreover, Fiske argues, “Embracing your inner dummy is also a powerful tool for communicating science.”  Scientists in the role of expert talk down to the public and think all they need is facts, when maybe it would be good for them to humble themselves and “seek to understand the audience’s cultural and ethical perspectives.”  Let’s hear it for thinking outside the box.

This journey into the engine room of science has been brought to you by the dummies at Creation-Evolution Headlines, who are too stupid to realize that evolution is a fact, because the scientific consensus says so.  But oh, do we have more fun.  Come out, come out, ye Darwin Dogmatists, and see the beauty of the cultural and ethical perspectives.  Loosen the tie that binds you to the consensus.  Ask the dumb questions.  Do some peer review on peer review.  Check to see if peer pressure is undermining the pier on which the amusement park of science sits.  Exercise your autonomy: doubt a publication, question a Project Scientist, vote against the crowd.  Trust not in a flawed human enterprise.  Freud has fallen.  Marx has fallen.  Darwin is next.  Turn in your false gods for a true One.  Recognize that while logical thinking, clarity and accuracy are noble traits, they are not the exclusive property of scientists – a word coined in 1833 by William Whewell for natural philosophers, ostensibly to energize their group dynamics, but one that has produced an elitist class of self-proclaimed experts who know more and more about less and less until they know absolutely everything about nothing that really matters.  You matter more than matter.  It’s all about soul – the soul of science, which is faith in a unified, sensible, created order that points to its Source.  Become a dummy in the world’s eyes, that you may begin to become truly wise (I Corinthians 2).

 



Comments

  • D says:

    Guess I am just another one of those “dummys” without any advanced degree to add to my resume.

    Thank God that He gave me enough faith to read His Word and believe what He tells me, not what some educated idiot says.

    May Rom 1:20 live forever in my mind!!
