Can Scientific Journals Perpetuate False Ideas?
An unusual paper appeared in PNAS this week.1 Four social scientists from Columbia and Yale argued that scientific papers can actually perpetuate false ideas rather than correct them. The abstract says that an influential paper can generate momentum, such that its claims come to be merely cited as fact by subsequent authors:
We analyzed a very large set of molecular interactions that had been derived automatically from biological texts. We found that published statements, regardless of their verity, tend to interfere with interpretation of the subsequent experiments and, therefore, can act as scientific “microparadigms,” similar to dominant scientific theories [Kuhn, T. S. (1996) The Structure of Scientific Revolutions (Univ. Chicago Press, Chicago)]. Using statistical tools, we measured the strength of the influence of a single published statement on subsequent interpretations. We call these measured values the momentums of the published statements and treat separately the majority and minority of conflicting statements about the same molecular event. Our results indicate that, when building biological models based on published experimental data, we may have to treat the data as highly dependent-ordered sequences of statements (i.e., chains of collective reasoning) rather than unordered and independent experimental observations. Furthermore, our computations indicate that our data set can be interpreted in two very different ways (two “alternative universes”): one is an “optimists’ universe” with a very low incidence of false results (<5%), and another is a “pessimists’ universe” with an extraordinarily high rate of false results (>90%). Our computations deem highly unlikely any milder intermediate explanation between these two extremes. (Emphasis added in all quotes.)
In other words, scientists tend to follow bandwagons, and one can either be an optimist that they will get it right most of the time, or a pessimist that they get it wrong most of the time. Either way, the problem arises partly because scientists do not have the resources to study or replicate every experiment, so they tend to trust what is published as authoritative. The volume of published material is daunting: “More than 5 million biomedical research and review articles have been published in the last 10 years,” they said. “Automated analysis and synthesis of the knowledge locked in this literature has emerged as a major challenge in computational biology.” Although new tools for sifting and collecting this information have been designed, what comes out may not always accelerate knowledge toward the truth, but rather maintain inertia against change.
The authors examined millions of statements from scientific texts, then built a mathematical model to study the “large-scale properties of the scientific knowledge-production process”:
We explicitly modeled both the generation of experimental results and the experimenters’ interpretation of their results and found that previously published statements, regardless of whether they are subsequently shown to be true or false, can have a profound effect on interpretations of further experiments and the probability that a scientific community would converge to a correct conclusion.
They discovered “chains of reasoning” that relied on previously-published interpretations. This counters the commonly-held belief that scientific findings act like independent data points that accumulate toward a more accurate picture. Scientists, like other people, can follow the lemmings over a cliff:
There is a well established term in economics, “information cascade”, which represents a special form of a collective reasoning chain that degenerates into repetition of the same statement. Here we suggest a model that can generate a rich spectrum of patterns of published statements, including information cascades. We then explore patterns that occur in real scientific publications and compare them to this model.
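A cascade of this kind is easy to reproduce in a toy simulation. The sketch below is a hypothetical illustration, not the authors’ actual model: the agent count, the 70% signal accuracy, and the majority-vote decision rule are all invented assumptions. Each scientist weighs the earlier published statements together with one private noisy signal; once the published record leans two statements to one side, every later author simply repeats it:

```python
import random

def run_chain(n_agents=20, signal_accuracy=0.7, truth=1, seed=None):
    """Simulate a simple information cascade (toy sketch).

    Each agent receives a private binary signal that matches the truth
    with probability `signal_accuracy`, observes all earlier published
    statements, and publishes whichever value a naive majority count
    (earlier statements plus its own signal) favors; ties are broken in
    favor of the agent's own signal.
    """
    if seed is not None:
        random.seed(seed)
    statements = []
    for _ in range(n_agents):
        signal = truth if random.random() < signal_accuracy else 1 - truth
        votes_for_truth = statements.count(truth) + (1 if signal == truth else 0)
        votes_against = len(statements) + 1 - votes_for_truth
        if votes_for_truth > votes_against:
            statements.append(truth)
        elif votes_for_truth < votes_against:
            statements.append(1 - truth)
        else:
            statements.append(signal)  # tie: follow own private signal
    return statements

# Fraction of chains whose final published statement is false:
wrong = sum(run_chain(seed=s)[-1] != 1 for s in range(2000)) / 2000
print(f"chains ending on the false statement: {wrong:.1%}")
```

In this toy setting, a noticeable fraction of chains lock in on the false statement even though every individual signal is 70% accurate: once two net statements favor one side, the published record stops carrying any new information.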
Sure enough, scientists fell into this trap. They tended to gather around accepted interpretations, while trusting their own interpretations most of all: “scientists are often strongly affected by prior publications in interpreting their own experimental data,” they said, “while weighting their own private results… at least 10-fold as high as a single result published by somebody else.”
The researchers applied probability theory to study how likely a chain of reasoning would lead to a correct result:
An evaluation of the optimum parameters under our model (see Model Box) indicated that the momentums of published statements estimated from real data are too high to maximize the probability of reaching the correct result at the end of a chain. This finding suggests that the scientific process may not maximize the overall probability that the result published at the end of a chain of reasoning will be correct.
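To see how this trade-off could arise, here is another hedged sketch (again, an invented illustration rather than the paper’s actual model): each scientist in a chain weights every prior published statement by a `momentum` factor relative to a weight of 1 on their own private evidence, and we estimate how often the chain’s final statement is correct. The chain length, the 70% signal accuracy, and the specific decision rule are all assumptions:

```python
import random

def chain_accuracy(momentum, n_chains=3000, chain_len=20, p_signal=0.7):
    """Estimate P(last statement in a chain is correct) when each scientist
    weights every prior published statement by `momentum` relative to a
    weight of 1 on their own private signal.  Toy sketch only; the
    parameters are invented for illustration."""
    truth = 1
    correct_last = 0
    for c in range(n_chains):
        rng = random.Random(c)           # reproducible per chain
        statements = []                  # +1 = true statement, -1 = false
        for _ in range(chain_len):
            # private experimental evidence, correct with prob. p_signal
            signal = truth if rng.random() < p_signal else -truth
            score = momentum * sum(statements) + signal
            # publish the side the weighted evidence favors (tie: own signal)
            statements.append(1 if score > 0 else (-1 if score < 0 else signal))
        correct_last += statements[-1] == truth
    return correct_last / n_chains

for w in (0.0, 0.3, 0.6, 2.0):
    print(f"momentum per published statement = {w:.1f}: "
          f"P(final statement correct) ~ {chain_accuracy(w):.3f}")
```

In this toy setting, a small momentum lets a chain pool evidence and beat any single experiment, but as the weight on prior publications grows, later statements become mere echoes of the earliest ones and accuracy falls back toward that of a lone experimenter. That is the qualitative shape of the authors’ finding that real momentums are “too high to maximize the probability of reaching the correct result”; their reported 10-fold self-weighting would correspond to a momentum of roughly 0.1 per published statement in this sketch.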
As they noted, the model’s significance goes beyond mere academic curiosity: “If the problem of convergence to a false ‘accepted’ scientific result is indeed frequent, it might be important to focus on alleviating it through restructuring the publication process or introducing a means of independent benchmarking of published results.”
1. Rzhetsky, Iossifov, Loh and White, Proceedings of the National Academy of Sciences, published online before print March 16, 2006, doi: 10.1073/pnas.0600591103.
Imagine that: the very methodology invented to uncover truth could suppress it. This could explain the near uniform acceptance of Darwinism and condemnation of intelligent design (and other maverick ideas) in Big Science. Could it be that publication sets off a chain reaction that gains momentum and leads to erroneous interpretations? Could scientists sometimes be just as prone to crowd psychology as the rest of us? And you thought that the scientific method, peer review and publishing were safeguards against collective error. The Hwang scandal should have provided a sharp wake-up slap (see 01/09/2006).
Lest we make this one paper a self-fulfilling prophecy and start a new erroneous information cascade, we grant that such things are difficult to model mathematically with confidence. Thomas Kuhn’s cynical view of science is not without controversy, and many scientists do work independently and interpret their results carefully. These authors, though, should be commended for alerting us to the fact that scientists and scientific publications can perpetuate “microparadigms” that could be false.
There is anecdotal evidence to support this claim in the case of evolution vs. intelligent design. Those who publish statements about I.D. in the journals tend to cite the standard ID-bashing texts as references: Pennock, Gross, Forrest, etc. It is unlikely they have actually read those books, and even less likely that they have considered the arguments on both sides. To them, the experts have spoken, and Judge Jones has ruled, so all that is needed is a short statement with a footnote to the authorities.
More anecdotal evidence comes from a scientist active in the ID movement, who shall remain unnamed, who stated that, in his experience, scientists tend to be very fair and self-critical in their own narrow specialties but, on other subjects, are among the most dogmatic, closed-minded people he knows. Time and again he has seen them follow the leader, merely asking, “What does Richard Dawkins think about it? Well, then I’m agin it, too!”
On the flip side, pro-evolution scientific papers often reference authorities carelessly. An author may refer briefly to Darwin’s finches as evidence for natural selection, for instance, with a passing footnote to the Grants, merely assuming that the Grants demonstrated evolution, without studying their work critically to see whether the evidence is valid or convincing (08/24/2005, 04/26/2002). These cases illustrate how scientists can sometimes march in lock-step on certain topics, assuming one another’s authority, instead of contributing independent empirical findings toward an objective truth.
Science is an intensely human enterprise and, therefore, is subject to human foibles like crowd psychology. Our finiteness and human nature limit our ability to grasp natural realities. One scientist cannot possibly know everything even in his or her own field. Imagine mastering five million articles in ten years, just in one area (biomedical research), to say nothing of replicating or verifying each paper’s experimental results. We’re human; we’re limited; it’s so much easier to cite the popular statements of the leaders and follow the chain-of-reasoning gang. The more controversial the material (e.g., evolution vs intelligent design), the more it would seem that polarized interpretations are geared to maintain their own momentum. Applying Newton’s Laws to social science, a body of ideas tends to remain stationary or in uniform linear motion unless acted on by a sufficient force. And – every action to oppose the momentum has an equal and opposite reaction.