September 12, 2013 | David F. Coppedge

"Evidence-Based" Findings May Not Be

Patients are encouraged to seek only “evidence-based” treatments for disease, but a look behind the scenes of clinical trials reveals some of the same human foibles that plague any science: shortcomings in honesty and transparency.

Finding cause-and-effect relationships in medical science is notoriously difficult.  Supposedly, the path to reliable findings is to use randomized clinical trials, where a proposed new therapy goes through three distinct phases of testing on large numbers of people.  Sounds good in theory, but what happens when investigators find less-than-full disclosure and potential conflicts of interest?  Those issues were addressed in Science Insider recently.  Violations are, unfortunately, more common than expected.

We’ve heard of studies funded by tobacco companies that prove cigarettes are safe.  Give a researcher enough money, and it’s tempting (though not guaranteed) that his or her findings will corroborate the company’s claims.  How are conflicts of interest avoided?  How are standards for reporting maintained?  Science Insider attended a recent International Congress on Peer Review and Biomedical Publication in Chicago, and reported some red flags: (1) “Published trial results often differ from those initially posted”; and (2) “Potential conflicts of interest often go unreported.”

The honor system, such as merely deploying forms asking researchers to list all conflicts of interest, is insufficient.  Despite years of reminding researchers how important it is to maintain transparency about potential conflicts, many still fail to disclose them.  Often it is left up to the researcher’s own judgment whether such conflicts are “relevant” to the trial.  Ignorance of the need for high standards, the Congress on Peer Review and Biomedical Publication found, is sadly widespread.

Although most of the doctors disclosed relationships they had with the firm funding the published research, fewer than half shared relationships they had with industry competitors. And despite all the talk in recent years about conflicts, 16% who had a financial tie to a sponsor or drug manufacturer leading the study didn’t report it. One example cited by Rasmussen: a physician who was an advisory board member and speaker for AstraZeneca, maker of the drug being covered by the paper, who declared he or she had no conflicts.

“I was actually very disappointed” by this, says Vivienne Bachelet, editor-in-chief of the journal Medwave in Santiago, who was not involved in the study. In her country, she says, the “level of awareness is just nil” about conflicts of interest. Medical societies in particular get substantial funding from drug companies but almost no one—the societies themselves, drug regulators, or the individual doctors—see this as something that should be disclosed, Bachelet says. “If they’re not disclosing over there,” in Denmark, “what’s to be expected in Chile?”

Regarding publication discrepancies, a survey of thousands of papers revealed frequent inconsistencies between public reports and journal publications about results of primary endpoints (the main outcomes a trial is designed to measure) and secondary endpoints (additional, pre-specified outcomes):

For 21% of the primary endpoints, what appeared in the journal wasn’t exactly the outcome described in the public database, and in 6% of cases, the Yale group suggested that this difference influenced how the results would be interpreted.

For secondary endpoints, the difference was even more dramatic: Of more than 2000 secondary endpoints listed across the trials, just 16% appeared the same way in both the public database and the published article along with the same results. Results for dozens of secondary endpoints were inconsistent. “Our findings raise concerns about the accuracy of information in both places, leading us to wonder which to believe,” Becker said.

A director at the National Library of Medicine called the website a “view into the sausage factory” of how research results are reported.

Speaking of randomized clinical trials (RCT), Nature reported that little more than half of them produce treatments better than the standard of care – and that’s as it should be, given that RCT outcomes are unpredictable.  Progress is incremental but steady.  There’s no question that cancer patients are surviving much longer on average than they were a couple of decades ago, thanks to clinical trials.

The slowness of the process, though, is frustrating to patients, especially those with cancer, who can’t wait a decade for all three phases to complete before government approval is given.  Medical Xpress raised the question of whether clinical trials are always necessary.  Sometimes phase III (comparing the new treatment with the standard treatment) might be superfluous if a new therapy has already shown benefit and patients are out of options.  Another recent trend is toward individualized care based on genetic screening or specific tissue sample characteristics.  Trends like that may not jibe with randomized clinical trials, because each patient is treated as a unique case (a sample of one).  Alternatives to RCT may need to be devised for such new developments.

In the philosophy of science, nothing like peer review or RCT (as practiced) is set in stone.  As practices and findings change, policies and procedures need to keep in step with them.  One thing that should not change, though, is a scrupulous insistence on honesty.

Update 9/14/13: Medical Xpress reported that leading medical societies in Britain and America are poised to start publishing negative findings.  This is important, because knowing what doesn’t work can be just as important as knowing what does.   “It is ethically correct for pharmacologists working in academia, industry and the health services to publish negative findings,” the head of the British Pharmacological Society said.  “Openness not only ensures that the research community is collectively making the best possible use of resources, but also that clinical trial volunteers are not unnecessarily exposed to likely ineffective or potentially unsafe treatments when evidence may already suggest that the drug target in question is flawed.”  The lack of openness about negative results can waste time and resources if researchers unknowingly repeat a failed trial. “Historically, negative findings have tended to remain unpublished,” one journal editor noted with apparent regret.  Another expert feels that all clinical results, both positive and negative, should be in the public domain.

No science can survive without honesty.  We are often told that science is self-checking.  The problem is that the checking is inconsistent, and problems are often discovered long after the damage has been done.  This is shameful.  In medical clinical trials, people’s lives are on the line.  How can the public have confidence in findings, when they lose confidence in the honesty of the researchers?  Miracle treatments are promised that may actually be hyped by the drug company funding the research; or the researcher sits on the company’s board but refuses to disclose the conflict of interest, considering it (in his opinion) “not relevant.”  Then there is the temptation to announce breakthroughs to advance one’s career or the reputation of the institution.  Now we learn how low the actual rate of honest reporting is.  To put it mildly, “What they found was not particularly encouraging.”

This is not to disparage the many honest, hard-working individual researchers with pure motives, or the reputable institutions that succeed in finding effective new treatments and helping patients.  It just goes to show that scientific research is nothing without honesty.  The answer is not to run from “evidence-based” research toward unproven alternative therapies, many of which have even less evidence and are riddled with deeper conflicts of interest (such as hyped claims designed to sell a product).  There are quacks who prey on the desperate, but conspiracy theories alleging collusion with drug companies to keep alternatives off the market are sometimes a ploy to mislead by undermining the credibility of competition.  In the morass of potential pitfalls, is anything better than clinical trials?  The answer is to improve the system: require independent checking for compliance, publicly shame violators, and financially punish institutions found culpable.

Randomized clinical trials offer the best hope for establishing cause and effect in medical research, but sometimes the anecdotal reports of alternative treatments have merit; we should remain open to them, checking them with a skeptical yet inquiring eye, weeding out conflicts of interest as best we can, and investigating the reasonableness of each claimed correlation.  As these reports show, “evidence-based” reports sometimes fail to live up to their ideal.  Honest researchers will keep an open mind about alternatives.  There’s much human beings do not know.  Things that work for some individuals do not always work for others.

One other lesson: if correlations are this difficult to establish in humans, of which there are 7 billion to test, how much more error-prone are claims about the unobservable past supposed millions of years ago – especially when certain researchers have a conflict of interest to maintain their secular worldview?


  • doug hulstedt says:

    Hi Blessings
    Having both feet firmly planted in conventional and integrative medicine (i.e., two boats rocking at different rates), I disagree with the medical commentary.  Monetary process: the New England Journal of Medicine, JAMA, J Peds, and Lancet have some 90% of their funding from pharmaceutical companies.  There is an innate bias toward reporting pharmaceutical results and not herbal or alternative results, even if the herbal results are robust.  The bias is pervasive.  Also, no huge monies are around to support evaluation of things like IV vitamin C therapy or chelation therapy, amongst a myriad of other modalities.
    Double-blind, randomized, placebo-controlled studies can be severely compromised.  Of course, one can just lie.
    Most of us Medical folks don’t even realize there is a philosophy of science.
    One short true story: an herbalist and an orthopedic surgeon from UCSF did a study on an herbal product for osteoarthritis.  Their product showed a huge (i.e., 95%) efficacy at pain relief.  The study was submitted to major journals and refused, then to small journals, and pigeonholed.  So, no dissemination of info.
    Still love your commentary
    Doug Hulstedt MD

    • Editor says:

      Hello Dr Hulstedt,
      Your points are well taken. Our commentary did not defend “evidence-based” trials in practice, only in theory. It would be nice if it worked, if practitioners were always honest, and if they looked at alternatives fairly.
