June 6, 2015 | David F. Coppedge

You Should Trust Scientists (To Be Fallible)

Public trust in scientists exceeds their trustworthiness, experts warn.

Nature is worried. People trust scientists too much. In the Nature Editorial this week (“Misplaced faith”), the subtitle is suggestive. “The public trusts scientists much more than scientists think. But should it?” On one hand, the editors are glad that polls show the majority of people giving scientists high marks for reliability despite a flurry of scandals in recent news. The recent retraction of that gay-marriage paper (see 12/12/14 and Science Magazine report; see more below) is a case in point. But on the other hand, they know better.

Media coverage of the same-sex-marriage retraction was laced with portentous language, claiming that faith and trust in science had been profoundly shaken. Yet, as researchers who follow misconduct issues will know, faith and trust in science have survived worse in recent years.

That should not be taken as an excuse to ignore the problem of research misconduct or to minimize its importance. And although high-profile fraud makes headlines, a broader and more common set of unappealing behaviours — from corner-cutting to data-juggling — lie under the surface. Convention says that a tiny minority of scientists cheats, yet academics and researchers frequently make the case that irregularities are widespread. A 2014 survey of hundreds of economists, for example, found that 94% admitted to having engaged in at least one “unaccepted” research practice (S. Necker Res. Policy 43, 1747–1759; 2014).

… it seems that the wider public’s view of science and research is rosier than that of many people who are directly involved. For how long can this continue?

As insiders, Nature’s editors get a view of science’s dirty laundry of which the public is blissfully unaware. And they’re not alone. Other writers have pointed out reasons to doubt the iconic image of the scientist in the white lab coat, altruistically researching nature’s secrets for the pure love of the truth.

Influence or influencer? Anna Gielas, in a PLoS Blog post reprinted on PhysOrg, turns scientific journals into carts pulling the horses. Rather than depicting them as channels for research dissemination, she argues that journals are often instruments that shape science and academia. Tracing the history of academic journals over centuries, she shows them to be dynamic, evolving instruments that often made or broke personal reputations and, sometimes, shaped political decisions. “I wish to learn how we have created this unique and intricate communication system,” she ends, “—and why we have endowed it with so much power.”

Measurement power corrupts: What’s science without measurement? In The Conversation, Aussie academics Mike Calver and Andrew Beattie warn that “Our obsession with metrics is corrupting science.” Specifically, the process of ranking scientific papers by citations and other arbitrary measures lets some scientists game the system and consigns other worthy research to the dustbin of obscurity. Ranking has been a poor predictor of Nobel Prizes, they point out. (See also Nature’s list of “sleeping beauty” papers whose merits were not recognized until after their authors’ deaths.) Merlin Crossley, another Aussie dean of science, replies in The Conversation that “All academic metrics are flawed, but some are useful.” Useful to whom? He commits the “best-in-field” fallacy by arguing that it’s “better than the alternative.”

Correlation not causation: Speaking of measurement, Science Magazine enjoyed a list of “spurious correlations.” These come about through “a technique known as ‘data dredging,’ in which one data set is blindly compared to hundreds of others until a correlation is identified.” For instance, one can show that “The number of civil engineering doctorates awarded in the United States between 2000 and 2009 was strongly correlated (95.9%) with mozzarella cheese consumption during the same period.” The editors comment, “Presented as a series of graphs prepared from real data sets, Spurious Correlations serves as a hilarious reminder that correlation most certainly does not equal causation.” It also implies that drawing valid conclusions requires honesty and training in logic.
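For readers who want to see how easy the trick is, here is a minimal Python sketch of data dredging. Everything in it is our own illustration, not from the Science review: the data are random numbers, and the choice of 500 candidate series and ten data points is arbitrary. The point is only that blindly comparing one series against enough unrelated series will always turn up an impressive-looking correlation.

```python
# A minimal sketch of "data dredging" (illustrative only; random data, not the
# real doctorate/cheese figures): compare one short series against hundreds of
# unrelated random series and keep whichever happens to correlate best.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a ten-point "target" series (e.g., annual counts for 2000-2009).
target = rng.normal(size=10)

best_r, best_idx = 0.0, -1
for i in range(500):                    # dredge through 500 unrelated series
    candidate = rng.normal(size=10)     # pure noise, no causal link to target
    r = np.corrcoef(target, candidate)[0, 1]
    if abs(r) > abs(best_r):
        best_r, best_idx = r, i

print(f"Best of 500 random series: #{best_idx}, correlation = {best_r:.2f}")
# With only 10 points and 500 tries, correlations near 0.9 turn up by chance alone.
```

Nothing in that loop is dishonest line by line; the deception lies in reporting only the best result and hiding the hundreds of failures.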

Conflict of interest: A Policy Forum statement in Science Magazine shows that scientists are also stakeholders in government decisions. Fifteen academics from Harvard, the Union of Concerned Scientists, the Center for Science and Democracy and some other foundations are upset that Congress is making “attacks on science-based rules.” But rules are not discovered by scientists; they are matters of policy decided by parties with competing interests (including taxpayers who have to foot the bill, and legislators who have to prioritize limited resources). Rules might be informed by science or metrics, but as we have just seen, metrics can corrupt if not properly interpreted. These academics vent the emotion of righteous indignation, pretending their own interests are not part of the equation.

There is a growing and troubling assault on using credible scientific knowledge in U.S. government regulation that will put science and democracy at risk if unchecked. We present five examples, and the false premises on which they are based, of current attempts in the U.S. Congress in the supposed pursuit of transparency and accountability but at the expense of the role of science in policy-making.

A look at their five examples shows them heavily weighted in favor of government regulation and of the ability of scientific institutions to police themselves. At whose expense? And in which group’s interest?

The scientific community needs to push back. Elected officials respond to constituents, and there are scientists in every congressional district. With leadership from professional societies and scientific organizations, scientists across the country should tell their members of Congress how much they value the opportunity to engage in informing policy and how important it is that these attacks on the process are defeated.

They end by claiming they are all for transparency and avoidance of conflict of interest. Their concerns may well be justified in some of the specific cases they cite, but their own comments betray a lack of objectivity.

Whose conflict of interest? Policies that attempt to control conflict of interest may themselves be flawed, an article on Science Daily suggests. Some scientists are objecting to the stringent rules of the New England Journal of Medicine (NEJM) on disclosure of financial ties to health industries, claiming that “there are negative consequences of such policies.” One thing seems certain: policies will be made by fallible humans who may not be aware of all the influences behind their decisions, or willing to admit them.

Scientific fraud made several headlines recently. Most prominently, the exposure of Michael LaCour at UCLA as a fraudster for his Dec. 2014 paper on gay-marriage persuasion was noted by Science Magazine (which retracted the paper last month), Nature, and major media outlets. But few are pointing out that his credibility should have been suspect from the start, since he is a gay activist and recruited only gay activists in his “experiments” on interviewing people—and they only tested the ability to persuade people for gay marriage, not against it. That hardly seems like a controlled experiment. In other headlines, social psychologist Jens Förster is in deeper trouble after investigators found further evidence he “made up” his data, Science Magazine says (see 5/22/14). Förster still maintains his innocence. Nature reports that Paolo Macchiarini, inventor of the artificial windpipe, has been charged with misconduct for “misrepresenting the success of his pioneering procedure.” And in a PLoS Blog piece posted by PhysOrg, Beth Skwarecki asks an unusual question, “Was it unethical to hoax the world about chocolate as a weight loss ‘accelerator’?” It’s another story about p-hacking—tweaking the analysis until a chance correlation looks like a statistically significant effect.
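To see how that kind of hoax works, here is a minimal Python sketch of p-hacking. It uses simulated data only; the group sizes and the count of 18 outcomes are our own illustration of the approach Skwarecki describes, not figures from the study. Measure many unrelated outcomes on two small random groups, then report only whichever one happens to cross the p < 0.05 threshold.

```python
# A minimal sketch of p-hacking (simulated data only): test many unrelated
# outcomes on two small random groups and flag whichever crosses p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_outcomes = 18                               # many measurements, two small groups
treatment = rng.normal(size=(n_outcomes, 8))  # 8 "subjects" per group, pure noise
control = rng.normal(size=(n_outcomes, 8))

for i in range(n_outcomes):
    t, p = stats.ttest_ind(treatment[i], control[i])
    flag = "  <-- 'significant' by chance" if p < 0.05 else ""
    print(f"Outcome {i:2d}: p = {p:.3f}{flag}")

# With 18 independent tests at the 5% level, the chance of at least one
# "significant" hit on pure noise is about 1 - 0.95**18, roughly 60%.
```

Run enough tests on a small enough sample and something will “work”; publicize that one result and the press release writes itself.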

When you envision a scientist, stop thinking of the cartoon drawing. Picture a real human being, just like yourself, getting out of bed each day and getting dressed to go to work. Like each one of us, the scientist is a complex mix of influences, beliefs, biases and desires. Many scientists work in an academic environment that is profoundly leftist in ideology and subject to speech codes or standards of political correctness (we admit exceptions, of course). The scientist has undergone years of rigorous study and practice, part of which constitutes indoctrination into certain ways of thinking. He or she attends conferences with colleagues at which habits of behavior are reinforced by groupthink, where independent thinking is tolerated only to a point. The scientist does not observe nature as a newcomer, but follows years of tradition, working on some specific puzzle within the current paradigm. Scientists are often dependent on government funds or on support from private industry, both of which can influence their judgment. Like other humans, scientists desire fame and recognition for their work.

Lest one argue that it’s the scientific community that protects against bias and makes science a self-correcting enterprise, let’s get real. A community is a collection of fallible individuals. Academia can reinforce bias as much as prevent it. Look at the articles above: journals, peer review and other aspects of self-correction can end up shaping policies and attitudes, even facilitating fraud. Nature just told us that people place undue trust in science as it really is. Standards have evolved over the centuries; are we to believe that what Newton or Faraday did in their day was unscientific by today’s standards? Peer review is under attack from many quarters these days. Journals are evolving to adapt to social media. And how can they protect themselves from computer-generated fraud? (see Evolution News & Views article).

Never forget that science cannot work without (1) a commitment to truth, and (2) honesty. Those are not discoveries of science; they are prerequisites for science. Logical reasoning requires both. So what are we to expect when evolutionary scientists tell us that crime is a product of evolution? (see PhysOrg). Carried to its logical conclusion, that rationalizes fraud as an evolutionary strategy. Science needs God to say, “Thou shalt not!” (see 5/24/15). The current flood of scientific misconduct is to be expected from a culture that has abandoned Biblical morality for evolving strategies, and truth for pragmatism.

So what are honest truth seekers to think of science? We have to judge it based on the evidence and the logic, and on the individual researcher’s character. We cannot take a scientist’s word for anything. We need to be aware of the biases that influence their statements. We need to examine their “materials and methods” that formed the basis of their conclusions. We need the courage to fight a strong consensus when it is wrong. We need to complain when they fail to be truthful or honest. In a sense, we need to be scientists ourselves, if we take the root of science to refer to “knowledge.” Since knowledge is defined as a “justified true belief,” no scientific statement should be accepted at face value because “science says so,” but because its truth is justifiable.

