Scientific Ethics Concerns Rising
For an enterprise as secular and materialistic as science, there’s a lot of talk about morality these days.
Human subjects: This past week, Science magazine reported on a government panel that is revising the 1991 regulations on human research. Rebecca Dresser reported, “Although these concepts underlie many Common Rule provisions, insights gained since 1991 and unaddressed problems in the current oversight system point to new measures that could enhance the rule’s ethical legitimacy.” (Science, 3 August 2012: Vol. 337 no. 6094 pp. 527-528, DOI: 10.1126/science.1218323.) She used the word “moral” five times, as in the last section, “A Fundamental Moral Judgment” –
Underlying the research oversight system is a fundamental moral judgment: Human subjects have interests that should not be subordinated to the interests of the patients, researchers, industry stakeholders, and others who gain health and monetary benefits from the research enterprise. In the United States and elsewhere, allegiance to this moral judgment demands robust efforts to educate prospective research subjects, help subjects who are harmed in research, and evaluate the quality of human research proposals.
Research misconduct: In Nature, Colin Macilwain wrote that “The time is ripe to confront misconduct.” He is encouraged that some scientific institutions are beginning to take this problem seriously: “For too long, scientists’ instinctive defensiveness has produced general denial that misconduct constitutes a serious problem.” The statement suggests that scientists tend to have a moral superiority complex. Science, after all, is supposed to be self-correcting; misconduct, they thought, must be rare among their ranks. “Few senior scientists now believe that,” Macilwain said. “They know that misconduct exists and that, unchecked, it can undermine public regard for science and scientists.” Some institutions have seen fraud investigations as contrary to academic freedom, but noteworthy cases of fraud are changing attitudes. “Worldwide, however, research integrity is now very much in the spotlight.” He spoke of a couple of initiatives being taken to address the issue, then ended: “Together, the studies represent a historic opportunity to deal with what is, perhaps, the single most potent threat to science’s prestige” (Nature 488, 02 Aug 2012, page 7, doi:10.1038/488007a).
Mentoring: It’s natural for a trainee to want to please and imitate his or her mentor. Nature recognized this as a problem and an opportunity: mentors should be the ones to teach integrity and forestall misconduct. In “The roots of research misconduct,” William Neaves argued that “Mentors should understand what causes misconduct among trainees — and keep in mind some possible remedies” (Nature 488, 01 Aug 2012, pp. 121-122, doi:10.1038/nj7409-121a). It’s not enough to teach about the importance of integrity, he said; “Consistently modelling good practice beats lecturing hands down, and discussing ethical guidelines at laboratory meetings helps the team to appreciate honesty — and the grim consequences of misconduct.” This requires overcoming the mentor’s natural reluctance to bring up the subject, and understanding what motivates fraud among young scientists. “Mentors should not avoid a discussion on research integrity just because of their own discomfort,” Neaves ended. “The potential consequences for careers and reputations are too severe.”
Conflict of interest: Bouncing off a case of a scientist with ties to industry contributing to a report giving fracking a clean bill of health, Nature’s editors took the opportunity to call for openness: “Scientists must remember that however irrelevant their involvement in industry might seem to them, others will see it differently — only full disclosure will avert the taint of scandal.” (Nature 488, 02 Aug 2012, p. 5, doi:10.1038/488005a). The editors were not claiming a scandal existed; they were just skittish about the possibility of damage to the reputation of science if scientists do not reveal possible biases. Sunlight is the best disinfectant, they believe:
Experts in many fields bounce between academia, government and industry during their careers. Universities could not exclude people who have industry connections from their ranks, nor would they want to. The same goes for government. There is also nothing inherently wrong with universities accepting donations from industry to conduct studies, as long as the proper protections are put in place. The key is transparency, because that is the basis for trust between institutions and the wider public, which is especially important when people are buffeted by confusing, contradictory and inflammatory information. What the public needs, and what scientists must deliver, is reliable information that is honest about both its methods and its inevitable biases. What it needs is full disclosure.
False positives: Ethics requires avoidance of exaggeration. Writing in Nature, Daniel MacArthur warned of the risk of scientists treating “eye-catching artefacts” as “genomic insights” (Nature 487, 26 July 2012, pp. 427-428, doi:10.1038/487427a). Beginning with a recent highly advertised case, he said, “As it turned out, at least some of the results from this study were surprising simply because they were wrong.” Technical errors not caught by quality control can lead to false positives, especially in data sets whose complexity is huge:
In fact, it has never been easier to generate high-impact false positives than in the genomic era, in which massive, complex biological data sets are cheap and widely available. To be clear, the majority of genome-scale experiments yield real results, many of which would be impossible to uncover through targeted hypothesis-driven studies. However, hunting for biological surprises without due caution can easily yield a rich crop of biases and experimental artefacts, and lead to high-impact papers built on nothing more than systematic experimental ‘noise’.
Flawed papers cause harm beyond their authors: they trigger futile projects, stalling the careers of graduate students and postdocs, and they degrade the reputation of genomic research. To minimize the damage, researchers, reviewers and editors need to raise the standard of evidence required to establish a finding as fact.
In genomics, for instance, surprising data can occur by chance. Additionally, the technologies can generate their own biases. In a paraphrase of the maxim, “If something seems too good to be true, it probably is,” MacArthur wrote, “Few principles are more depressingly familiar to the veteran scientist: the more surprising a result seems to be, the less likely it is to be true.” Yet quality control and reproducibility take time. He suggested standards for journal editors and scientists; fortunately, open-access publishing and online commenting are providing more rapid critical responses, which MacArthur encouraged. His last paragraph shows that carefulness is a part of ethics:
Nothing can completely prevent the publication of incorrect results. It is the nature of cutting-edge science that even careful researchers are occasionally fooled. We should neither deceive ourselves that perfect science is possible, nor focus so heavily on reducing error that we are afraid to innovate. However, if we work together to define, apply and enforce clear standards for genomic analysis, we can ensure that most of the unanticipated results are surprising because they reveal unexpected biology, rather than because they are wrong.
As with any human enterprise, honesty is an essential pillar: when trust collapses, nothing else matters. But where does ethics come from? In Science, John T. Jost reviewed a new book by Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion (John T. Jost, “Social Psychology: Left and Right, Right and Wrong,” Science 3 August 2012: Vol. 337 no. 6094 pp. 525-526, DOI: 10.1126/science.1222565). Haidt, a popular social psychologist, tried to conjure up man’s moral sense from evolutionary “psychological foundations” –
In The Righteous Mind, Haidt attempts to explain the psychological foundations of morality and how they lead to political conflicts. The book’s three parts are not as compatible or settled as Haidt’s ingenious prose makes them seem. The first revisits the intriguing arguments of an earlier, influential paper (1) in which he argued that moral reasoning is nothing but post hoc rationalizing of gut-level intuitions. The second introduces an evolutionarily inspired framework that specifies five or six “moral foundations” and applies this framework to an analysis of liberal-conservative differences in moral judgments. In the third part, Haidt speculates that patriotism, religiosity, and “hive psychology” in humans evolved rapidly through group-level selection.
Jost found a contradiction in Haidt’s premise that morality is nothing more than post-hoc rationalization of intuitive, emotional reactions: the book’s own moral judgments about what humans ought to do are themselves post-hoc rationalizations. “Ultimately, Haidt’s own rhetorical choices render his claim to being unbiased unconvincing,” Jost said charitably. He is not ready to accept the premise that our “primitive ancestral legacy” can be a guide to right and wrong:
Before drawing sweeping, profound conclusions about the politics of morality, Haidt needs to address a more basic question: What are the specific, empirically falsifiable criteria for designating something as an evolutionarily grounded moral foundation? Haidt sets the bar pretty low—anything that suppresses individual selfishness in favor of group interests. By this definition, the decision to plunder (and perhaps even murder) members of another tribe would count as a moral adaptation. Recent research suggests that Machiavellianism, authoritarianism, social dominance, and prejudice are positively associated with the moral valuation of ingroup, authority, and purity themes [e.g., (6, 7)]. If these are to be ushered into the ever-broadening tent of group morality, one wonders what it would take to be refused admission.
I see no compelling reason to assume that morality is—let alone should be—whatever comes first, easiest, or even most forcefully to mind (because of our evolutionary heritage or otherwise). In many situations behaving morally may require us to do what is difficult, perhaps even “unnatural” in some sense. Or, as John Stuart Mill put it (8), “… Nature cannot be a proper model for us to imitate. Either it is right that we should kill because nature kills; torture because nature tortures; ruin and devastate because nature does the like; or we ought not to consider what nature does, but what it is good to do.”
Jost, however, failed to define goodness or reveal his own theory of the grounds of morality.
It’s nice that ethics is getting more and more of a hearing, but who are the editors of Nature to lecture the rest of us about morality? The rag has been devoted since its inception to pushing the Darwinian world view: a system where “ethics,” whatever that means, is a mere artifact of the struggle for fitness. They can’t play both sides of the fence here, preaching the Darwin-Tyndall materialist view most of the time but the Christian sermon when scientific fraud becomes an issue. How about some full disclosure by the editors? Tell us about all your leftist political connections that generate a hugely lopsided leftist viewpoint whenever anything political is involved. How about some repentance for Nature’s involvement with eugenics and other atrocities with human subjects in the past? How about some fact-checking when evolutionists push their false positives about some bone shedding light on evolution? How about confessing your own conflicts of interest when advocating increased taxpayer funding of your favorite projects? We don’t need your sermons about ethics. You need to go to church. You need to hear some real sermons about the only solid foundation for ethics: the word of the Lord: “Thou shalt not bear false witness.”