Reproducibility Crisis in Psychology, and Other Science Woes
Here’s more evidence that human pride, greed and ambition can get in the way of the ideals of science.
The Open Science Collaboration tried to reproduce 100 psychology results published in leading journals. Their results, published in Science Magazine, show that only about a third of the replications yielded significant results in support of the original conclusions, even when the original authors certified the methods used.
“Many psychology papers fail the replication test,” John Bohannon reported in the same issue of Science Magazine. By any measure, a two-thirds failure rate is alarming. The researchers understand this does not bode well for science’s reputation. “Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown,” they begin their paper. “Scientific claims should not gain credence because of the status or authority of their originator but by the replicability of their supporting evidence.” Live Science followed up with this headline: “Only One-Third of Psychology Findings May Be Reliable – Now What?”
Elizabeth Gilbert, one of the members of the Open Science Collaboration, wrote the following recommendations in The Conversation:
- We must stop treating single studies as unassailable authorities of the truth. Until a discovery has been thoroughly vetted and repeatedly observed, we should treat it with the measure of skepticism that scientific thinking requires. After all, the truly scientific mindset is critical, not credulous….
- Of course, adopting a skeptical attitude will take us only so far. We also need to provide incentives for reproducible science by rewarding those who conduct replications and who conduct replicable work….
- Better research practices are also likely to ensure higher replication rates. There is already evidence that taking certain concrete steps – such as making hypotheses clear prior to data analysis, openly sharing materials and data, and following transparent reporting standards – decreases false positive rates in published studies. Some funding organizations are already demanding hypothesis registration and data sharing.
- Although perfect replicability in published papers is an unrealistic goal, current replication rates are unacceptably low. The first step, as they say, is admitting you have a problem. What scientists and the public now choose to do with this information remains to be seen, but our collective response will guide the course of future scientific progress.
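Gilbert’s point about pre-registration decreasing false positive rates can be illustrated with a small simulation, not taken from the article: when a study quietly tests many undeclared hypotheses on pure noise and reports whichever one “works,” false positives multiply far beyond the nominal 5% threshold. The helper names and the 5% cutoff below are assumptions chosen for illustration.

```python
import random

ALPHA = 0.05  # conventional significance threshold (assumed for this sketch)

def noise_p_value(rng):
    """Under a true null hypothesis, p-values are uniform on [0, 1]."""
    return rng.random()

def study_finds_effect(rng, n_hypotheses):
    """A study 'finds an effect' if ANY of its tests crosses ALPHA.

    Since the simulated data are pure noise, every such finding
    is a false positive.
    """
    return any(noise_p_value(rng) < ALPHA for _ in range(n_hypotheses))

rng = random.Random(42)
trials = 10_000

# Pre-registered study: one hypothesis declared before data analysis.
preregistered = sum(study_finds_effect(rng, 1) for _ in range(trials)) / trials
# Exploratory study: 20 undeclared hypotheses, best result reported.
exploratory = sum(study_finds_effect(rng, 20) for _ in range(trials)) / trials

print(f"pre-registered (1 test):   {preregistered:.1%} false positives")
print(f"exploratory   (20 tests): {exploratory:.1%} false positives")
```

With 20 hidden tests, the chance of at least one spurious hit is 1 − 0.95²⁰, roughly 64% — which is why declaring hypotheses in advance matters.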
It sounds like many researchers should form another collaboration. They could call it Scientists Anonymous.
PLoS Biology published a paper titled “The Question of Data Integrity in Article-Level Metrics,” describing measures the Public Library of Science (PLoS) is taking to prevent and detect fraud. “The efforts summarized above aim to mitigate some of the concerns about data integrity, but more work needs to be done,” the authors say before issuing their Call to Action. It should be obvious, though, that data are morally neutral. Any integrity in data requires moral action from human beings.
Fake Peer Review
On another front, Nature reported that faked peer review led to 64 retractions. “The cull comes after similar discoveries of ‘fake peer review’ by several other major publishers,” reporter Ewen Callaway says. Some publishers are setting up safeguards in light of the scandal, but, as a spokesperson for the Committee on Publication Ethics (COPE) says, “it is important publishers take rapid but careful action.”
“The particular problem of fake review comes about when authors are allowed to suggest possible peer reviewers,” says publication ethics consultant Elizabeth Wager. “The system sounds good. The trouble is when people game the system and use it as a loophole.”
The involvement of third-party companies in bogus peer review is “more worrying,” Wager adds, because it could mean that the practice is more systemic and extends beyond a handful of rogue authors.
Another worry is robotic publishing. Software can be designed to generate fake journal papers that might get published. It seems intuitive that robots can only have the ethics that are programmed into them by humans. If future dishonest robots vanquish the honest robots, the robots will not be morally accountable; it will only reflect the talent of the respective programmers.
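How easily software can churn out paper-like text is worth seeing. The sketch below is a minimal, invented illustration of the basic technique — filling grammatical templates with jargon — and is far cruder than real paper generators; every phrase list in it is made up for this example.

```python
import random

# Hypothetical templates and word banks, invented for illustration only.
TEMPLATES = [
    "We propose {adj} {noun} for {task}.",
    "Our {adj} {noun} outperforms prior approaches to {task}.",
    "This paper revisits {task} through the lens of {adj} {noun}.",
]
WORDS = {
    "adj": ["a scalable", "an adaptive", "a stochastic"],
    "noun": ["framework", "methodology", "architecture"],
    "task": ["error correction", "model synthesis", "data fusion"],
}

def fake_sentence(rng):
    """Fill a random template with random jargon; str.format ignores unused keys."""
    template = rng.choice(TEMPLATES)
    return template.format(**{k: rng.choice(v) for k, v in WORDS.items()})

rng = random.Random(0)
abstract = " ".join(fake_sentence(rng) for _ in range(3))
print(abstract)
```

The output is grammatical nonsense — fluent enough to pass a careless skim, which is precisely the danger the paragraph above describes.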
You can’t have science without honesty. Where does honesty come from? Can it evolve? See our next entry.