Scientists Are Just as Morally Fraught as Other People
Working in a lab and publishing in a peer-reviewed journal does not offer immunity from moral or logical lapses.
No place for bullies in science (Nature). The editors of Nature feel a need to preach: “High-profile allegations of bullying at a German research institute highlight the need for better systems to protect young scientists.” Good. Once they find such systems, maybe they can use them to stop bullying and censoring Darwin doubters, too. Perhaps they can run a controlled experiment to find a new drug that turns bullies into encouragers. Maybe they can identify a random mutation that might, over millions of years, solve the problem by making encouragers more fit than bullies. In the short term, the editors of Nature might prove their fitness by bullying the German research institute into cooperation.
Before reproducibility must come preproducibility (Nature). What? You mean reproducibility is not standard practice in science? Philip B. Stark is so concerned about non-reproducible published results that he has a new idea: before you publish, spell out enough about your methods and data that others could repeat the work. “Instead of arguing about whether results hold up, let’s push to provide enough information for others to repeat the experiments,” the summary states, implying that this is a rare practice.
Lack of group-to-individual generalizability is a threat to human subjects research (PNAS). This paper uncovers a flaw in scientific logic. Errors in generalizing from groups to individuals, and in the statistics behind such generalizations, may undermine numerous conclusions published in the social sciences. If you thought precision was a hallmark of science, watch this:
The current study quantified the degree to which group data are able to describe individual participants. We utilized intensive repeated-measures data—data that have been collected many times, across many individuals—to compare the distributions of bivariate correlations calculated within subjects vs. those calculated between subjects. Because the vast majority of social and medical science research aggregates across subjects, we aimed to assess how closely such aggregations reflect their constituent individuals. We provide evidence that conclusions drawn from aggregated data may be worryingly imprecise. Specifically, the variance in individuals is up to four times larger than in groups. These data call for a focus on idiography and open science that may substantially alter best-practice guidelines in the medical and behavioral sciences.
To the extent these authors are right, their paper renders thousands of research projects untrustworthy. So does peer review filter out false beliefs? Results that are “worryingly imprecise” and that require substantially altering best practices become no more reliable than those reached by non-scientific methods. Idiography, in this context, means studying individuals rather than aggregated groups. If you think that publishing in a big-name journal like Nature, Science, or even PNAS (where this paper was published) offers protection from fake science, these authors say no! Open science (transparency about data and methods, not just publication in major journals) is needed. We can rest assured that the problem only exists in the “medical and behavioral sciences,” though. Right? Uh, … right?
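To see how aggregated (between-subject) statistics can fail to describe individuals, here is a minimal simulation sketch of the within- vs. between-subject comparison the paper describes. It is not the authors’ analysis; the data-generating model, sample sizes, and parameter values are illustrative assumptions only.

```python
# Minimal sketch (not the paper's code): simulate repeated-measures data in which
# two variables are negatively related within each subject, yet positively related
# across subject averages, so the group-level correlation misdescribes individuals.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_obs = 50, 100  # assumed sizes, chosen only for illustration

within_rs = []
subject_means = []
for _ in range(n_subjects):
    # Assumption: subjects with a higher baseline tend to score higher on both
    # x and y, creating a positive association between subjects ...
    base = rng.normal()
    mean_x = base
    mean_y = base + rng.normal(scale=0.3)
    # ... but within any one subject, y decreases as x increases.
    x = mean_x + rng.normal(size=n_obs)
    y = mean_y - 0.6 * (x - mean_x) + rng.normal(scale=0.5, size=n_obs)
    within_rs.append(np.corrcoef(x, y)[0, 1])
    subject_means.append((x.mean(), y.mean()))

means = np.array(subject_means)
between_r = np.corrcoef(means[:, 0], means[:, 1])[0, 1]

print(f"mean within-subject correlation : {np.mean(within_rs):+.2f}")  # negative (about -0.8 here)
print(f"between-subject correlation     : {between_r:+.2f}")           # positive (about +0.9 here)
```

Under these assumed parameters, the aggregated correlation comes out strongly positive even though every individual shows a negative relationship, which is the sense in which conclusions drawn from group data can be “worryingly imprecise” about the individuals they are supposed to describe.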
Fake news: algorithms in the dock (Phys.org). Identifying lies published as news in social media is a serious concern. We can rest easy, though, because we know that behavioral scientists will help write the algorithms to sift the wheat from the chaff. They will aggregate worryingly imprecise data to make generalizations, and use current best-practice guidelines, so that fake science can identify fake news for the big internet giants. They will publish their results in big-name idiographic peer-reviewed journals. No worries.
Human rights in a changing sociopolitical climate (Phys.org). To solve the current immigration problem, we can turn it over to the scientists. They know how to run “studies” and collect statistics. With their non-political outlook, they can provide scientific analysis of “human rights,” and tell the president and Congress why they should let more refugees seek asylum. A “study” at the University of Minnesota shows how this works. “The study also found that misinformation was not only a significant source of anti-refugee sentiment but also deepened mistrust between politicians and the public.” We can trust the study, because behavioral scientists never commit fake science (i.e., misinformation). There need not be any mistrust between the public and scientists. Scientists do studies. They publish non-reproducible studies in peer-reviewed, big-name idiographic journals.
Time to review Finagle’s Rules for scientists:
To study a subject, understand it thoroughly before you start.
Always keep a record of data – it indicates that you’ve been working.
Draw your curves first, then plot your data.
If in doubt, make it “sound” convincing.
Experiments should be reproducible – they should fail in the same way every time.
Do not believe in miracles; rely on them.