Scientist, Heal Thyself
People who play “King’s X” in the science lab had better be aware of their limitations and vulnerabilities.
Science can be viewed as an organized way to seek knowledge about natural phenomena. It involves observation, publication, and peer review. But science is a big tent, granting undue prestige to some questionable fields and undeserved reputations to individuals and institutions unaware of their biases and blind spots. Most importantly, science’s ideals cannot be met without character qualities like honesty, integrity, and allegiance to the truth, because every paper, every article, and every database must pass through the fingers of fallible human beings. If science were a mechanical method for cranking out knowledge, there would be no need for the continual chastisements and admonitions coming from within its own institutions, like these recent examples. Look at the temptations to sin in scientific communities:
Dirty Money
More Scientists, Institutions with Links to Jeffrey Epstein (The Scientist). Ivory tower elites had better not be too quick to point the finger. In this article, reporter Catherine Offord points back: “Researchers continued to meet and accept funding from the wealthy donor even after he was convicted of sex crimes in 2008.” How many secular media reporters have covered this scandal? Talk about dirty money! Scientists are not immune from temptation. Do they get a “King’s X” exempting them from accusation because they wear the “Trust Me: I’m a Scientist” shirt?
A number of scientists and research institutions continued to maintain links with convicted sex offender and financier Jeffrey Epstein after he was sentenced to prison in 2008, BuzzFeed News reported yesterday (August 26). While some of the payments and meetings he had with members of the research community were already public knowledge, others were identified by the news site through public records requests.
Hypocrisy
Peer reviewers need a code of conduct too (Nature). Here’s a piece on the theme, “Who watches the watchers?” Linda Beaumont gets angry at the very peer reviewers whose own behavior, in her experience, needs reviewing by watcher-watchers. She has witnessed that the reality is not the ideal it is made out to be.
Learning to accept criticism is part of surviving the fierce competition in research. But an invitation to review the work of a peer, usually anonymously, is not a licence to patronize, intimidate or otherwise act in a way that would be unprofessional in the workplace. Such reviews are unnecessarily discouraging, particularly to an early-career researcher with limited experience of the system.
Selfishness
Why I said no to peer review this summer (Nature). A senior scientist and veteran reviewer, Jennifer Rohn, explains why she no longer feels a “moral obligation” to review every paper that crosses her desk, especially when she is on vacation. “Moral obligation”? There’s the M-word again. Her reasons provide a glimpse into the ulterior motives at work in the peer-review system; they are not always pure. When she was younger, she explains, she felt obligated to accept the requests out of a sense of reciprocity: her reviews would help her own reputation, or lubricate grant applications: “having this paper accepted and published quickly would surely help our next bid with a major funder.” That doesn’t sound like pure love of the truth. Duty, apparently, is a quality that can be sacrificed for expediency. To the degree that is how she feels, how trustworthy will her reviews be if the funder might not like the conclusions?
In theory, we all have a duty to keep the wheels of peer review spinning. There is an unspoken pact of reciprocity in our tight-knit research community. Science has long operated like this: the expectation is that for every paper of mine being poked and prodded at by peers, I’m spending a roughly equal amount of time inspecting work by others. And because I know it’s frustrating to wait for a decision on a paper, why would I want to irritate a colleague by causing delays?
Pride
Hundreds of extreme self-citing scientists revealed in new database (Nature). Scientists are vulnerable to another deadly sin: pride. In this article, Van Noorden and Chawla find a bit of a Good Old Boys Club going on in academia, where insiders pat each other’s backs to pad their reputations. Getting cited is supposed to be a measure of the value of research. In many cases, it is more a measure of how well insiders know how to play the rigged game.
The world’s most-cited researchers, according to newly released data, are a curiously eclectic bunch. Nobel laureates and eminent polymaths rub shoulders with less familiar names, such as Sundarapandian Vaidyanathan from Chennai in India. What leaps out about Vaidyanathan and hundreds of other researchers is that many of the citations to their work come from their own papers, or from those of their co-authors.
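To make the arithmetic concrete: a self-citation rate is just the fraction of a researcher’s incoming citations that come from his own (or his co-authors’) papers. Here is a minimal Python sketch with invented data; the data structure and names are hypothetical, not the actual methodology behind the database Nature describes.

```python
# Hypothetical sketch: computing a self-citation rate for one researcher.
# Data structure and numbers are invented for illustration; this is not
# the methodology behind the database reported in Nature.

def self_citation_rate(papers, author):
    """Fraction of incoming citations that come from papers
    (co-)authored by the cited researcher."""
    total = self_cites = 0
    for paper in papers:
        for citing in paper["cited_by"]:          # papers citing this one
            total += 1
            if author in citing["authors"]:
                self_cites += 1
    return self_cites / total if total else 0.0

papers = [
    {"title": "Paper A", "cited_by": [
        {"authors": ["X. Insider"]},
        {"authors": ["X. Insider", "Y. Coauthor"]},
        {"authors": ["Z. Outsider"]},
    ]},
]

print(f"Self-citation rate: {self_citation_rate(papers, 'X. Insider'):.0%}")
# -> Self-citation rate: 67%
```

A high number by itself proves nothing, of course; the problem the article identifies is that insiders know exactly how to push such metrics up.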
Blindness to Bias
Generic language in scientific communication (PNAS). This is a meta-paper: a paper about papers. Four psychologists examine the kind of language psychologists use in their research, apparently painfully aware of the scandals in the “soft sciences” of psychology and psychiatry. Sure enough, they find unexpected sources of bias in the very language psychologists use. Whether intentional or not, words can obfuscate rather than illuminate. This problem is not limited to psychology. How many scientists in all fields consider this blind spot?
Scientific communication poses a challenge: To clearly highlight key conclusions and implications while fully acknowledging the limitations of the evidence. Although these goals are in principle compatible, the goal of conveying complex and variable data may compete with reporting results in a digestible form that fits (increasingly) limited publication formats. As a result, authors’ choices may favor clarity over complexity. For example, generic language (e.g., “Introverts and extraverts require different learning environments”) may mislead by implying general, timeless conclusions while glossing over exceptions and variability. Using generic language is especially problematic if authors overgeneralize from small or unrepresentative samples (e.g., exclusively Western, middle-class). We present 4 studies examining the use and implications of generic language in psychology research articles…. We found that generics were ubiquitously used to convey results (89% of articles included at least 1 generic), despite that most articles made no mention of sample demographics. … We highlight potential unintended consequences of language choice in scientific communication, as well as what these choices reveal about how scientists think about their data.
Misplaced Trust
A short comment on statistical versus mathematical modelling (Nature Communications). Much of science relies on models. The best of models, however, are only simulations of reality. Often they leave out factors that could be consequential. “While the crisis of statistics has made it to the headlines, that of mathematical modelling hasn’t,” this article begins. “Something can be learned comparing the two, and looking at other instances of production of numbers. Sociology of quantification and post-normal science can help.” Post-normal science? Is that like last year’s Word of the Year, “Post-Truth”?
Andrea Saltelli reveals a hidden crisis where ghosts of “methodological abuse” and “wicked incentives” that have haunted statistics are making apparitions in the more-trusted field of mathematical models:
While statistical and mathematical modelling share important features, they don’t seem to share the same sense of crisis. Statisticians appear mired in an academic and mediatic debate where even the concept of significance appears challenged, while more sedate tones prevail in the various communities of mathematical modelling. This is perhaps because, unlike statistics, mathematical modelling is not a discipline. It cannot discuss possible fixes in disciplinary fora under the supervision of recognised leaders. It cannot issue authoritative statements of concern from relevant institutions such as e.g., the American Statistical Association or the columns of Nature….
Yet if statistics is coming to terms with methodological abuse and wicked incentives, it appears legitimate to ask if something of the sort might be happening in the multiverse of mathematical modelling.
He points out a truism: “All model-knowing is conditional on assumptions.” If statistical models can be misleading, why not mathematical models, which also rely on assumptions? Need examples? His next sentence should be put into textbooks:
Modelling hubris may lead to “trans-science”, a practice which lends itself to the language and formalism of science but where science cannot provide answers. Models may be used as a convenient tool of displacement – from what happens in reality to what happens in the model. The merging of algorithms with big data blurs many existing distinctions among different instances of quantification, leading to the question “what qualities are specific to rankings, or indicators, or models, or algorithms?” Thus the problems just highlighted are likely to apply to all of these instances, as shown by the recent alarm about unethical use of algorithms, the disruptive use of artificial intelligence exemplified by Facebook, or the well documented problems with the abuse of metrics, which is now reflected in an increasing militancy against statistical and metrical abuses.
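The point is easy to demonstrate. In the minimal sketch below (invented data and toy models, not anything from Saltelli’s comment), two models agree perfectly with the same observations yet diverge by two orders of magnitude once asked to extrapolate; the “knowledge” each produces is entirely conditional on its growth assumption.

```python
# Toy illustration: two models that agree perfectly on the observed data
# still produce wildly different "knowledge" outside it. The data and
# models are invented for illustration, not taken from Saltelli's comment.

observations = [(0, 1.0), (2, 4.0)]    # (time, measured value)

def linear(t):        # assumption 1: the process grows additively
    return 1.0 + 1.5 * t

def exponential(t):   # assumption 2: the process grows multiplicatively
    return 4.0 ** (t / 2)

for t, y in observations:              # both models reproduce the data exactly
    assert abs(linear(t) - y) < 1e-9
    assert abs(exponential(t) - y) < 1e-9

t = 10                                 # ...but extrapolation exposes the assumption
print(f"linear model at t={t}:      {linear(t):.0f}")       # 16
print(f"exponential model at t={t}: {exponential(t):.0f}")  # 1024
```

Nothing in the data decides between the two; only the modeler’s assumption does. That is Saltelli’s “displacement” in miniature: the answer comes from the model, not from reality.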
Dishonesty
Civic honesty around the globe (Science). Here is a curious paper. Four economists from America and Switzerland decided to run the scientific method on honesty. They basically found that people were more apt to return wallets that had more money in them. That’s an interesting thing to know, but wait for the commentary (below).
Civic honesty is essential to social capital and economic development but is often in conflict with material self-interest. We examine the trade-off between honesty and self-interest using field experiments in 355 cities spanning 40 countries around the globe. In these experiments, we turned in more than 17,000 lost wallets containing varying amounts of money at public and private institutions and measured whether recipients contacted the owners to return the wallets. In virtually all countries, citizens were more likely to return wallets that contained more money. Neither nonexperts nor professional economists were able to predict this result. Additional data suggest that our main findings can be explained by a combination of altruistic concerns and an aversion to viewing oneself as a thief, both of which increase with the material benefits of dishonesty.
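For the curious, the comparison the abstract describes boils down to tallying return rates by wallet condition. The sketch below uses invented counts purely for illustration; the actual study covered more than 17,000 wallets in 40 countries.

```python
# Invented counts for illustration only; these are not the paper's data.

conditions = {
    # condition: (wallets turned in, recipients who contacted the owner)
    "no money":   (500, 200),
    "some money": (500, 280),
    "big money":  (500, 360),
}

for condition, (dropped, contacted) in conditions.items():
    print(f"{condition:>10}: {contacted / dropped:.0%} contacted the owner")
# The counterintuitive pattern: the return rate *rises* with the amount
# of money, opposite to what laypeople and economists alike predicted.
```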
Their work was reviewed in Science in an accompanying commentary by Shaul Shalvi, “Financial temptation increases civic honesty.”
It’s an interesting “result,” to be sure, and may well be true. We have no particular reason to doubt it. But notice that to trust this paper, and the accompanying analysis by Shaul Shalvi, you have to assume the researchers are honest! What if they were not? Oh, but it was peer reviewed. But what if the reviewers were in on the scam? Who watches the watchers? One can certainly imagine ulterior motives for the research: prestige for oneself or one’s institution, necessity (“publish or perish”), or pressure from funders or stakeholders desiring a certain outcome. The point is that without honesty and integrity, there is no science at all.

[Cartoon: Chuck-in-the-Box pops up in unexpected places.]
Careful readers of this last paper will notice several citations to the “evolution of altruism” and other such Darwinist mumbo-jumbo in the references. To the extent the authors believe that stuff, one could assume they wrote this paper for their own self-interest. If they were truly consistent, all their behaviors would stem from the pursuit of personal fitness, not truth.
Now, if they believe in the Ten Commandments and strive to follow them, that might be a valid foundation for trust. But even then, if their methods or assumptions are flawed, they might only be right by mistake.