July 17, 2021 | David F. Coppedge

Science Mismeasures Itself

Blundering along, Big Science creates perverse incentives and ignores its biases while awarding itself happy face stickers.

Like partisan politicians, the influencers in Big Science can create narratives that are just plain wrong. Consider the revision of the Easter Island Myth. Jennifer Micale at Binghamton University explains how this narrative went unquestioned for decades:

You probably know this story, or a version of it: On Easter Island, the people cut down every tree, perhaps to make fields for agriculture or to erect giant statues to honor their clans. This foolish decision led to a catastrophic collapse, with only a few thousand remaining to witness the first European boats landing on their remote shores in 1722.

But did the demographic collapse at the core of the Easter Island myth really happen? The answer, according to new research by Binghamton University anthropologists Robert DiNapoli and Carl Lipo, is no.

[Cartoon: Lemmings, by JB Greene. Used by permission.]

One could suspect that the old myth fit the favored narrative of the radical environmentalists who love talking about sustainable resources. The new myth, by contrast, fits the current trendy narrative that all native peoples deserve respect, and that demeaning them is racist. That’s why the name “Easter Island” should be shunned, as it supposedly conjures up images of European white supremacy and colonialism. The natives should be addressed by their preferred name, “Rapa Nui.” Either way, the narrative is a product of cultural and political biases of the time.

Those interested can read the article to learn about the new myth of Easter Island that is replacing the old one. But it may not matter; we may never know what actually happened. Narratives like this are constructed from incomplete historical evidence and tend to reflect the cultural values of whoever holds power at the time.

Currently, for instance, scientists are co-opting statistics to support narratives like the need for (a) universal vaccination, (b) action on the climate crisis, and (c) shaming any tradition that can be labeled systemically racist. These, too, will pass, and new narratives will arise out of the next culture in power.

That being the case, it’s amazing that anything really true gets discovered by Big Science. Young’s Law quips, “All great discoveries are made by mistake.” Someone added a corollary, “The greater the funding, the longer it takes to make the mistake.”

These remarks are not intended to disparage the many honest individual scientists who do good work and advance real knowledge. Most often, they are the scientists working on observable, repeatable, testable phenomena. But scientists need to be aware of the pressures to conform to certain expectations that compromise the integrity of their institutions.

The Mismeasure of Science

Examining potential variance in academic research (Phys.org). Researchers at ESMT Berlin are wondering how different scientific teams can test the same hypothesis on the same data set and come out with opposite results. Something is wrong here.

New research seeks to understand what drives decisions in data analyses and the process through which academics test a hypothesis by comparing the analyses of different researchers who tested the same hypotheses on the same dataset. Analysts reported radically different analyses and dispersed empirical outcomes, including, in some cases, significant effects in opposite directions from each other. 
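How can one data set yield opposite conclusions? One well-known mechanism is Simpson’s paradox: an effect can reverse direction depending on whether the analyst adjusts for a grouping variable. Here is a minimal sketch in Python (the data are invented for illustration; they are not from the study):

```python
# Two analysts, one dataset, opposite answers: a Simpson's-paradox sketch.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Within each of two groups, the true effect of x on y is NEGATIVE...
groups = []
for offset in (0.0, 5.0):  # ...but the groups sit at different baselines.
    x = rng.normal(loc=offset, scale=1.0, size=200)
    y = -0.8 * x + 2.0 * offset + rng.normal(scale=0.5, size=200)
    groups.append((x, y))

x_all = np.concatenate([x for x, _ in groups])
y_all = np.concatenate([y for _, y in groups])

# Analyst A pools everything and finds a strong POSITIVE correlation.
print("pooled r       = %+.2f" % pearsonr(x_all, y_all)[0])

# Analyst B adjusts for group and finds a NEGATIVE effect in each group.
for x, y in groups:
    print("within-group r = %+.2f" % pearsonr(x, y)[0])
```

Neither analyst has made an arithmetic error; they simply made different, defensible modeling choices, which is exactly the kind of dispersion the Berlin researchers observed.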

The h-index is no longer an effective correlate of scientific reputation (PLoS One). Journal editors and academic deans love measures of research quality that are simple, easy to use, and wrong. The h-index is a citation-based measure of a scientist’s reputation: a researcher has an h-index of h if h of his or her papers have each been cited at least h times. A single number that can rank people seems more credible, more “scientific.”
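The computation behind that number is trivial, which is part of its appeal. A minimal sketch:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # this paper still "counts"
        else:
            break      # every later paper has even fewer citations
    return h

# A researcher whose papers have been cited [25, 8, 5, 3, 3, 1] times has h = 3.
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3
```

Note that h can never exceed the number of papers an author has published, no matter how influential each paper is; that limitation is behind the Einstein example below.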

The impact of individual scientists is commonly quantified using citation-based measures. The most common such measure is the h-index. A scientist’s h-index affects hiring, promotion, and funding decisions, and thus shapes the progress of science. Here we report a large-scale study of scientometric measures, analyzing millions of articles and hundreds of millions of citations across four scientific fields and two data platforms. We find that the correlation of the h-index with awards that indicate recognition by the scientific community has substantially declined.

By the h-index, Einstein would have been judged a failure even during his 1905 “miracle year,” when he published four groundbreaking papers: with only a handful of papers to his name, his score would have been tiny no matter how revolutionary the work. For an easy-to-understand account of the fake objectivity and perverse incentives that the h-index produces, see Robert Marks’ article on Mind Matters, “Why It’s So Hard to Reform Peer Review” [*good read*]. Marks, a tenured distinguished professor at Baylor, knows a lot from experience about the h-index and other well-intentioned measures of scientific quality and professorial merit. He introduces Goodhart’s Law and Campbell’s Law (two semi-humorous, Murphy-like laws) that make the same point: any measure of scientific reputation invites abuse. When a measure (“bean-counting”) becomes a target, people find ways to game the system. He gives examples that cast significant doubt on the value of peer review: it sounds good in theory, but in practice it often stinks.

It’s a tough and time-consuming job to evaluate the performance of a professor. It’s easiest to just count beans. But, as I’ve been saying, if we want to minimize the undesirable impact of both Goodhart and Campbell’s laws, those beans must be tasted, chewed, swallowed and digested after they are counted. That’s hard work and it requires much more of a commitment to reform.

Twenty Years on, Aliens Still Cause Global Warming (Jonathan Bartlett at Mind Matters). In this piece [*good read*], Bartlett builds on the famous lecture novelist Michael Crichton gave at Caltech in 2003, “Aliens Cause Global Warming.” Crichton argued that pseudo-empirical measures like the Drake Equation are disguised myths. Crichton also famously declared, “If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.”
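Crichton’s complaint is easy to demonstrate. The Drake Equation multiplies seven factors, most of which cannot be measured, so the “answer” is whatever the guesses make it. A minimal sketch (the parameter values below are illustrative guesses of our own, not anyone’s official figures):

```python
# Drake Equation: N = R* x fp x ne x fl x fi x fc x L
# Every factor past the first two is, at best, a guess.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Two sets of "plausible" guesses:
optimist  = drake(R_star=3, f_p=1.0, n_e=0.2,  f_l=1.0,  f_i=1.0,  f_c=0.2, L=1e9)
pessimist = drake(R_star=1, f_p=0.2, n_e=0.01, f_l=1e-6, f_i=1e-3, f_c=0.1, L=100)

print(f"optimist:  {optimist:.1e}")   # ~1.2e+08 civilizations
print(f"pessimist: {pessimist:.1e}")  # ~2.0e-11 -- effectively zero
```

Roughly nineteen orders of magnitude separate the two answers. An equation built from unmeasurable factors measures nothing but the assumptions fed into it, which was Crichton’s point.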

Bartlett shares examples from the intervening years of how fake science can evolve into political narratives. Dressed up in scientific garb, beliefs with no scientific support can be perpetuated in media narratives, transmogrified by repetition into conventional wisdom. Challenging the prevailing narrative can even set one up for censorship… and in today’s cancel culture, it has.

So, what we have is politically motivated science that is backed up by scientifically motivated censorship. The science pretends to be an unbiased source of truth and the big media companies use such claims as a pretense for removing content of which they disapprove—all the while appearing high-minded. It’s traditional political censorship dressed up to look like concern for “science.”

Mission impossible? A cultural change to support scientific integrity (EMBO Reports). The “mission impossible” these three writers speak of is the attempt to get scientists to behave as they are supposed to: you know, as objective seekers of the truth.

Actively changing the culture of an organization is a considerable task since culture is rarely defined by written rules and established structures. Rather, it is an unwritten consensus, a mutual understanding of how people behave and interact with each other. This informal nature becomes a challenge whenever the need arises to actively change things, for instance, an institution’s research culture in order to foster responsible conduct of research.

They try to reform it. They look at promising sets of guidelines. They examine strategies to get scientists to do better. Who, do you think, is likely to follow their conclusions? Who in management, or in the lab, or at the publishing house, will care?

In conclusion, we aimed to establish a culture of active participation and multiplication by the research integrity scouts. Awareness and competence are now prevalent on all levels of the research teams. The Research Center Borstel is well aware that there is no efficient way to prevent wilful misconduct. But we are convinced that we have achieved a successful transition of the institute’s culture towards transparency, voluntary self-regulation and learning from mistakes. Sharing our experience hopefully encourages other institutes and universities to consider our strategy.

What this implies is that many institutions are not being transparent, self-regulating and learning from mistakes. In fact, as the other articles show, mismeasures of science are creating perverse incentives to maintain bad habits. Telling scientists, “You need to behave!” is not likely to work when the culture in academia rewards sloppiness and cheating.

A call for structured ethics appendices in social science papers (PNAS). The advice in this paper is not likely to fare any better than the “mission impossible” above. Four academics, two from Ghana and two from Northwestern University in Illinois, put forth their own path to reforming research practice in the social sciences.

Ethics in social science experimentation and data collection are often discussed but rarely articulated in writing as part of research outputs. Although papers typically reference human subjects research approvals from relevant institutional review boards, most recognize that such boards do not carry out comprehensive ethical assessments. We propose a structured ethics appendix to provide details on the following: policy equipoise, role of the researcher, potential harms to participants and nonparticipants, conflicts of interest, intellectual freedom, feedback to participants, and foreseeable misuse of research results.

Nice sentiments. Good luck. With scientific misconduct on the rise, how will they give their message any teeth?

Why science needs a new reward and recognition system (Nature). These opinion writers for Nature use the pandemic and lockdowns to paint a picture of inequity. They say that families with children had more stress during the pandemic, and that “gender bias” persists. Some researchers worked well from home, but others had to run experiments in the lab. Whether or not one agrees with the diagnoses and proposed remedies, the article makes a fair point: scientists are people, and their responsibilities outside of academia have a bearing on the quality of the work they are able to perform.

Studying social media can give us insight into human behaviour. It can also give us nonsense (The Conversation). Here’s an example of how research can produce nonsense, like Crichton’s joke about aliens causing global warming. Smartphones seem a perfect way for social scientists to collect data on human behavior. As three professors show, such data can also lead to “absurd science,” like the conclusion that height is contagious.

As researchers continue to analyse social media data and identify factors that shape the evolution of public opinion, hijack our attention, or otherwise explain our behaviour, we should think critically about the methods underlying such findings and reconsider what we can learn from them.

P-hacking, for instance, turns out to be a method of cherry-picking data to support preconceived beliefs: run enough analyses on the same data and, as the sketch below shows, something will cross the significance threshold by chance alone. The authors offer suggestions to prevent fake conclusions, but warn:

These practices are important, but they alone are not sufficient to deal with the problem we identify. While developing standardised research practices is needed, the research community must first think critically about what makes a finding in social media data meaningful.
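To make that concrete, here is a minimal p-hacking simulation in Python (simulated data; the parameter choices are ours):

```python
# P-hacking in miniature: pure noise yields "significant" findings
# if you run enough tests and report only the winners.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_predictors = 50, 100

outcome = rng.normal(size=n_subjects)                     # pure noise
predictors = rng.normal(size=(n_predictors, n_subjects))  # more pure noise

# Test every predictor against the outcome; keep only p < 0.05.
p_values = [pearsonr(predictors[i], outcome)[1] for i in range(n_predictors)]
discoveries = [(i, p) for i, p in enumerate(p_values) if p < 0.05]

# By chance alone, about 5% of 100 tests on pure noise will "succeed."
print(f"{len(discoveries)} spurious 'discoveries':", discoveries[:3])
```

Report only the “winners” and hide the other ninety-odd tests, and pure noise becomes a publishable finding.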

So it is not just content consumers who must think critically. It is the content producers, too.

At a time when Big Science is following the lead of the radical Left (24 June 2021), astute citizens need to understand the extent of the non-empirical biases that shape scientific narratives. Much of what we hear as “accepted truth” and “common knowledge” is neither truth nor knowledge. When radical-left Big Science is in cahoots with radical-left Big Media, and both are in cahoots with Big Government, watch out!

Suggested reading: At Evolution News, see Casey Luskin’s response to the objection that “you can’t measure intelligent design.”

