Big Science Losing Public Trust
Scientists themselves are warning that the scientific community has lost a great deal of public trust, and for good reasons.
Stefan Pfenninger let loose with both barrels in a “world view” column in Nature this week. In the column, titled “Energy scientists must show their workings,” he excoriates his colleagues in the climate science community for lack of transparency. They must open up their research, he argues, if they expect the public to trust them when that research is used to set policy. He gives one example of an inscrutable modeling tool called NEMS, but launches from that into deeper issues:
At least NEMS (National Energy Modelling System) is publicly available. Most assumptions, systems, models and data used to set energy policy are not. These black-box simulations cannot be verified, discussed or challenged. This is bad for science, bad for the public and spreads distrust. Energy research needs to catch up with the open-software and open-data movements. We energy researchers should make our computer programs and data freely accessible, and academic publishing should shun us until we do.
Pfenninger dismisses the typical excuses for secrecy (business confidentiality, etc.). He thinks the public has a right to know what modelers are telling the government, and what it’s based on. How can anyone trust secretive wizards behind curtains? He gives examples of past mistakes made when the public was unable to look into the modelers’ black boxes. “This closed culture is alien to younger researchers, who grew up with collaborative online tools and share code and data,” he says. Yet that closed culture is exacerbated by academia’s publish-or-perish incentives.
He ends with comments about the difference between ‘truth’ and ‘trust’ and which matters more in policy decisions:
A change in journal policies would help to kick-start these discussions. In policy-focused research, where one ‘truth’ does not exist, one cannot assess whether a modelled scenario is ‘correct’, so the important yardstick is not truth, but trust. The arrival of the post-truth world shows that trust in experts is lower than ever — and surely this is partly the experts’ fault.
This interesting remark can shed light on the phrase ‘alternative facts’ used by presidential adviser Kellyanne Conway, which unleashed howls of rebuke from scientists and reporters. Pfenninger basically agrees with her. Conway was not denying the existence of truth, as she and White House press secretary Sean Spicer explained afterward, but was pointing out that the sources of facts in which to trust can differ: different polling organizations, for instance, can give different estimates. Which one is ‘the truth’? It’s not always possible to know. Pfenninger’s point is that the only way to increase trust is to decrease secrecy about how the numbers were arrived at.
One academic, Peter Neal Peregrine, unfairly took media reports about Conway’s statement at face value. In The Conversation, he accused her of relying on the “argument from authority” for truth. If she had only followed the lessons of the Enlightenment, he argues, she wouldn’t have accelerated the decline into our post-truth world. But should we take Peregrine’s word on authority? Joel Pollak at Breitbart defends Conway’s use of the phrase as a common and benign legal term whereby each side presents its version of the facts for the court (or the public) to decide. Peregrine, mocking the supposed ‘fake news’ about crowd size, also fails to differentiate between the number of viewers present in Washington, DC for the inauguration and those viewing online, which could indeed have set a new record. He ignores questions of when the photos were taken and what factors besides popularity influenced crowd size. This is all beside the point, but it does tend to undermine our trust in Peregrine (an anthropologist at Lawrence University) as an arbiter of truth. Our trust is further undermined when he takes a swipe at creationists in his piece, yet in the same article presents de facto intelligent design arguments to show how archaeologists distinguish between ‘natural’ and ‘human’ objects. [This commentary is an aside from the main topic of this entry about scientific trust.]
Can we trust climate researchers’ findings? It’s more complicated than we are led to believe. On Phys.org, Blake Francis shares some of the complex issues between measurement and policy. He leans on the side of trust in what scientists say, but raises questions:
“As natural scientists, we know a lot about what controls the climate and what kind of impacts we’re likely to see in the future,” said [Chris] Field, a professor of biology and of Earth system science and a member of Francis’ dissertation committee. “But increasingly the important questions are human ones. What will people decide is important regarding climate change? Natural science can’t speak to those issues and philosophy can.”
But can natural science even “know a lot” on this topic? Here at CEH (where we do not take a position on anthropogenic climate change), we routinely find scientific papers uncovering new facts that weren’t factored into previous climate models. Here are three examples from just this week:
- Nature Scientific Reports shared discoveries of “widespread methane seepage” off the coast of Norway that, they say, is not caused by anthropogenic global warming but has probably existed for thousands of years. Methane is 28 times more potent as a greenhouse gas than carbon dioxide.
- Nature Communications announces “Massive production of abiotic methane during subduction,” clearly not due to humans. They conclude, “These studies, together with our data, suggest that deep abiotic methanogenesis (and possibly other types of deep hydrocarbons) by high-pressure serpentinization may be a more common process than previously thought, with potential implications for geo-astrobiological (for example, Archean, Mars) detection and search of both abiotic and biotic C compounds.” Undoubtedly this “massive production” of methane—a potent greenhouse gas—has implications for climate models as well.
- Nature Communications reports on rapid cooling in the North Atlantic in the 1970s. There’s no way to know if it will happen again. Observations show no long-term warming in that region, either. Most models don’t account for it. “Thus, due to systematic model biases, the CMIP5 ensemble [a model comparison project] as a whole underestimates the chance of future abrupt SPG cooling, entailing crucial implications for observation and adaptation policy.”
Our issue with climate change, therefore, is philosophical: we question whether it is even possible for fallible humans who don’t know all the factors to come up with a trustworthy consensus about the ‘fact’ of human-caused global warming. No wonder the public is skeptical of scientists’ confident claims about climate change.
It’s not just energy research and climate science where trust is falling. The BBC News reports, “Most scientists ‘can’t replicate studies by their peers.’” Dr. Tim Errington, who has been investigating the ‘reproducibility crisis’ in science (see 9/05/15), remarked about his latest reproducibility test, “It’s worrying because replication is supposed to be a hallmark of scientific integrity.” Face it: without integrity, science (or any other human enterprise) does not deserve anyone’s trust. In a commentary piece in Nature, Jeffrey S. Mogil and Malcolm R. Macleod propose a radical change of direction in scientific practice: “No publication without confirmation.” What does that imply? Publications are going forward without confirmation: in other words, sometimes journals are publishing ‘fake science’ that cannot be trusted.
We believe that this requirement would push researchers to be more sceptical of their own work. Instead of striving to convince reviewers and editors to publish a paper in prestigious outlets, they would be questioning whether their hypotheses could stand up in a large, confirmatory animal study.
Getting back to the BBC article, reporter Tom Feilden says that journals sometimes tidy up data “to present a much clearer, more robust outcome.” In this way, the published literature gives a “highly curated” and “rose tinted” view of the evidence. Is this rare or widespread? Apparently it’s rigged all over: “The way the system is set up encourages less than optimal outcomes.” That’s wording it kindly. Feilden quotes the editorial director at Nature admitting, “It’s a big problem.” It goes to the heart of the scientific process, he concedes. So whom can you trust if not scientists? Look at these worrying statements at the end of the article. Talk about ‘alternative facts’—
“Without efforts to reproduce the findings of others, we don’t know if the facts out there actually represent what’s happening in biology or not.”
Without knowing whether the published scientific literature is built on solid foundations or sand, he argues, we’re wasting both time and money.
“It could be that we would be much further forward in terms of developing new cures and treatments. It’s a regrettable situation, but I’m afraid that’s the situation we find ourselves in.”
A certain teacher said something about building on sand.
Out of the Echo Chamber into Reality
This teacher also coined the phrase, “Physician, heal yourself.” Can we expect scientists to operate on their own brain tumors? Let’s read the prescription from another academic, a “communications expert” at the University of Wisconsin named Dominique Brossard. She’s going to tell scientists how to respond to ‘fake news.’ Initially, she tells those who think fake news started with the current election cycle that no, “fake news about science has always existed.” She defines fake news as “using false information, with the goal of sharing it as real news to influence people.” That’s short for ‘Big Lie.’ In the modern world of social media, it’s key to “get science right from the start,” she advises.
In her recent address to the AAAS, Brossard gave three tips for scientists dealing with members of the public confronted with stories about cures for Alzheimer’s, whether vaccines cause autism, or whether caffeine causes cancer. (1) Scientists need to exit their echo chambers, talk to journalists, and find common ground with non-scientists. She warns that simply pronouncing scientific facts can make people double down on their beliefs. (2) Scientists need ‘brand control’ like Coca-Cola, including damage control when social media misrepresents scientific findings. (3) Retracted papers should be removed from search engines. In conclusion, she says,
“There is not a clear dichotomy between fake news and real news,” she says. “Scientists should engage in communicating their work and realize it’s not ‘us versus them, the public.’ They need to be aware of the consequences of what they say and take into account what we know about science communication. They shouldn’t shy away.”
Her statements seem wise, but are incomplete. She is assuming scientists have the truth and their only problem is communicating it to the ignorant masses who want to misinterpret it. She is, after all, a “communications expert” – presumably focused on delivery, not content. Yet the stories reported above cast doubt on the trustworthiness of scientists’ content. What if scientists have become purveyors of fake news? In that case, communication becomes irrelevant. The public would be morally right to fight back with truth from what they consider more reliable sources.
Let’s see if the Editors of Nature can get out of the echo chamber. In this week’s Editorial, “Researchers should reach beyond the science bubble,” they display rare penitence for having been out of touch with the public. “Scientists in the United States and elsewhere ought to address the needs and employment prospects of taxpayers who have seen little benefit from scientific advances,” they begin. At a recent AAAS convention, it was clear the science bubble hadn’t popped yet. Most attendees, they observed, had a knee-jerk reaction to the Trump administration, vowing to “Stand up for science” as if the news were going to be all bad.
But it’s the wrong question. It is not Trump that scientists must respond to. The real question is what science can do for the people who voted for him. Exactly who did support him, and why, is still being debated by political scientists, but it’s clear that many of those who voted Trump are those he canvassed in his campaign and credited in his inauguration speech. It is people who feel left behind by supposed progress and who have suffered a real or perceived collapse in their quality of life.
Do scientists bear any of the blame for this? After all, by common assumption, Obama was a pro-science President for the last eight years, yet left behind a disenchanted public. Listen to the Editors describe science’s “story” as a fairy tale:
Just telling the same old stories won’t cut it. The most seductive of these stories — and certainly the one that scientists like to tell themselves and each other — is the simple narrative that investment in research feeds innovation and promotes economic growth. ‘It’s the economy, stupid’, so the saying goes, and as nations become a little less stupid by pushing against the frontiers of knowledge, so the benefits of all this new insight spread from the laboratory to the wider population, as improvements in the standard of living and quality of life.
This comfortable story has all the hallmarks of a bubble waiting to pop. For a start, it always has a happy ending. The hero of various quests, science slays the dragon of childhood disease and retrieves the elixir, if not of everlasting life, then at least of increased lifespan. And, like all good stories, this one comes with a pleasing twist: for when it sets off on its quest, science does not know exactly which good deeds it is planning to perform. Pure of heart and research, it is merely enough to send our science hero out into the world, with its consumables, overheads and a postgraduate squire paid for by donations from a grateful and trusting public.
Like any fairy tale, “This story is truthful enough to have sustained itself for many decades,” they continue. Sometimes the story comes true, as when Einstein’s theory of relativity (an apparently impractical exercise of pure research) bore fruit in GPS technology. By implication, though, such benefits are rare. The agendas of scientists have not generally trickled down to the public good. Often they serve the rich. The “benefits of discovery science arguably deepen the pools of wealth and privilege already in place,” they lament. So what to do when the bubble pops and everyone sees that the hero of science’s narrative is a selfish nerd?
It is right that more scientists should tell stories of the good their research can do. But it is more important and urgent than ever that researchers should question how these stories really end — and whether too many of the people they claim to act for don’t really get to live happily ever after. Equally, they should focus more effort on how science education and scientific research can help the many whose jobs are going to be displaced by the very inventions that scientists are producing.
Amusingly, the Editors think we should get people jobs in climate change mitigation. But they also suggest getting the hay down where the cows can eat it: engaging more with social science to find out what people want and need, such as water quality, soil quality, and health care for the elderly.
By implication, that’s not happening now.
It’s nice to see scientists concerned about truth and integrity. We applaud the editors of Nature and the “communications expert” for at least peering outside the bubble. We’re glad they stopped talking long enough to listen for sounds outside the echo chamber. But they do not go far enough, because they still presume the reliability of science. They need to take History of Science 101 again, and along with it, Philosophy of Science 101, Rhetoric of Science 101, and Sociology of Science 101. We’re not at all endorsing postmodernism here, but pointing out that scientism and positivism are intensely vulnerable to criticism. Don’t these people know that? Science is always mediated by humans. Humans are fallible, and subject to a welter of non-rational influences. Those influences can be moderated by the need to produce goods and services that serve other people, as in free market economics (see Prager University video on this).
For this reason, rather than living on the public dole their whole careers, scientists should be required to take a sabbatical once a decade and go to work as entrepreneurs, or work a cattle ranch shoveling manure and feeding animals in the rain, or cook at a restaurant — something to take them out of their bubble and get them to mix with real folk who have to serve others for a living. We’re not saying that being a scientist is not hard work. It’s just a different kind of work, carried on inside an academic bubble where one hears only sounds bouncing off the bubble walls. There’s a real world out there. There are real people with needs. If scientists can take their Yoda hats off for a while and see themselves on the same level with other members of the human race, maybe they would come back with motivation to gear their science to serve the common good more directly. Better yet, visit a church, hear the gospel, and get saved. They would gain a new redemptive view of the purpose of science (e.g., Psalm 104, Psalm 111, Psalm 139).
Live Science still doesn’t get it at all. Once again shooting herself in the foot, Stephanie Pappas asks, “Are humans inherently selfish?” Well if they are (as she argues from evolutionary theory, hypocritically pointing a Yoda finger at Donald Trump), then we can dismiss everything she says, because she is selfish. After shallow perusals of philosophers Socrates, Hobbes and Locke, she asks “What does the science say?” (Here it comes.) Positivism. Scientism. Evolutionary psychology. Evolutionary game theory. Biological determinism.
See if you can parse the errors in this example of her “social strategies” argument:
Both cooperation and selfishness may be important behaviors, meaning that species may be most successful if they have some individuals that exhibit each behavior, Weissing told Live Science. In follow-up experiments that have not yet been published, he and his colleagues have found that in some economic games, mixed groups perform far better than groups made up only of conformists or only of those who look out for themselves.
So if cooperation and selfishness are morally equivalent strategies, then take your pick. Be selfish! Actually, according to her own evolutionary beliefs, your genes determine that for you. If these people do not (or will not) learn to avoid the fallacy of self-refutation, they are hopeless. Their words cannot be trusted. It would never enter these people’s minds to take a theological argument seriously. In their mind’s eye, they are Yoda talking down to selfish pawns in seminary. Meanwhile, the theologians in seminary are praying for their salvation, that God would turn them from darkness to light.
For a good inoculation against Live Science’s kind of evolutionary nonsense, read Tom Bethell’s excellent historical/philosophical account of Sociobiology in Darwin’s House of Cards (chapter 17). You’ll learn the terminology of kin selection, group selection, and other controversial abstractions of natural selection. You’ll learn about the vitriolic battles involving E. O. Wilson, Dawkins, Lewontin, Tooby, Cosmides, Gould, Trivers, Coyne and other players calling each other’s views illogical and groundless, or betrayals of Darwin’s vision. It’s really rather funny. But let’s not laugh at them. Let’s pray for them. God still can turn a Saul into a Paul.