Trusting Science Is Not the Same as Critical Thinking
Experiments with participants given fake science show that those who “trust science” can be gullible
Like most psychologists in academia, those at the University of Pennsylvania tend to promote the consensus on matters like vaccination and mask-wearing. But they also realize that trusting science is not enough. Without critical thinking, people who claim to “trust science” can be gullible. And when they repeat false stories, their overly trusting nature helps spread pseudoscience.
Two psychologists at UPenn and another from the University of Illinois decided to test this by feeding fake science to groups of people. One fictitious story claimed a virus had been created as a bioweapon. The other made a claim about the effect of genetically modified organisms on tumors.
The invented stories contained either references to scientific concepts and to scientists who claimed to have done research on the topic, or descriptions from people identified as activists. Participants in each experiment, ranging from 382 to 605 people, were randomly assigned to read either the scientific or the non-scientific versions of the stories.
The researchers found that among people with little trust in science, the presence of scientific content in a story had no significant effect. But people with higher levels of trust in science were more likely to believe the stories containing scientific content, and more likely to disseminate them.
A further experiment asked participants to role-play either a “trust in science” or a “critical evaluation” mindset. Those in the latter group appeared to be less gullible about the fake stories.
According to the press release, the three authors believe that they have learned something important about the need for people to balance trust in science with critical evaluation of scientific claims.
The lead author, postdoctoral researcher Thomas C. O’Brien of the University of Illinois at Urbana-Champaign, added, “Although trust in science has important societal benefits, it is not a panacea that will protect people against misinformation. Spreaders of misinformation commonly reference science. Science communication cannot simply urge people to trust anything that references science, and instead should encourage people to learn about scientific methods and ways to critically engage with issues that involve scientific content.”
The published paper is by O’Brien et al., “Misplaced trust: When trust in science fosters belief in pseudoscience and the benefits of critical evaluation,” Journal of Experimental Social Psychology, Volume 96, September 2021, article 104184.
Well, we hope you had your baloney detectors turned on, because the joke is on the psychologists! While we would agree that critical thinking is essential to avoid being snookered by scientific claims, and that understanding scientific methods and the validity of data behind claims is also good insurance against pseudoscience, look at the pseudoscience in this project!
- How does one measure “trust in science”? What are the units?
- How does one measure “critical evaluation”? What is the measuring stick?
- How do they know participants were telling the truth in their answers?
- Why didn’t they control for age, sex, political party, internet use and education?
- Did they control for their own political biases going into the experiment?
- Did they control for their own biases about philosophy of science?
- How does one objectively define misinformation without bias?
- Did they control for manipulation by the way the questions were asked?
- Since they performed the study online, did they control for spammers, trolls and international spies?
- On what basis do they assume that “scientific methods” in psychology are comparable with those in physics?
- Why do they think it was OK to lie to people for science? (see 15 March 2017)
It appears that this project did nothing more than to confirm the psychologists’ opinions at the outset. Even if readers agree with their conclusions, the methods they used cannot measure the effects they claim to have measured. (see 6 August 2019)
The right conclusion for CEH readers is that critical evaluation should be applied to these UPenn psychobabblers and their pseudoscientific paper. Remember that the “science” of psychology is in total meltdown over the reproducibility crisis (30 Aug 2018, 4 April 2017, 8 Feb 2016). Though dressed up in sciency garb, their conclusions are matters of subjective opinion, not science.