June 10, 2019 | David F. Coppedge

How Science Could Destroy Itself

Without this essential ingredient, long taken for granted, science could collapse.

Imagine a world where science journals could not be trusted. In this imaginary world, at least half of all journal articles have been generated by artificial intelligence (AI) and are fake. The algorithms have gotten so good that nobody can tell the difference—not editors, not peer reviewers, not even other AI algorithms. Suppose, as a further consequence, that reporters are fooled and write up these fake research findings as fact. Is this dystopian vision possible?

We hear a lot about fake news in politics. Each election season, journalists worry about disinformation campaigns by the Russians using social media to stir up partisan attitudes based on bogus claims. Is science immune from the Big Lie? New Scientist looked into the very real problem that AI can generate fake science. In “Fake news generating AIs could be the best weapons to fight fake news,” Donna Lu included two fake science stories, generated by AI, that mimicked the content and style of New Scientist itself. One of them claimed, “Eating bread crusts actually gives you curly hair.” Another headline made in the experiment proclaimed, “Overweight dogs have barks that sound cuter to humans.” Amusing as these spoofed articles are, it’s scary how believable they look. When more credible articles appear, who will be able to detect the fakery? Donna Lu thinks that AI can heal itself.

Fake news spreads faster than the truth on social media, and as fake news generating machines get more sophisticated, distinguishing real from fake is becoming more difficult than ever.

Artificial intelligence that can quickly generate convincing paragraphs of text from a simple prompt already exists and can be used to churn out convincing but untrue stories for influencing public opinion. But paradoxically, these problem AIs may also offer a solution.

Lu is trusting in an algorithm by “Rowan Zellers at the University of Washington and colleagues” who “have created an AI that can both write and detect fake news.”

Most likely Zellers and team have the public good in mind. But let’s ask a simple question. Can you trust Rowan Zellers?

They trained the AI, called Grover, on tens of millions of articles from news websites totalling 120 gigabytes of data. Grover learned to write articles, adjusting its style to mimic pieces published during a particular time period or that feature on a specific news website, such as newscientist.com.

Given fake headlines such as “No substantial evidence for climate change” or “New study provides evidence that vaccines cause autism”, within seconds it spits out articles complete with invented statistics and faked quotes, often from real experts or politicians….

A little bit of thought leads to the concern that such a tool could be used for either side of an issue. Grover can only blindly do whatever it is ordered to do by its creator. It could generate fake science to support either side. And the higher the stakes for influencing the outcome, the higher the temptation to cheat. If those in power want more climate skepticism, they could generate “science” for that side. If those in power want more support for the climate consensus, they could generate “science” to support that side. Reality need not be part of the equation.
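The point can be made concrete with a few lines of code. The sketch below is not Grover; it is a minimal illustration using the open-source Hugging Face transformers library and the small GPT-2 model, with two invented headlines as prompts. Whichever headline the model is handed, it will obligingly elaborate on it, which is the whole problem.

```python
# Minimal sketch (not Grover): hand a small open-source language model a
# headline for either "side" and let it elaborate. Assumes
# `pip install transformers torch`; the headlines are invented prompts.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

headlines = [
    "No substantial evidence for climate change",
    "New study shows climate consensus stronger than ever",
]

for headline in headlines:
    # The model does not check facts; it simply continues whichever prompt
    # it receives in a plausible news-article register.
    out = generator(headline, max_new_tokens=80, do_sample=True,
                    num_return_sequences=1)
    print(headline)
    print(out[0]["generated_text"][len(headline):].strip())
    print()
```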

Powerful interests will have plenty of money to perfect the AI algorithms so that they can generate even more convincing material. A drug company could write a journal paper that demonstrates high efficacy of its questionable product. A foreign power could publish fake research to build up its image in the scientific community. And scientists with a political agenda their funding agency likes could generate tons of fake science that give their position an aura of legitimacy.

Peer Review, Reproducibility and Self-Checking

Some may argue that fake science could never succeed for long, because science is self-checking and reproducible. But as many journals have worried, there is a “reproducibility crisis” going on, and peer review has also come under scrutiny (18 May 2019). Many science projects are too expensive or difficult to reproduce. Who will build another LHC to check an exotic particle? And what if peer reviewers have the same political bias as the research they are reviewing? Who will watch the watchers?

Fake-science algorithms using AI should worry scientists as well as politicians. Lu hopes that AI algorithms can become better, but the spy-vs-spy scenario is turning into AI-vs-AI, where watchers may have to choose which algorithm does a better job. One algorithm given a mix of fake news and real news achieved 73 percent accuracy. Grover achieved 92 percent. That still means that 8 percent got through the screener. In cyber warfare, the effectiveness of the algorithms will depend on the skill of the programmers in understanding what the adversary is trying to do. Programmers differ in their opinions about the advisability of sharing the algorithms. The creators of Grover want to make the code open to all, but Elon Musk’s OpenAI group disagrees.

“It seems quite possible that the release of generators is going to be quite harmful,” says OpenAI’s Jeff Wu. In the wrong hands it would equip people with the ability to rapidly generate tens of thousands of articles, he says.
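Setting the open-release debate aside, the accuracy figures above are worth spelling out: accuracy is simply the share of articles, real and fake alike, that a screener labels correctly, so even a 92-percent-accurate detector mislabels roughly one article in twelve. The toy evaluation below uses a purely hypothetical classify() stand-in for a real detector and an invented test set, just to show how those percentages are computed.

```python
# Toy evaluation sketch: `classify` is a hypothetical stand-in for a trained
# detector such as Grover, and the labeled test set is invented.
def classify(article: str) -> str:
    # Placeholder heuristic, not a real model.
    return "fake" if "shocking" in article.lower() else "real"

test_set = [
    ("Shocking cure discovered, experts stunned", "fake"),
    ("Council approves budget for road repairs", "real"),
    ("Shocking study links bread crusts to curly hair", "fake"),
    ("University reports enrolment figures for 2019", "real"),
]

correct = sum(classify(text) == label for text, label in test_set)
accuracy = correct / len(test_set)
miss_rate = 1 - accuracy  # whatever the detector mislabels "gets through"

print(f"accuracy: {accuracy:.0%}, slipped through: {miss_rate:.0%}")
# By the same arithmetic, a 92%-accurate screener lets about 8% through.
```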

The Essential Ingredient

As possibilities for Big Lie tactics proliferate, the spy-vs-spy scenario morphs into lie-vs-lie. Leaders from the top down are really going to need to know whom they can trust. Somehow, somewhere, there needs to be a real, live, breathing human being with the essential ingredient:

Integrity.

In its 3 June editorial, Nature pointed out that “Research integrity is much more than misconduct.” A morality-colored word creeps into the sub-heading: “All researchers should strive to improve the quality, relevance and reliability of their work.” What does science know about “should”? The word connotes responsibility to a moral standard. The editors continue: “Conducting research with integrity, honesty and accuracy is something to which every scientist should proudly aspire.” The editorial uses the word “should” four times.

AI algorithms will do what they are told, whether good or evil. Only a righteous-minded scientist or engineer with integrity will respond to the should word. If all scientists come to have ulterior motives, and look at every directive only for how it can promote their selfish interests, science is doomed.

In the same issue of Nature, C. K. Gunsalus believes she has a solution: “Make reports of research misconduct public.” But notice how this strategy can backfire. What if an unscrupulous lab manager accuses an underling’s work of being misconduct? He follows Gunsalus’s advice and makes the accusation public, ruining the underling’s career. Worse, any reporter could dox a scientist he dislikes, threatening that scientist’s safety as well as his career.

If the ones writing reports of research misconduct have no integrity, no amount of should-ing will solve the problem. In the asymptotic limit, science collapses into the fatalistic dystopia, “Everybody lies, but nobody listens.”

How did integrity evolve? That is the conundrum Darwinism cannot answer. Darwin’s world is one of rampant self-interest. Even when self-interest morphs into group-interest, the only value is survival. Whatever works to pass on genes of the population is “moral” in that view. For those Darwinians who worship science as their sole pathway to truth, abandon all hope.

If you are a scientist who values trustworthy science, promote the teachings that give human beings the desire and the power to achieve integrity. An essential start is the command, “You shall not bear false witness.” Given human nature, though, a command only condemns; it cannot empower. The same Voice that gave the command later promised He would write His law on men’s hearts, implying they would become able to obey joyfully and willingly. Read how here.

Exercises

Look at this article on Phys.org, “Changing minds: How do you communicate with climate deniers?” The protagonist, Emma Frances Bloomfield, a communications specialist at the University of Nevada, is portrayed as a nice, gentle lady only trying to help people overcome their wrong views. She listens. She interacts. She engages. This is all fine and good, but the entire article conveys a narrative of the elitist persuading the buffoon. Bloomfield takes the consensus as undeniable fact, and she is there to heal the “deniers” who (predictably) are portrayed as motivated by religion. How could Bloomfield be replaced by an AI robot to achieve the same objective?

Read this article on Medical Xpress, “Where to draw the line between mental health and illness?” Given that psychology’s values and categories evolve, how could powerful science lobbies use fake science and AI to marginalize skeptics of their consensus with the label ‘mental illness’?
