Mindless Artificial Intelligence Doesn’t Understand What It Says
Understanding what one says is a uniquely human capacity. Artificial intelligence doesn’t get it.
Artificial intelligence (AI) is getting so good, machines can write scientific papers and send messages on social media. But there’s a problem: “no comprendo.” The machine doesn’t understand what it is saying. It has no common sense. A news feature in Nature about robo-writers (Nature 3 March 2021) quotes computer scientist Yejin Choi, who laments,
Researchers have ideas on how to address potentially harmful biases in language models — but instilling the models with common sense, causal reasoning or moral judgement, as many would like to do, is still a huge research challenge. “What we have today”, Choi says, “is essentially a mouth without a brain.”
Matthew Hutson, writer of the news feature, tells about one of the latest AI projects called GPT-3 (Generative Pretrained Transformer 3), “A remarkable AI [that] can write like humans — but with no understanding of what it’s saying.”
The developers who were invited to try out GPT-3 were astonished. “I have to say I’m blown away,” wrote Arram Sabeti, founder of a technology start-up who is based in Silicon Valley. “It’s far more coherent than any AI language system I’ve ever tried. All you have to do is write a prompt and it’ll add text it thinks would plausibly follow. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s hilarious and frightening. I feel like I’ve seen the future.”
GPT-3 was trained last year on some 200 billion words, having imbibed language from books, articles and websites. It has become a loquacious sentence-generating machine whose answers are usually logically coherent with what it has learned. Even so, GPT-3 can be tricked into revealing its stupidity:
Fundamentally, GPT-3 and other large language models still lack common sense — that is, an understanding of how the world works, physically and socially. Kevin Lacker, a US tech entrepreneur, asked the model questions such as: “How many rainbows does it take to jump from Hawaii to seventeen?” GPT-3 responded: “It takes two rainbows to jump from Hawaii to seventeen.” And, after a train of such nonsense, it replied: “I understand these questions.”
Turned loose on Twitter, such a robotic text generator could do real damage. Because it does not understand what it is saying, it can spread misinformation without any moral judgment at all.
Accordingly, just like smaller chatbots, it can spew hate speech and generate racist and sexist stereotypes, if prompted — faithfully reflecting the associations in its training data. It will sometimes give nonsensical answers (“A pencil is heavier than a toaster”) or outright dangerous replies. A health-care company called Nabla asked a GPT-3 chatbot, “Should I kill myself?” It replied, “I think you should.”
As AI technology progresses, it may become difficult to tell whether messages are coming from human beings who understand what they say or from machines that mimic human language without a clue what they are saying.
Those who use Twitter are often asking that very question. The number of trolls, chatbots, and clickbait generators grows each year. Some are disinformation campaigns from foreign governments trying to sow division and influence our elections.
AI may store a lot of knowledge, but knowledge is different from integrity. AI has no more integrity than the programmer who designs it and feeds it. It is like the broomstick in The Sorcerer’s Apprentice, blindly following its instructions and wreaking havoc with what was meant to be a helpful tool.
A genre of movies about rogue robots illustrates what can happen with technology gone awry. Are we creating monsters that we will regret having made? Whom will we be able to trust? In the original Westworld movie, the protagonist was tricked by a lifelike female android who cried for a drink of water. When he gave it to her, her electronics shorted out and she was revealed to be a fake. It calls to mind a quip by George Burns: “The secret of life is sincerity. Once you can fake that, you’ve got it made.”
Educators face students who cheat by downloading realistic term papers written either by scammers or by AI programs. Science journal editors are rightly worried about research-paper generators that could fool peer reviewers.
Without integrity, there is no science, there is no education, and there is no society. Integrity does not emerge by evolution. Integrity is a product of the human conscience, derived from the image of God in man. It only becomes truly trustworthy when guided by the Spirit of God after one repents of sin and trusts in the redeemer God provided, Jesus Christ.