The Patriot Post® · The Dangers of AI

By Brian Mark Weber
https://patriotpost.us/articles/96481-the-dangers-of-ai-2023-04-14

Remember when artificial intelligence was the stuff of science fiction?

Not long ago, we were introduced to ChatGPT, a powerful chatbot designed to replicate human speech and thought. At the time, a reporter for The New York Times boasted that “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public.”

No one doubts the impressive ability of the system to generate ideas at the stroke of a computer key. Immediately, though, red flags went up in the academic world. Suddenly, students could ask their computers to write essays on any topic imaginable. Need a five-pager on a Robert Frost poem? You’ve got it in minutes. How about a history term paper? Not a problem. Even literature magazines had to wonder whether their poetry submissions were the product of creative students or ChatGPT.

But other than plagiarism, what harm could it do?

As this analyst penned in a recent column, there are plenty of concerns about ChatGPT — not the least of which is woke programming that becomes yet another weapon of the Left to silence those with differing viewpoints.

Those worries were, of course, scoffed at by leftists who claimed ChatGPT was unbiased. Yet how could anything programmed by humans be unbiased? It didn’t take long for the conspiracy theories to become conspiracy facts.

It turns out that ChatGPT generated false information about law professor and Fox News contributor Jonathan Turley, essentially accusing him of sexual harassment and citing a fabricated Washington Post article as proof. Turley didn’t even discover the shocking accusation on his own. A fellow professor and lawyer contacted him after having conducted a search for legal scholars who had sexually harassed someone.

Turley “has been outspoken about the pitfalls of artificial intelligence and has publicly expressed concerns with the disinformation dangers of the ChatGPT bot, the latest iteration of the AI chatbot,” reports Fox News.

The New York Post adds: “ChatGPT wasn’t the only bot involved in defaming Turley. This baseless claim was reportedly repeated by Microsoft’s Bing Chatbot — which is powered by the same GPT-4 tech as its OpenAI brethren — per a Washington Post investigation that vindicated the attorney.”

Making matters worse, there was no way for Turley to challenge the baseless claims. After all, there isn’t an editor one can call at ChatGPT to take issue with its work. Once information is generated, it spreads like wildfire and puts the wrongly accused on the defensive.

Even The Washington Post admits, “As largely unregulated artificial intelligence software such as ChatGPT, Microsoft’s Bing and Google’s Bard begins to be incorporated across the web, its propensity to generate potentially damaging falsehoods raises concerns about the spread of misinformation — and novel questions about who’s responsible when chatbots mislead.”

The Post adds: “While their responses often sound authoritative, the models lack reliable mechanisms for verifying the things they say. Users have posted numerous examples of the tools fumbling basic factual questions or even fabricating falsehoods, complete with realistic details and fake citations.”

Yikes.

And Turley isn’t alone. Brian Hood, an Australian regional mayor, has threatened to file the world’s first defamation lawsuit over ChatGPT, which incorrectly claimed that Hood had served time in prison for bribery.

But never fear: The Biden administration is getting involved and plans to implement some ground rules. “In a first step toward potential regulation,” reports The Wall Street Journal, “the Commerce Department on Tuesday put out a formal public request for comment on what it called accountability measures, including whether potentially risky new AI models should go through a certification process before they are released.”

Around the country, many state legislatures are also taking a hard look at AI regulation. Over in Europe, governments are taking steps to rein in the wildly popular chatbot; Italy has already banned ChatGPT over privacy concerns. Meanwhile, in China, the communist government wants to ensure that the service doesn’t undermine the social fabric or its iron grip on power and is thus considering strict controls.

Back here at home, Elon Musk and other AI experts are calling for a six-month pause on developing the most powerful AI systems in order to set safety standards.

This, we think, is a better approach than letting the Biden administration weaponize yet another tech platform, or allowing Congress to politicize and over-regulate something it doesn’t understand.
