The Patriot Post® · Invisible Steering: AI, Public Opinion, and Free Speech

By Geoffrey Douglas
https://patriotpost.us/articles/127443-invisible-steering-ai-public-opinion-and-free-speech-2026-05-11

The next wave of speech control may not look like censorship at all. It may be built into the systems we rely on every day.

The first great wave of speech suppression in the digital age arrived on the backs of social media platforms and left a blueprint now in the hands of a far more powerful tool. Beginning around 2020, Facebook, Twitter, YouTube, and their peers removed, labeled, and banned content that challenged the official line on everything from COVID-19 to election integrity, and they did it quickly, broadly, and with little explanation. Some platforms went further, quietly limiting the reach of certain posts without notifying users. Accounts were suspended without appeal. Videos disappeared.

When a social media platform removes a post, you can see that it’s gone. An AI system embedded in a search engine, a news feed, or an encyclopedia doesn’t remove anything. It shapes what gets surfaced, how it gets framed, and where it leads you. The suppression isn’t an act. It’s an architecture, and architecture is far harder to see, let alone contest. The material AI learns from — and who decides what counts as reliable — can bake political bias in from the start.

Take Grokipedia, launched by Elon Musk’s xAI in late 2025 as an alternative to Wikipedia. The pitch is appealing: an AI-curated encyclopedia, free of Wikipedia’s well-known political bias. But what Grokipedia introduces isn’t a correction. It’s a different kind of problem. There’s no editorial board to audit, challenge, or hold accountable. False or misleading information can be presented with the same confident tone as established fact. Wikipedia, for all its flaws, spreads that power across thousands of contributors. Grokipedia hands it to one company. Research has consistently found that even a routine AI summary of a contested topic can tilt readers toward a particular political conclusion without their awareness.

The machinery for large-scale opinion manipulation is already in place. The threat isn’t only that AI can generate false content; it’s that the misinformation it produces isn’t evenly distributed. It tends to cluster around the most politically charged topics, exactly where skewed information can do the most damage: elections, public health, and national security. Leading AI systems show measurable political lean in how they present those same topics.

When AI-generated content looks exactly like a news article — same headline, same byline, same clean layout — most readers assume a human reporter verified the claims. Often none did. That effect is even stronger with AI-generated “fact-checks,” which audiences tend to perceive as especially objective, regardless of whether the calls they’re making are actually neutral. A tool sold as a guardian of truth can become one of the most effective means of distorting it.

Shaping what people believe and preventing them from hearing certain ideas are not the same thing, but they work together. One quietly steers you toward a conclusion. The other makes sure you never encounter the evidence that might lead you somewhere else. AI is the first technology powerful enough to do both simultaneously, at scale, and without leaving fingerprints.

The First Amendment prohibits Congress from abridging the freedom of speech. The Founders crafted it on a specific bet: that open argument, even ugly argument, produces better outcomes than any authority deciding in advance what people should hear. What they couldn’t have anticipated is a world in which speech can be suppressed without any government actor at all, by a technology company, an algorithm, and a set of rules nobody voted on.

During the 2020 election and its aftermath, Twitter, Facebook, and YouTube coordinated with the White House and federal agencies — including the FBI and the Department of Homeland Security — to suppress accounts and limit what Americans could read about COVID’s origins, vaccine safety, and the 2020 election. Internal Twitter records released in 2022, known as the Twitter Files, and a federal lawsuit that reached the Supreme Court, Murthy v. Missouri, exposed the mechanism in detail. Because private platforms, not the government, carried out the removals, courts have not treated the episode as a First Amendment violation. But the effect on public discourse was indistinguishable from a government censorship program. Millions of Americans lost access to entire categories of opinion and evidence without ever knowing it.

AI amplifies that capacity enormously. A large language model, the kind of AI system behind tools like ChatGPT, can be adjusted to push certain ideas down, reframe them, or steer users away from them without ever issuing an outright refusal. These tendencies don’t have to be deliberate. They can settle in without anyone noticing, from the books, articles, and websites the system was fed to the human reviewers whose judgments about which answers were appropriate shaped its responses. The result is suppression that is more pervasive and harder to detect than anything a human editor could achieve. Several recent studies have found that leading AI systems display a consistent left-leaning political orientation across a range of policy questions.

Every citizen should be asking the same question: Who controls the AI systems deciding what you read and believe, and what values are built into them?

A small number of corporations, including Google, Microsoft, Meta, and OpenAI, now dominate AI development. The politics and values of the people who built these systems will be reflected in everything they produce. Google’s internal debates over search rankings and OpenAI’s shifting policies on what its tools will and won’t discuss are a matter of public record. But the systems themselves offer almost no transparency into how sources are chosen, ranked, or left out entirely.

None of this is a reason for paralysis. It is a reason for demands. Citizens should demand to know how these systems are built, what they were trained on, and who is checking them for bias. Legislators need to decide whether free speech protections should cover not just government censorship, but the quiet editorial power of a handful of private algorithms — and whether companies wielding that power should be required to open their systems to independent audit. And every person who gets their information through these systems, which is nearly everyone, needs to start treating AI the way they’d treat a salesman: fluent, confident, and not necessarily on their side.

Free societies don’t remain free by accident. They remain free because enough citizens are willing to think carefully, speak plainly, and refuse to hand over their judgment to any authority, human or machine, that hasn’t earned it. Giving everyone access to the same depth of information once available only to the privileged few, as AI promises to do, is a genuine good. The danger lies in the passivity it encourages and the powerful interests that will exploit it.

Learning to question what AI tells you isn’t just smart. It’s a civic duty.