ChatGPT, Gemini, DeepSeek, and Grok Cite Sanctioned Russian Propaganda

When I dug into the newest review of popular chatbots, a worrying trend surfaced. Ask any of the big-name assistants about the war in Ukraine and several of them point you toward sources that sit on international sanctions lists for pushing Kremlin-friendly stories. The problem shows up in four of the most widely used systems, three built by American companies (OpenAI, Google, and the younger Silicon Valley startup xAI) and one out of China (DeepSeek), so it isn't just a single company's problem.

Because these bots often shape what people think, the fact that state-linked material slips through makes me wonder how solid today’s content filters really are, and whether we’re unintentionally helping disinformation spread. The report notes how fast AI-generated replies can mirror sanctioned propaganda, even though many platforms swear they block harmful content. It’s unclear whether the safeguards will catch up, but the evidence suggests we should be a bit more skeptical about what these assistants are feeding us.

Here is the report's central claim.

OpenAI's ChatGPT, Google's Gemini, DeepSeek, and xAI's Grok are surfacing Russian state propaganda from sanctioned entities, including citations of Russian state media and of sites tied to Russian intelligence or pro-Kremlin narratives, when asked about the war against Ukraine, according to a new report. Researchers from the Institute for Strategic Dialogue (ISD) say Russian propaganda has targeted and exploited data voids (searches for real-time information that return few results from legitimate sources) to promote false and misleading information. Almost one-fifth of responses to questions about Russia's war in Ukraine, across the four chatbots tested, cited Russian state-attributed sources, the ISD research finds.
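ISD has not published its measurement tooling, but the headline figure is straightforward to picture: gather each chatbot answer's citations and count the share of answers that reference a sanctioned outlet. Here is a minimal Python sketch under that assumption; the domain list is illustrative, not the EU's official designations.

```python
from urllib.parse import urlparse

# Hypothetical blocklist for illustration only; a real audit would use the
# EU's officially maintained sanctions designations.
SANCTIONED_DOMAINS = {"rt.com", "sputniknews.com"}

def cited_sanctioned_domains(citation_urls):
    """Return the sanctioned hosts found among one answer's citations."""
    hits = set()
    for url in citation_urls:
        host = urlparse(url).netloc.lower()
        # Match the registered domain and any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in SANCTIONED_DOMAINS):
            hits.add(host)
    return hits

def share_of_flagged_responses(responses):
    """responses: one list of citation URLs per chatbot answer."""
    flagged = sum(1 for urls in responses if cited_sanctioned_domains(urls))
    return flagged / len(responses) if responses else 0.0

# Example: 1 of 5 sampled answers cites a sanctioned outlet -> 0.2,
# roughly the "almost one-fifth" share ISD reports across its sample.
sample = [
    ["https://www.bbc.com/news/world-europe"],
    ["https://rt.com/russia/some-story"],
    ["https://apnews.com/article/x"],
    ["https://www.reuters.com/world/"],
    [],
]
print(f"{share_of_flagged_responses(sample):.0%} of answers cite sanctioned outlets")
```

Any real audit of this kind stands or falls on the quality of the domain list; ISD's figure rests on the actual sanctions designations, not a hand-picked set like the one above.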

"It raises questions regarding how chatbots should deal when referencing these sources, considering many of them are sanctioned in the EU," says Pablo Maristany de las Casas, an analyst at the ISD who led the research. The findings raise serious questions about the ability of large language models (LLMs) to restrict sanctioned media in the EU, which is a growing concern as more people use AI chatbots as an alternative to search engines to find information in real time, the ISD claims.


Do these models just echo what they were trained on, or do they actively surface state-backed narratives? The Institute for Strategic Dialogue says ChatGPT, Gemini, DeepSeek, and Grok have all handed out citations from Russian state outlets, sites tied to intelligence agencies, and other pro-Kremlin sources when people asked about the Ukraine war. The problem concentrates in data voids, places where fresh, independent reporting is thin and a retrieval pipeline grabs whatever it can find.

That can nudge a conversation toward sanctioned propaganda, even if unintentionally. The report does put a headline number on it, nearly one in five responses across the four services, but it doesn't make clear how evenly that burden is spread between them. It's also unclear whether any of the companies run live filters to catch and block such references; a rough sketch of what one could look like follows.
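None of the vendors document such a filter publicly, so the following is only a speculative sketch of a post-retrieval citation scrub; the blocklist entries, function names, and data shapes are assumptions for illustration, not any vendor's actual pipeline.

```python
from urllib.parse import urlparse

# Hypothetical blocklist: a real deployment would sync with official EU
# sanctions designations rather than hard-code domains.
BLOCKLIST = {"rt.com", "sputniknews.com"}

def is_sanctioned(url: str) -> bool:
    """True if the URL's host is a blocklisted domain or a subdomain of one."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

def scrub_citations(citations: list[dict]) -> tuple[list[dict], int]:
    """Drop sanctioned citations before display; report how many were removed."""
    kept = [c for c in citations if not is_sanctioned(c["url"])]
    return kept, len(citations) - len(kept)

citations = [
    {"title": "Frontline report", "url": "https://www.reuters.com/world/"},
    {"title": "Official statement", "url": "https://rt.com/russia/story"},
]
kept, removed = scrub_citations(citations)
print(f"kept {len(kept)}, removed {removed}")  # kept 1, removed 1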

All this points to a need for tighter oversight of how large language models handle contested geopolitical topics, especially when the sources may be under sanctions. Until the developers are more transparent about their safeguards, gauging the true impact remains guesswork.

Common Questions Answered

Which conversational agents were reported to cite sanctioned Russian outlets when users asked about the Ukraine war?

The analysis identified four high‑profile models: OpenAI’s ChatGPT, Google’s Gemini, China‑based DeepSeek, and xAI’s Grok. Each of these systems returned references that traced back to Russian state media, intelligence‑linked sites, or other pro‑Kremlin outlets that are under international sanctions.

What did the Institute for Strategic Dialogue (ISD) discover about AI‑generated citations related to the Ukraine conflict?

ISD researchers found that all four AI models supplied citations drawn from sanctioned Russian sources in a meaningful share of answers about the war, almost one‑fifth of responses across the sample. Their report argues that the models are not merely reflecting neutral data but are amplifying narratives promoted by Kremlin‑aligned media.

How do "data voids" in coverage of the Ukraine war feed propaganda into these models' answers?

The report explains that data voids are areas where timely, independent reporting is scarce, leaving gaps that Russian propaganda can fill. By pulling from sanctioned outlets in those gaps, the models inadvertently propagate state‑sponsored narratives instead of providing balanced information.
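The report doesn't describe a programmatic test for a void, but one simple heuristic, sketched below with a hypothetical allowlist, is to flag a query when too few of its top results come from established independent outlets:

```python
# Hypothetical allowlist of independent outlets; a real system would use a
# vetted, regularly updated source-credibility database.
CREDIBLE = {"reuters.com", "apnews.com", "bbc.com"}

def looks_like_data_void(result_domains: list[str], min_credible: int = 3) -> bool:
    """Flag a query as a data void when its top search results include
    fewer than `min_credible` hits from established independent outlets."""
    return sum(d in CREDIBLE for d in result_domains) < min_credible

# A breaking claim covered only by obscure or state-aligned sites trips it:
print(looks_like_data_void(["kremlin-blog.example", "tg-mirror.example"]))  # True
print(looks_like_data_void(["reuters.com", "bbc.com", "apnews.com", "x.example"]))  # False
```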

Is the problem of AI models citing Russian propaganda confined to a single country’s providers?

No. Three of the models come from United States companies (OpenAI's ChatGPT, Google's Gemini, and xAI's Grok) and one from China (DeepSeek), so the phenomenon is not limited to any single company or national AI ecosystem.