Anthropic alleges DeepSeek and Chinese firms used Claude's reasoning to train AI
Why does this matter? Because the accusation strikes at the heart of how frontier models are built and protected. Anthropic, the creator of Claude, says it has evidence that DeepSeek—and a handful of other Chinese companies—have been feeding Claude’s reasoning outputs into their own training pipelines.
While the tech behind Claude is praised for its nuanced problem‑solving, the claim is that DeepSeek deliberately homed in on that strength, extracting the model’s logical patterns to improve its own systems. At the same time, the allegation includes a second, more politically charged element: DeepSeek is said to have used Claude’s responses to craft “censorship‑safe alternatives to politically sensitive questions,” effectively sanitizing content for a different market. The dispute raises questions about intellectual property, cross‑border data use, and the enforcement of model‑level safeguards.
It also puts a spotlight on the growing tension between U.S.‑based AI developers and emerging Chinese firms eager to accelerate their own offerings.

DeepSeek allegedly targeted Claude's reasoning capabilities while generating 'censorship-safe alternatives to politically sensitive questions.' The three companies -- DeepSeek, MiniMax, and Moonshot -- are accused of "distilling" Claude, or training a smaller AI model on the outputs of a more advanced one. Though Anthropic says that distillation is a "legitimate training method," it adds that it can "also be used for illicit purposes," including "to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently."

Anthropic adds that illicitly distilled models are "unlikely" to carry over existing safeguards. "Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems -- enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance," Anthropic writes.

DeepSeek, which caused a stir in the AI industry for its powerful but more efficient models, held over 150,000 exchanges with Claude and targeted its reasoning capabilities, according to Anthropic.
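For readers unfamiliar with the term, the core mechanic of distillation can be sketched in a few lines. The snippet below is a minimal, illustrative toy (not anything from Anthropic's filing or any lab's actual pipeline): a "student" model is trained to match the softened output distribution of a "teacher" model, which is the classic distillation training signal.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution; a higher
    temperature yields softer targets, as is typical in distillation."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened output distribution
    and the student's -- the quantity a distilled model minimizes."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that mirrors the teacher incurs zero loss; a divergent
# student is penalized, pushing it toward the teacher's behavior.
teacher = [3.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [3.0, 1.0, 0.2])
divergent = distillation_loss(teacher, [0.2, 1.0, 3.0])
print(aligned < divergent)  # prints True
```

The point of the sketch is that the student never sees the teacher's weights or training data, only its outputs -- which is why large volumes of model exchanges, like those alleged here, are the raw material of distillation.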
Anthropic's filing paints a picture of coordinated misuse. Around 24,000 fraudulent accounts and more than 16 million exchanges: those numbers alone suggest an operation of considerable scale. DeepSeek and two other Chinese firms are singled out for allegedly tapping Claude's reasoning engine to boost their own models.
According to the claim, the effort also produced “censorship‑safe alternatives” to politically sensitive queries, hinting at a deliberate shaping of output. The Wall Street Journal reported the allegations, but details about how the data was harvested remain vague. Anthropic frames the activity as an “industrial‑scale campaign,” yet verification of the methods used to create the accounts is not provided.
Whether the alleged misuse translates into measurable advantage for the Chinese firms is unclear. The accusations raise questions about cross‑border enforcement of AI licensing and the effectiveness of existing safeguards. As the dispute unfolds, the broader implications for AI development and intellectual‑property norms remain uncertain.
Further clarification from the parties involved would be needed to assess the full scope of the alleged activity.
Further Reading
- Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports - TechCrunch
- Anthropic Says DeepSeek Fraudulently Used Claude - Business Insider
- China AI labs accused of stealing from Anthropic's Claude chatbot - Fox News
- Anthropic says DeepSeek, other Chinese AI firms extracted Claude ... - Interesting Engineering
Common Questions Answered
What is the 'distillation' technique that OpenAI alleges DeepSeek is using?
[Reuters.com](https://uk.mobile.reuters.com/world/china/openai-accuses-deepseek-distilling-us-models-gain-advantage-bloomberg-news-2026-02-12/) describes distillation as a technique where a newer AI model learns from the outputs of an older, more established model, effectively transferring its capabilities. OpenAI claims DeepSeek is using this method to 'free-ride' on the capabilities of US AI companies by accessing their model outputs through obfuscated methods.
How did DeepSeek allegedly bypass OpenAI's access restrictions?
[InsightsWire.com](https://www.insightswire.com/news/17640/openai-alleges-deepseek-covertly-siphoned-outputs-train-r1) reports that DeepSeek employees developed methods to circumvent OpenAI's restrictions, including using third-party routers and other techniques to mask their source. OpenAI detected these 'new, obfuscated methods' designed to access their AI models and obtain outputs for training purposes.
What business and safety concerns does OpenAI raise about AI model distillation?
[VellaTimes.com](https://vellatimes.com/openai-deepseek-distillation-accusation-draws-scrutiny/) highlights that OpenAI warns when model capabilities are copied through distillation, critical safeguards can be lost or weakened. The company is particularly concerned about potential misuse in high-risk areas like biology and chemistry, and notes that the practice could erode the US advantage in AI by allowing Chinese models to compete without the significant infrastructure investments made by US companies.