Anthropic, a leading artificial intelligence safety and research company, has accused three Chinese AI labs – DeepSeek, Moonshot AI, and MiniMax – of orchestrating a large-scale effort to extract capabilities from its Claude AI model. The alleged operation involved the creation of over 24,000 fake accounts and more than 16 million interactions with Claude, utilizing a technique known as “distillation” to enhance their own competing models. This development arrives as the U.S. government continues to debate the extent of export controls on advanced AI chips, a policy intended to limit China’s advancements in artificial intelligence.
The accusations center on the practice of distillation, a common AI training method in which a smaller, more efficient model is created by learning from a larger, more complex one. Yet this technique can also be exploited by competitors to essentially replicate the functionality of proprietary models. Earlier this month, OpenAI reportedly alerted U.S. House lawmakers to similar distillation efforts by DeepSeek aimed at mimicking its own products. This latest claim from Anthropic underscores growing concerns about intellectual property theft and the potential for rapid AI development through illicit means.
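In its classic form, the training method described above works by having the smaller "student" model match the temperature-softened output distribution of the larger "teacher" model. The sketch below illustrates that core loss computation in plain Python; the function names and logit values are illustrative only, and real API-based distillation of the kind alleged here would instead train on sampled text responses rather than direct access to a teacher's logits.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student outputs.

    Minimizing this over many prompts trains the student to mimic the
    teacher's behavior -- the essence of knowledge distillation.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that exactly matches the teacher incurs zero loss;
# a mismatched student incurs a positive loss an optimizer would reduce.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))               # 0.0
print(distillation_loss(teacher, [0.1, 2.0, 1.0]) > 0)   # True
```

A higher temperature spreads probability mass onto the teacher's lower-ranked outputs, which is what lets the student absorb nuanced behavior rather than only the top answer.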
According to Anthropic, the targeted capabilities included Claude’s strengths in “agentic reasoning, tool use, and coding.” The scale of the alleged attacks varied among the three labs. DeepSeek generated over 150,000 exchanges focused on improving foundational logic and alignment, particularly around circumventing content restrictions. Moonshot AI initiated more than 3.4 million exchanges, concentrating on agentic reasoning, coding, and data analysis, while MiniMax reportedly redirected nearly half of its traffic to siphon capabilities from the latest Claude model upon its release, resulting in 13 million exchanges.
Distillation and the AI Chip Export Debate
The timing of these accusations coincides with ongoing discussions regarding U.S. export controls on advanced AI chips. Last month, the Trump administration authorized U.S. companies like Nvidia to export high-end chips, such as the H200, to China, a move that has drawn criticism from those who believe it will accelerate China’s AI capabilities. Anthropic argues that the scale of the extraction performed by DeepSeek, MiniMax, and Moonshot necessitates access to these advanced chips, reinforcing the rationale for stricter export controls. “Restricted chip access limits both direct model training and the scale of illicit distillation,” the company stated in a blog post.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator and co-founder of CrowdStrike, expressed little surprise at the allegations. “It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we recognize this for a fact,” Alperovitch told TechCrunch. “This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further.”
National Security Implications
Beyond competitive concerns, Anthropic highlighted the potential national security risks associated with distillation attacks. The company emphasized that its AI systems are designed with safeguards to prevent misuse by state and non-state actors for malicious purposes, such as developing bioweapons or conducting cyberattacks. Models created through illicit distillation, however, are unlikely to retain these crucial safety features, potentially leading to the proliferation of dangerous capabilities. Anthropic specifically pointed to the risk of authoritarian governments utilizing frontier AI for “offensive cyber operations, disinformation campaigns, and mass surveillance,” a threat amplified by the open-source nature of some distilled models.
DeepSeek has already made significant strides in the AI landscape, releasing its open-source R1 reasoning model last year, which reportedly rivaled the performance of leading American models at a fraction of the cost. The company is anticipated to launch DeepSeek V4 soon, with reports suggesting it could surpass both Anthropic’s Claude and OpenAI’s ChatGPT in coding performance. Moonshot AI recently released its Kimi K2.5 model and a coding agent, while MiniMax continues to refine its own AI offerings.
Anthropic stated that it is actively investing in defenses to make distillation attacks more difficult to execute and easier to detect, but stressed the need for a collaborative response. The company is calling for coordinated action among AI developers, cloud providers, and policymakers to address this growing threat.
The incident underscores the complex interplay between innovation, competition, and security in the rapidly evolving field of artificial intelligence. As the debate over AI chip exports continues, the implications of these alleged distillation attacks will likely play a significant role in shaping future policy decisions.
What comes next will depend on the response from U.S. policymakers and the AI industry. Further investigation into these claims and the development of robust defenses against distillation attacks will be crucial in safeguarding American AI leadership and mitigating potential national security risks.