Anthropic alleges Chinese AI firms used Claude outputs to train rival models

US-based artificial intelligence company Anthropic has accused three Chinese AI firms of improperly using its Claude chatbot to enhance their own large language models, intensifying debate over model distillation practices and cross-border technology controls.
In a recent statement, Anthropic said that DeepSeek, Moonshot and MiniMax generated more than 16 million interactions with Claude through roughly 24,000 accounts that it described as fake. According to the company, the activity violated its terms of service and regional access restrictions. The interactions were allegedly aimed at extracting high quality outputs from Claude to train or refine competing AI systems.
Anthropic stated that the companies relied on a method known as distillation, a technique in which a smaller or less capable model is trained on the outputs of a more advanced system. While distillation is widely used within organizations to optimize performance and efficiency, the controversy arises when outputs are harvested without authorization from external proprietary models.
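The mechanics of distillation can be illustrated with a toy sketch: a "student" model is fit to the outputs of a "teacher" model rather than to original ground-truth data. Everything here is hypothetical and simplified (the teacher is just a fixed function, the student a linear fit); real LLM distillation trains a neural network on large volumes of text generated by a more capable model.

```python
def teacher(x: float) -> float:
    # Stand-in for a large, capable model (here: a fixed linear function).
    return 2.0 * x + 1.0

def distill(inputs: list[float]) -> tuple[float, float]:
    """Fit a student model y = w*x + b to the teacher's outputs via
    ordinary least squares on (input, teacher-output) pairs."""
    ys = [teacher(x) for x in inputs]  # harvested teacher outputs
    n = len(inputs)
    mean_x = sum(inputs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, ys))
    var = sum((x - mean_x) ** 2 for x in inputs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# The student recovers the teacher's behavior from its outputs alone,
# without ever seeing the teacher's parameters or training data.
w, b = distill([0.0, 1.0, 2.0, 3.0])
```

The key point the example captures is that only the teacher's *outputs* are needed: this is why API access alone, at sufficient scale, can suffice to transfer capabilities from one model to another.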
The company said it detected the campaigns through internal monitoring systems and claimed that at least one of the efforts was still active when discovered. Anthropic added that when it released a newer version of its model, traffic patterns shifted rapidly, suggesting attempts to capture capabilities from the updated system.
The allegations come amid rising geopolitical tension around artificial intelligence development. Earlier this month, OpenAI warned US lawmakers that certain Chinese firms were seeking to replicate advanced US models and integrate similar capabilities into domestic systems. Both US and Chinese companies are investing heavily in foundation models that power chatbots, coding assistants and enterprise AI tools.
Anthropic argued that improper distillation not only undermines intellectual property protections but may also bypass safety safeguards embedded in original models. According to the company, models trained on extracted outputs may lack alignment controls, increasing the risk of misuse. It also linked the issue to ongoing discussions about export controls on advanced AI chips, saying restrictions on high-performance processors could limit both direct model training and large-scale distillation efforts.
DeepSeek, Moonshot and MiniMax have not publicly responded to the allegations. Industry observers note that the line between legitimate benchmarking and unauthorized extraction can be difficult to define in a competitive environment where model outputs are accessible through application programming interfaces.
The dispute highlights the growing complexity of regulating artificial intelligence in a globalized digital economy. As AI systems become more capable and commercially valuable, questions surrounding data access, model replication and cross-border technology governance are likely to intensify, particularly between the United States and China, two of the world’s leading AI powers.

