AI Safety

OpenAI Alleges DeepSeek Used Model Distillation to Train AI Systems

OpenAI has told U.S. lawmakers that Chinese artificial intelligence startup DeepSeek may have used a technique known as model distillation to train its systems by leveraging outputs from leading American AI models, according to a memorandum reviewed by Reuters. The disclosure adds a new dimension to rising tensions between Washington and Beijing over access to advanced artificial intelligence technologies.

In a memo addressed to the U.S. House Select Committee on Strategic Competition between the United States and the Chinese Communist Party, OpenAI said it had observed accounts linked to DeepSeek employees attempting to bypass access restrictions. The company alleged that these efforts included using third-party routing services to mask the origin of requests and programmatically extract outputs from its models for training purposes.

Model distillation is a common machine learning method in which a smaller or newer system is trained using the outputs of a larger and more advanced model. In standard research settings, the process is used to improve efficiency and reduce computational demands. However, OpenAI suggested that DeepSeek’s activities involved unauthorized access to proprietary systems, raising concerns about intellectual property protection and compliance with usage policies.
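To make the technique concrete, here is a minimal, self-contained sketch of distillation on a toy problem: a frozen "teacher" model produces temperature-softened output probabilities, and a "student" model is trained to match them. All names, shapes, and hyperparameters below are illustrative assumptions for a tiny linear model, not a description of how any frontier AI system is actually trained.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature yields softer targets."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))             # toy inputs (64 examples, 4 features)
W_teacher = rng.normal(size=(4, 3))      # frozen "teacher" weights (3 classes)
teacher_probs = softmax(X @ W_teacher, temperature=2.0)  # soft targets

# The "student" starts from scratch and learns only from teacher outputs --
# it never sees ground-truth labels, which is the essence of distillation.
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(300):
    student_probs = softmax(X @ W_student, temperature=2.0)
    # Gradient of the cross-entropy between student and teacher distributions
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# Average KL divergence from teacher to student, as a closeness measure
student_probs = softmax(X @ W_student, temperature=2.0)
kl = np.sum(teacher_probs * (np.log(teacher_probs) -
                             np.log(student_probs))) / len(X)
```

In real deployments the "teacher outputs" would be text completions or token probabilities obtained by querying a large model at scale, which is why API providers can detect and restrict such extraction patterns.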

DeepSeek and its parent firm High-Flyer did not immediately respond to requests for comment. The company gained global attention last year after releasing AI models that analysts said rivaled some leading U.S. offerings despite reportedly being developed with fewer high-end computing resources. The development sparked debate in Washington about whether export controls on advanced chips were sufficient to maintain the United States' technological edge.

The allegations come as U.S. authorities continue tightening restrictions on the sale of advanced semiconductors and AI-related hardware to China. American officials have argued that limiting access to high-performance computing chips is necessary to protect national security and slow the development of advanced military applications.

OpenAI's memo signals growing concern within the private sector about how leading AI systems are accessed and replicated. The company indicated that it had identified patterns suggesting coordinated attempts to retrieve model outputs at scale. Such activity, if proven, could intensify calls for stricter safeguards around AI deployment and cross-border data flows.

The dispute highlights broader questions about how AI innovation is governed globally. As competition accelerates, companies and governments are grappling with how to balance open research traditions with commercial protection and geopolitical rivalry. Lawmakers reviewing the memo are expected to examine whether additional regulatory or enforcement mechanisms are needed to address alleged misuse of U.S. developed AI systems.