China Telecom Says AI Model With 1 Trillion Parameters Trained With Chinese Chips

China has taken another major step toward technological self-reliance as China Telecom announced the development of two large language models trained exclusively on domestically produced chips. The achievement highlights Beijing’s accelerating artificial intelligence research at a time when the country faces some of the most restrictive US export controls on advanced semiconductors. The milestone also reflects how quickly national efforts to build a fully local AI ecosystem are starting to pay off.
According to the Institute of AI at China Telecom, the organisation behind the project, the flagship model TeleChat2-115B and another, as-yet-unnamed model were trained using tens of thousands of Chinese-made processors. Both are large language models, the same technology powering global platforms such as OpenAI’s ChatGPT, but developed in a completely independent computing environment inside China.
A Landmark Achievement In Domestic AI
China Telecom described the accomplishment as a moment that demonstrates China has reached genuine self-sufficiency in large-scale AI training. Developing and training a 1-trillion-parameter model requires immense computing power, advanced chip capabilities and stable engineering frameworks. Completing it without imported chips sends a clear message about the country’s advancing capabilities under tight geopolitical pressure.
The company did not reveal the exact chip models used but stated that they were entirely locally manufactured. China Telecom maintains a deep partnership with Huawei, a leading developer of domestic AI chips such as the Ascend series. While no direct confirmation was provided, the scale of the project suggests the infrastructure may have relied on Huawei-backed chips and cloud systems with full end-to-end localisation.
Importance Of Domestic Chips In A Restricted Environment
The US has progressively tightened its restrictions on Nvidia’s most powerful AI chips, such as the A100 and H100 series, and on advanced US-owned semiconductor manufacturing technology. These constraints are aimed at slowing China’s AI development, but recent announcements from Chinese tech companies show that the domestic sector is rapidly adapting.
China Telecom’s breakthrough underscores that domestic operators are no longer entirely dependent on foreign hardware to build high-end models. This shift marks the beginning of what the firm describes as a new era for China’s large language model innovation, one in which local companies can continue advancing AI without external technological bottlenecks. The company emphasised that the results represent an inflection point in China’s long-term strategy for self-reliance.
Open Source Strategy To Expand Industry Impact
TeleChat2-115B has been released as an open-source model, inviting developers, researchers and enterprises to experiment with and extend the technology. For China’s AI environment, open-sourcing serves two purposes. First, it boosts adoption across both the public and private sectors by lowering entry barriers for companies seeking to build domain-specific AI applications. Second, it strengthens China’s global presence in open-source ecosystems traditionally dominated by US developers and commercial players.
By releasing a domestically trained 115-billion-parameter model to the global community, China Telecom is signalling confidence in its technological maturity while also accelerating the country’s participation in international AI collaboration.
Broader Implications For China’s AI Ambitions
This achievement comes at a time when Chinese tech firms, cloud operators and state-backed research institutes are investing heavily in large language models. Companies such as Alibaba, Tencent, Baidu and ByteDance have each released their own LLMs over the past two years. However, access to advanced chips has been one of the biggest constraints in scaling these models to the trillion-parameter level.
China Telecom’s announcement targets that bottleneck directly, showing that large scale AI can continue expanding even under sanctions. It also strengthens the national AI blueprint, which calls for technological independence, domestic chip ecosystems and a more resilient AI supply chain.
This development will likely encourage more state-backed institutions and commercial developers to accelerate work on models trained with Chinese-made processors. With more training clusters, advanced algorithms and growing datasets, China’s AI ecosystem is expected to become increasingly self-sustaining.
A New Benchmark For China’s AI Industry
The successful training of a trillion-parameter model on domestic chips sets a powerful benchmark for China’s next phase of AI development. It reinforces the growing confidence among Chinese researchers and companies that high-end AI innovation can continue without reliance on Western technology. It also signals to the international AI community that China is entering a new stage in which locally produced hardware and software can support the training of some of the world’s largest and most complex AI systems.