Huawei-style chip stacking viewed as a potential route for China to close the AI hardware gap with Nvidia

China is accelerating efforts to improve its domestic chip capabilities by combining mature semiconductor processes with innovative computing designs that could help the country narrow its gap with top global players such as Nvidia. According to a leading industry expert, the strategy centers on near-memory computing and advanced chip stacking techniques, approaches that could significantly boost performance even without access to the most advanced manufacturing nodes.
Wei Shaojun, vice president of the China Semiconductor Industry Association and one of the country’s most respected voices in the chip sector, said that processors built on a 14-nanometre process could match the performance of Nvidia’s 4-nanometre AI chips when paired with high-bandwidth memory and optimized computing architectures. He made the remarks at a recent industry conference in Shenzhen, as reported by the Chinese technology outlet ESM China.
Wei noted that although China continues to face limitations from United States export controls on cutting-edge chips and manufacturing tools, it can still chart a viable path forward by maximizing the technologies it can access. Near-memory computing, which moves computation closer to memory to reduce data-transfer bottlenecks, and chip stacking, which vertically integrates multiple chip layers, have emerged as areas of rapid development. Both can substantially boost processing efficiency and reduce latency, qualities that are crucial for advanced artificial intelligence applications.
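The logic behind Wei's claim can be sketched with the standard roofline model, in which delivered throughput is the lesser of a chip's peak compute and its memory bandwidth multiplied by the workload's arithmetic intensity. The figures below are purely hypothetical, not specifications of any real Nvidia or Chinese chip; they only illustrate why, for memory-bound AI workloads, adding bandwidth through HBM and near-memory designs can matter more than shrinking transistors.

```python
# Roofline model sketch: delivered throughput is capped either by peak
# compute or by memory bandwidth times the workload's arithmetic intensity.
# All chip and workload figures below are hypothetical, for illustration only.

def attainable_tflops(peak_tflops, bandwidth_tbs, flops_per_byte):
    """min(compute roof, memory roof); TB/s * FLOPs/byte yields TFLOPS."""
    return min(peak_tflops, bandwidth_tbs * flops_per_byte)

# A memory-bound AI workload (hypothetical), e.g. large-model inference.
intensity = 50  # FLOPs performed per byte moved from memory

# Hypothetical advanced-node chip: very high peak compute.
advanced = attainable_tflops(peak_tflops=1000, bandwidth_tbs=2.0,
                             flops_per_byte=intensity)

# Hypothetical mature-node chip paired with the same HBM stack:
# far lower peak compute, identical memory bandwidth.
mature_hbm = attainable_tflops(peak_tflops=300, bandwidth_tbs=2.0,
                               flops_per_byte=intensity)

print(advanced, mature_hbm)  # both memory-bound: 100.0 TFLOPS each
```

Under these assumed numbers both chips deliver the same 100 TFLOPS, because the workload saturates memory bandwidth before either chip's compute roof is reached; for compute-bound workloads with high arithmetic intensity, the advanced-node chip's higher peak would still prevail.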
The idea mirrors some of the architectural approaches used in Huawei’s recent products, which have relied heavily on multi-chip packaging to overcome constraints in semiconductor fabrication. Rather than competing directly at the most advanced manufacturing levels, Chinese companies are focusing on integrating multiple components to achieve high performance through system-level engineering.
Wei said that with continued progress in the design of memory systems, interconnect technologies and packaging techniques, it is possible for Chinese firms to deliver AI chips that offer competitive performance even when built with older production processes. He emphasized that high performance computing does not depend solely on the smallest transistor size but rather on a combination of architecture, software optimization and system integration.
Industry analysts say this approach is increasingly significant as the demand for AI accelerators grows. Nvidia currently dominates the global market with its advanced GPU line, which is widely used to train large language models and other AI systems. China’s goal is to reduce its reliance on foreign suppliers, especially as tightening U.S. restrictions limit access to Nvidia’s most powerful processors.
Wei’s remarks suggest that China sees alternative design strategies as a practical way to push ahead despite geopolitical constraints. The strategy could also encourage more domestic innovation in packaging technologies, memory integration and energy-efficient computing, areas where progress can deliver major performance gains without requiring access to cutting-edge chipmaking tools.
As China continues its long-term drive for technological self-sufficiency, experts believe hybrid approaches such as chip stacking will likely become central to the country’s future AI hardware roadmap, offering a potential bridge to more advanced capabilities in the years ahead.
