China Doubles “AI for Science” Computing Scale in Just Two Months Using Zero U.S. Chips
Reflecto News – China has rapidly expanded its largest artificial intelligence computing cluster dedicated to scientific research, doubling its capacity from 30,000 to 60,000 domestically produced AI accelerator chips in only two months — with no reliance on restricted U.S. technology.
The milestone was achieved at the Zhengzhou core node in Henan province, which officially entered full operation this week. The cluster, developed using Chinese-made accelerators (primarily from Sugon, affiliated with the Chinese Academy of Sciences), represents a significant acceleration in Beijing’s drive for self-reliant AI infrastructure amid ongoing U.S. export controls.


Rapid Expansion Details
- February 2026: Trial operations began with approximately 30,000 domestic AI accelerator chips.
- April 2026: The system scaled to 60,000 chips, effectively doubling the computing power dedicated to “AI for science” applications.
- Key achievement: The expansion relied entirely on homegrown hardware, bypassing Nvidia, AMD, and other U.S.-origin chips restricted since 2022.
This growth highlights China’s ability to ramp up domestic semiconductor production and system integration at speed, even as global supply chains remain fragmented by export restrictions. The cluster supports advanced scientific research, including large-scale simulations, data analysis, and AI model training for non-commercial purposes.
[Image: Representative view of a large-scale AI computing facility in China, similar to the Zhengzhou cluster that recently doubled in capacity using domestic chips.]
Broader Context: China’s Push for AI Self-Reliance
U.S. export controls on advanced AI chips and manufacturing equipment, tightened in recent years, were intended to slow China’s progress in cutting-edge AI. Instead, many analysts argue, they have spurred domestic innovation and investment in alternatives.
China’s strategy includes:
- Heavy government support for companies like Sugon, Huawei, Cambricon, Biren, and others developing AI accelerators.
- Massive buildout of computing infrastructure, with a focus on both training and inference workloads.
- Integration of domestic chips into national research networks, reducing vulnerability to future restrictions.
While the Zhengzhou cluster is dedicated to scientific research rather than commercial frontier models (like those from OpenAI or DeepSeek), it demonstrates scalable deployment of Chinese AI hardware. Experts note that domestic chips still generally lag behind top U.S. GPUs in raw performance per chip, but China compensates through sheer volume, architectural optimizations, and software improvements.
Implications for the Global AI Race
- For China: This expansion strengthens its position in AI-driven scientific discovery and provides a foundation for broader compute self-sufficiency. It reduces dependence on foreign technology and mitigates risks from future sanctions.
- For the U.S. and allies: It signals that export controls have not fully halted China’s AI infrastructure growth, potentially shifting the dynamics of the U.S.-China tech competition toward software efficiency, algorithmic breakthroughs, and alternative hardware ecosystems.
- Global energy and supply chains: Rapid scaling of AI clusters worldwide continues to strain power grids and semiconductor manufacturing capacity, with China’s domestic push adding new demand for non-U.S. components.
Analysts caution that while the chip count has doubled quickly, challenges remain in matching the most advanced U.S. systems in efficiency and software ecosystem maturity. Nevertheless, the speed of this particular expansion is notable and reflects Beijing’s prioritization of compute sovereignty.
Reflecto News will continue tracking China’s AI infrastructure developments, domestic chip advancements, and their impact on the global technology landscape.
Frequently Asked Questions (FAQs)
Q1: What exactly doubled in China’s AI computing?
The Zhengzhou “AI for science” computing cluster doubled its number of domestically produced AI accelerator chips from 30,000 to 60,000 in two months.
Q2: Did China use any U.S. chips?
No. The expansion relied entirely on Chinese-made accelerators, with no U.S.-origin chips involved.
Q3: Which company or entity built this cluster?
The core hardware comes primarily from Sugon (a Chinese supercomputer firm linked to the Chinese Academy of Sciences), with support from national research initiatives.
Q4: Is this the same as frontier commercial AI models?
This cluster is focused on scientific research (“AI for science”), such as simulations and large-scale data analysis, rather than training the largest commercial large language models.
Q5: Why is this significant?
It demonstrates China’s ability to rapidly scale AI infrastructure using only domestic technology, reducing vulnerability to U.S. export controls and advancing self-reliance goals.
Q6: How does this compare to global AI compute?
While the U.S. still leads in high-end GPU performance overall, China is closing gaps in accessible compute volume through massive domestic deployments and optimizations.
Q7: What challenges remain for China’s AI chips?
Domestic accelerators often trail top U.S. chips in per-unit performance and software ecosystem maturity, though volume scaling and architectural improvements are narrowing the gap in specific use cases.
For ongoing coverage of AI infrastructure, U.S.-China tech competition, and semiconductor developments, stay tuned to Reflecto News.