
Huawei vs Nvidia. Challenging Nvidia’s Dominance in AI Computing (2025-2028)


By EV • Post

Published Sept 18, 2025


Huawei Technologies, one of China's leading technology giants, has announced ambitious plans to rival Nvidia, the global leader in AI chips and GPU technology. Against the backdrop of escalating US-China tech tensions and export restrictions, Huawei aims to secure China's AI ambitions through domestically made hardware, unveiling a multi-year roadmap for its Ascend AI chip lineup and large-scale AI supercomputing clusters. This report covers Huawei's current AI chip breakthroughs, its future roadmap, and how these aim to compete with Nvidia's established market dominance.


Introduction: Huawei’s Strategic AI Chip Breakthrough


In September 2025, Huawei publicly showcased what it describes as the world's most powerful supernode and supercluster AI computing system. The system is built entirely on Chinese chip manufacturing processes and technology, a milestone for China's self-reliance ambitions in AI hardware. Huawei's rotating chairman, Eric Xu Zhijun, said these developments grew out of Huawei's need to bypass reliance on Nvidia's restricted AI chips amid ongoing US export controls.

Huawei’s approach centers around building a “supernode + cluster” architecture, enabling thousands of Ascend AI chips to collaborate efficiently, matching and potentially surpassing the compute requirements of training advanced AI models. This marks a shift from secretive chip development to transparent competition with global industry leaders like Nvidia and AMD, highlighting China’s strategic resolve to cultivate an indigenous AI hardware ecosystem.

The Ascend AI Chip Family: Current Models and Future Plans

Ascend 910C: The Flagship Chip of 2025


Huawei’s Ascend 910C AI chip is currently its pinnacle offering in the AI chip market. Launched earlier in 2025, the Ascend 910C delivers around 800 teraflops (TFLOPS) of FP16 compute power and targets inference workloads with energy efficiency. Powered by domestically developed high-bandwidth memory technology, it is designed for applications ranging from cloud AI services to edge computing.

Despite a strong domestic footing, the Ascend 910C trails Nvidia's flagship H100 GPU, whose tensor cores peak at roughly 2,000 TFLOPS of FP16 compute with sparsity (about 4,000 TFLOPS at FP8). Huawei plans strategic enhancements to close this gap through iterative chip designs.
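A back-of-the-envelope comparison of the headline figures above puts the gap in perspective. Note these are vendor-reported peaks (the H100 figure assumes FP16 tensor throughput with sparsity), which rarely translate directly into sustained training throughput:

```python
import math

# Rough per-chip FP16 comparison using headline peak figures.
# Vendor peaks are theoretical; sustained training throughput is lower.
ASCEND_910C_TFLOPS = 800   # Huawei Ascend 910C, reported FP16 peak
H100_TFLOPS = 1979         # Nvidia H100 SXM, FP16 tensor peak with sparsity

ratio = H100_TFLOPS / ASCEND_910C_TFLOPS
print(f"H100 per-chip advantage: {ratio:.1f}x")

# Huawei's counter is scale: how many 910Cs match one H100 at peak?
chips_for_parity = math.ceil(ratio)
print(f"Ascend 910C chips to match one H100 at peak: {chips_for_parity}")
```

This is exactly why Huawei's strategy leans on clusters: a per-chip deficit can be offset by ganging more chips together, provided the interconnect keeps them fed.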


The 2026-2028 Roadmap: Ascend 950, 960, and 970 Series

Looking forward, Huawei presented a detailed Ascend chip roadmap spanning three years:


  • Ascend 950 Series (2026): This will debut in two variants, the Ascend 950PR and 950DT. The 950PR is optimized for the prefill phase of inference and recommendation tasks, incorporating Huawei’s proprietary HiBL 1.0 high-bandwidth memory offering 128 GB at 1 TB/s bandwidth. The 950DT targets decoding inference and training stages, featuring a more advanced HiZQ 2.0 memory with 144 GB and 4 TB/s bandwidth. This dual-variant strategy reflects Huawei’s intent to specialize chips for different AI workload phases.
  • Ascend 960 (2027): This chip is set to double the compute power and memory capacity of the Ascend 950, catering to even larger, more complex AI models.
  • Ascend 970 (2028): Huawei plans the 970 as the most powerful in the series, aimed to push performance and efficiency well beyond the 960, continuing an exponential compute growth curve similar to Moore’s Law for AI-specific chips.

Each iteration focuses not only on raw computing power but also on advanced memory architectures tailored for AI’s unique demands, including bandwidth and latency improvements that are key to accelerating deep learning workloads.
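The roadmap's generation-over-generation doubling can be sketched as a simple projection. The 950 baseline below is a placeholder, since the roadmap states the cadence but no per-chip figure:

```python
# Projected per-chip compute if each Ascend generation doubles its
# predecessor, per the roadmap's stated cadence. The 950 baseline is a
# placeholder: the roadmap gives the cadence, not a per-chip figure.
BASELINE_950_TFLOPS = 1_000  # hypothetical Ascend 950 (2026) figure

roadmap = {"Ascend 950 (2026)": BASELINE_950_TFLOPS}
roadmap["Ascend 960 (2027)"] = roadmap["Ascend 950 (2026)"] * 2
# Assumes the 970 sustains the same 2x-per-generation pace.
roadmap["Ascend 970 (2028)"] = roadmap["Ascend 960 (2027)"] * 2

for chip, tflops in roadmap.items():
    print(f"{chip}: {tflops:,} TFLOPS (projected)")
```

Whatever the actual baseline turns out to be, the shape of the curve is the point: a 4x per-chip gain over two generations, before any cluster-level scaling.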

Cluster Computing and Supernode Systems


Huawei supplements its AI chip lineup with large-scale cluster computing architectures that interconnect thousands of chips to work as a unified system:

  • Atlas 900 SuperPoD: Currently Huawei’s flagship AI supercomputing cluster, it links 384 Ascend 910C chips, providing competitive AI training power against some Nvidia solutions.
  • Atlas 950 SuperPoD (coming Q4 2026): This next-generation system will comprise 8,192 Ascend 950DT chips housed across 160 cabinets. Huawei projects 6.7 times the compute of Nvidia’s anticipated NVL144 system launching in 2026, with 15 times the memory capacity.
  • Atlas 960 SuperPoD (planned for 2027): Designed to scale to 15,488 Ascend AI chips in 220 cabinets, this superpod will significantly upscale both compute density and energy efficiency, filling up a 2,200-square-meter facility.

The supernodes utilize cutting-edge interconnect technologies to enable massive parallel processing and lower latency communication between chips—a critical requirement for training large AI models like transformers or multi-modal networks.
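The scale of these pods is easier to grasp as aggregate memory. Using the chip counts above and the 144 GB HiZQ 2.0 capacity quoted later in this article (and assuming, for the 960-era pod, that per-chip capacity does not shrink):

```python
# Aggregate HBM per pod, from the chip counts and per-chip capacity
# quoted in this article (decimal units, as vendors quote them).
HIZQ2_GB_PER_CHIP = 144  # HiZQ 2.0 capacity on the Ascend 950DT

pods = {
    "Atlas 950 SuperPoD (2026)": 8_192,
    "Atlas 960 SuperPoD (2027)": 15_488,  # assumes >= 144 GB per chip
}

for pod, chips in pods.items():
    total_tb = chips * HIZQ2_GB_PER_CHIP / 1000
    print(f"{pod}: {chips:,} chips, ~{total_tb:,.0f} TB aggregate HBM")
```

Over a petabyte of pooled HBM per pod is the kind of capacity that lets trillion-parameter models and their optimizer state live entirely in fast memory.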


Proprietary High-Bandwidth Memory (HBM) Technologies


Memory bandwidth has become a key bottleneck in AI chip performance. Huawei addresses this with proprietary HBM solutions:

  • HiBL 1.0: Integrated with the Ascend 950PR, this HBM offers 128 GB capacity and 1 TB/s bandwidth. Huawei claims cost advantages over the South Korean HBM3E and forthcoming HBM4 memory chips mainly supplied by SK Hynix and Samsung.
  • HiZQ 2.0: Paired with the Ascend 950DT, this memory delivers 144 GB and a bandwidth of 4 TB/s, crucial for the heavy decoding and training AI workloads needing rapid data transfer rates.

These high-bandwidth memory chips underpin Huawei’s efforts to optimize AI training and inference at scale, highlighting a vertical integration approach where Huawei designs not just the AI cores but also the surrounding memory ecosystem.
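One simple way to compare the two parts is the time to stream the full capacity once (capacity divided by bandwidth), a rough proxy for bandwidth-bound inference steps such as reading every weight per decoded token:

```python
# Time to stream each memory's full capacity once at peak bandwidth,
# a rough proxy for bandwidth-bound inference steps. Peak bandwidth is
# a best case; real workloads achieve less. Decimal units throughout.
parts = {
    "HiBL 1.0 (Ascend 950PR)": (128, 1.0),  # (capacity GB, bandwidth TB/s)
    "HiZQ 2.0 (Ascend 950DT)": (144, 4.0),
}

for name, (cap_gb, bw_tbs) in parts.items():
    ms = cap_gb / (bw_tbs * 1000) * 1000  # GB / (GB/s) -> s -> ms
    print(f"{name}: ~{ms:.0f} ms to stream {cap_gb} GB once")
```

Despite holding more data, HiZQ 2.0 can sweep its full capacity several times faster, which is precisely what the decode and training phases it targets demand.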


Competitiveness Against Nvidia’s AI Chip Ecosystem


Nvidia remains the worldwide gold standard for AI chips, with the H100 architecture powering numerous leading AI deployments thanks to its exceptional performance and broad software ecosystem:

  • Raw Performance: Nvidia’s H100 delivers roughly 2,000 TFLOPS of FP16 tensor throughput with sparsity, about 2.5 times the Ascend 910C. The Ascend 910D, launching later in 2025, targets parity through architectural and efficiency improvements.
  • Software Ecosystem: Nvidia’s CUDA platform supports over 2.5 million developers and 25,000 GPU-accelerated applications. This rich ecosystem enables seamless integration, optimized workflows, and faster adoption.
  • Ecosystem Bridging by Huawei: To narrow this gap, Huawei is investing heavily in tools that convert Nvidia CUDA code to its MindSpore AI framework, easing migration and development on Ascend chips.
  • Manufacturing Differences: Huawei’s chips are manufactured mainly via China’s SMIC foundry, which lags slightly behind TSMC’s state-of-the-art processes that produce Nvidia’s chips. Nonetheless, Huawei and local suppliers are making advances in chip quality and yields.

While Huawei faces challenges matching Nvidia’s ecosystem and efficiency, it is rapidly closing technological divides, particularly domestically where Nvidia’s chips are restricted by export bans.


Strategic Context and Market Implications


Huawei’s AI chip strategy emerges from a geopolitical context where US export controls limit Nvidia’s sales of advanced chips to China. This drives China’s push for independent, indigenous AI hardware capabilities:

  • Reducing Import Dependency: Huawei’s Ascend chips and computing clusters represent key assets for China’s self-sufficiency in AI hardware, enabling deployment of cutting-edge AI models without foreign technology constraints.
  • Market Share and Penetration: Huawei currently controls about 12% of China’s AI chip market and is expected to expand as it scales the Ascend family and clusters. Global reach remains limited due to sanctions but may improve if ecosystem and performance catch-up succeed.
  • Long-Term Vision: Huawei’s 3-year chip roadmap alongside its supercluster strategy is aligned with China’s national goals to compete in AI innovation and infrastructure on a global scale.

The Huawei-Nvidia competition highlights the intertwining of technology and geopolitics, reflecting a broader contest shaping AI hardware technology and supply chains for years ahead.


Moving Forward


Huawei’s recently revealed AI chip and GPU roadmap is a landmark declaration of its intent to challenge Nvidia’s global leadership in AI computing. The Ascend family, starting with the 910C and progressing through the 950, 960, and 970 models, showcases Huawei’s drive to exponentially improve compute power, memory bandwidth, and efficiency.

Combined with cutting-edge supernode clusters like the Atlas 950 SuperPoD, which interconnects thousands of chips, Huawei aims to build infrastructure that rivals or surpasses Nvidia’s dominant AI systems. Proprietary high-bandwidth memory innovations further strengthen Huawei’s vertical integration and cost competitiveness.

Despite Nvidia’s current lead in chip performance and software ecosystem maturity, Huawei’s aggressive roadmap, bolstered by China’s semiconductor investments and US export restrictions on competitors, stands to reshape the AI hardware landscape significantly over the next three years.

Huawei’s journey from manufacturing chips behind closed doors to openly challenging a world leader demonstrates the fusion of technology innovation with strategic national ambitions—a story unfolding at the heart of the global AI race.
