Zhipu AI's GLM-5: 744B Open-Source LLM Trained Entirely on Huawei Ascend Chips, Record-Low Hallucination

Zhipu AI's GLM-5 is a 744B-parameter open-source LLM with 40B active parameters (MoE), trained on 28.5T tokens entirely on Huawei Ascend chips using MindSpore—fully independent of NVIDIA hardware. Released under MIT license with 205K context window and DeepSeek Sparse Attention. Claims parity with Claude Opus 4.5 and GPT-5.2 on reasoning/coding/agent benchmarks, with record-low hallucination rates.

Zhipu GLM-5: Frontier Open-Source LLM Trained on Domestic Chips

Model Overview

Zhipu AI's GLM-5 is one of 2026's most significant open-source LLMs: a Mixture-of-Experts model with 744B total and 40B active parameters, trained on 28.5T tokens and released under the MIT license with a 205K-token context window. The standout feature: it was trained entirely on Huawei Ascend chips with the MindSpore framework, with zero NVIDIA dependency.
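The 744B-total / 40B-active split comes from Mixture-of-Experts routing: each token is sent to only a few experts, so per-token compute scales with active rather than total parameters. The sketch below illustrates the general top-k routing idea with toy sizes; the expert count, dimensions, and k are illustrative assumptions, not GLM-5's actual configuration.

```python
import numpy as np

def moe_forward(x, experts_w, gate_w, k=2):
    """Route a token vector x to the top-k of N experts.

    Only the k selected experts run, so per-token compute scales
    with active (not total) parameters. All sizes here are toy
    values for illustration.
    """
    logits = x @ gate_w                      # (N,) gating scores
    top = np.argsort(logits)[-k:]            # indices of the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                 # softmax over the selected experts
    # Weighted sum of only the chosen experts' outputs
    return sum(w * (x @ experts_w[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
x = rng.standard_normal(d)
experts = rng.standard_normal((n_experts, d, d))
gate = rng.standard_normal((d, n_experts))
y = moe_forward(x, experts, gate, k=2)
print(y.shape)  # (8,)
```

With k=2 of 16 experts selected, only 1/8 of the expert weights participate in this forward pass, which is the mechanism behind a large total-to-active parameter gap.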

Technical Innovations

DeepSeek Sparse Attention (DSA) enables efficient long-context processing at 205K tokens, while the novel "slime" asynchronous RL infrastructure improves training efficiency. Zhipu also claims record-low hallucination rates among open-source models, attributed to high-quality pretraining data and refined alignment.
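The details of DSA are beyond this article, but the core idea of sparse attention in general is that each query attends only to its highest-scoring keys instead of the full context, cutting the cost of the softmax and value mix at long sequence lengths. A minimal single-query sketch of that generic top-k pattern, with illustrative names and sizes (this mirrors the family of techniques, not DSA's exact design):

```python
import numpy as np

def topk_sparse_attention(q, keys, values, k=8):
    """Single-query sparse attention: attend only to the top-k keys.

    Dense attention mixes all L value vectors per query; selecting
    the k strongest keys first restricts the softmax and value mix
    to k entries. Toy sketch of the general sparse-attention idea.
    """
    d = q.shape[-1]
    scores = (keys @ q) / np.sqrt(d)         # (L,) similarity to each key
    idx = np.argsort(scores)[-k:]            # keep the k strongest keys
    w = np.exp(scores[idx] - scores[idx].max())
    w /= w.sum()                             # softmax over the kept keys only
    return w @ values[idx]                   # (d,) attended output

rng = np.random.default_rng(1)
L, d = 64, 8                                 # toy context length / head dim
q = rng.standard_normal(d)
K = rng.standard_normal((L, d))
V = rng.standard_normal((L, d))
out = topk_sparse_attention(q, K, V, k=8)
print(out.shape)  # (8,)
```

Here only 8 of 64 context positions enter the value mix; at a 205K-token context the same idea is what keeps per-query attention cost far below the full sequence length.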

Performance and Strategic Significance

Zhipu claims parity with Claude Opus 4.5 and GPT-5.2 on reasoning, coding, and agent benchmarks. Beyond raw capability, GLM-5 demonstrates that US chip export controls have not fully prevented Chinese AI advancement, and it stands as the strongest endorsement yet of Huawei's Ascend chip ecosystem.

GLM-5-Turbo

In March, Zhipu also released GLM-5-Turbo, optimized for OpenClaw and automated agent scenarios, with significant latency and throughput improvements on long-chain agent tasks.

In-Depth Analysis and Industry Outlook

From a broader perspective, GLM-5's release reflects the accelerating transition of AI from laboratories to industrial applications. Industry analysts widely agree that 2026 will be a pivotal year for AI commercialization: inference efficiency continues to improve while deployment costs decline, putting advanced AI capabilities within reach of more SMEs, and enterprise expectations for AI investment are shifting from long-term strategic value to short-term quantifiable returns.

The rapid proliferation of AI also brings new challenges: increasingly complex data privacy protection, growing demands for transparency in AI decision-making, and difficulty coordinating AI governance across borders. Regulators in multiple countries are watching these developments closely, attempting to balance innovation with risk prevention.

For investors, identifying AI companies with genuinely sustainable competitive advantages has become critical as the market moves from hype to value validation. This trend is expected to deepen over the coming years, reshaping the global technology landscape, while the convergence of AI with quantum computing, biotechnology, and robotics opens market opportunities that did not exist even two years ago.