Nvidia Open-Sources Nemotron 3 Super for Agentic AI

Nvidia open-sources Nemotron 3 Super, optimized for complex agentic AI workflows.

On March 11, 2026, Nvidia released Nemotron 3 Super, an open-source LLM for complex multi-agent applications, with weights, datasets, and training recipes fully opened under a permissive license.

Architecture: A triple-hybrid Mixture-of-Experts design combining Mamba state-space layers (linear O(n) sequence processing for long contexts), Transformer attention layers (complex reasoning), and MoE routing (120B total parameters, 12B active per token). Nvidia reports an over 5x throughput improvement.
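To make the "120B total, 12B active" idea concrete, here is a minimal, illustrative top-k MoE routing sketch in plain numpy. This is a generic textbook-style MoE layer, not Nvidia's implementation; all dimensions, expert counts, and names are invented for illustration. Each token's router scores all experts, but only the top-k experts actually run, so only a fraction of the expert parameters are active per token.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MoELayer:
    """Illustrative top-k Mixture-of-Experts layer (toy sketch, not Nemotron's code)."""
    def __init__(self, d_model, n_experts, top_k, seed=0):
        rng = np.random.default_rng(seed)
        self.router = rng.standard_normal((d_model, n_experts)) * 0.02
        # Each expert here is just a linear map d_model -> d_model.
        self.experts = [rng.standard_normal((d_model, d_model)) * 0.02
                        for _ in range(n_experts)]
        self.top_k = top_k

    def forward(self, x):
        # x: (tokens, d_model)
        logits = x @ self.router                    # router scores: (tokens, n_experts)
        probs = softmax(logits)
        topk = np.argsort(probs, axis=-1)[:, -self.top_k:]  # top-k expert ids per token
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            weights = probs[t, topk[t]]
            weights = weights / weights.sum()       # renormalize over selected experts
            for w, e in zip(weights, topk[t]):
                out[t] += w * (x[t] @ self.experts[e])  # only top-k experts execute
        return out, topk

layer = MoELayer(d_model=16, n_experts=8, top_k=2)
x = np.random.default_rng(1).standard_normal((4, 16))
y, routed = layer.forward(x)
# With top_k=2 of 8 experts, only 25% of expert parameters run per token --
# the same sparsity principle behind activating ~12B of 120B parameters.
```

This sparsity is why MoE models can match the capacity of a much larger dense model at a fraction of the inference cost: total parameters grow with the expert count, while per-token compute grows only with top_k.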

Training: More than 10T curated tokens with a specialized focus on reasoning and coding, plus interactive reinforcement learning across agent environments, teaching the model to plan, execute, and iterate on multi-step workflows.

Targets: Software development agents (from code comprehension to automated repair) and cybersecurity triage (automated SOC analysis and response).

Available on Nvidia NIM, Hugging Face, OpenRouter, and Perplexity.

Significance: The MoE+SSM hybrid as a new paradigm; agent-native models trained specifically for agentic workflows; and Nvidia's strategic open-source play driving adoption of its GPU and inference platforms. It is the first production-grade validation of a Mamba+Transformer+MoE triple-hybrid architecture.