Code & Cloud Chronicle

MiniMax M2.7 OSS agentic MoE model on NVIDIA

Key Questions

What is MiniMax M2.7?

MiniMax M2.7 is an agentic mixture-of-experts (MoE) model with 230B total parameters, of which 10B are active per token. It is built for production-ready agents, reporting 2.5-2.7x performance on NVIDIA Blackwell, a 56% score on SWE-Pro, and self-evolving SRE capabilities.
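The 230B-total / 10B-active split is the defining trait of a mixture-of-experts design: a router activates only a few experts per token, so per-token compute scales with the active parameters, not the total. The toy sketch below (all sizes and weights invented for illustration, not MiniMax's actual architecture) shows top-k routing and the resulting active-parameter fraction:

```python
import numpy as np

def topk_route(logits, k):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    idx = np.argsort(logits)[-k:]                # indices of the k best experts
    w = np.exp(logits[idx] - logits[idx].max())  # numerically stable softmax
    return idx, w / w.sum()

# Toy MoE layer: many experts, few active per token (sizes are illustrative only).
rng = np.random.default_rng(0)
n_experts, k, d = 32, 2, 16
experts = rng.standard_normal((n_experts, d, d))  # one weight matrix per expert
router = rng.standard_normal((d, n_experts))      # routing projection

x = rng.standard_normal(d)                        # one token's hidden state
idx, w = topk_route(x @ router, k)
y = sum(wi * (x @ experts[i]) for i, wi in zip(idx, w))  # mix only chosen experts

# Only k/n_experts of the expert parameters touch this token -- the same reason
# a 230B-total model can decode with roughly 10B active parameters.
active_fraction = k / n_experts
print("active experts:", sorted(idx.tolist()), "active fraction:", active_fraction)
```

The same arithmetic applies at full scale: ~10B/230B means under 5% of the weights participate in any single forward pass, while the rest sit idle in memory.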

How is NVIDIA integrating MiniMax M2.7 into its AI stack?

NVIDIA has added MiniMax M2.7 to its AI stack via NIM, Hugging Face, vLLM, NemoClaw, and OpenShell, targeting production-ready agents. The integration extends NVIDIA's latest agentic model lineup.
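Both NIM and vLLM expose an OpenAI-compatible chat-completions endpoint, so one client can target either backend. A minimal sketch, assuming a local endpoint and a hypothetical model id (the note above does not give the exact id for M2.7):

```python
import json
from urllib import request

def build_chat_request(base_url, model, prompt, api_key=None):
    """Assemble an OpenAI-compatible /v1/chat/completions request."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return f"{base_url}/v1/chat/completions", headers, json.dumps(body).encode()

# Hypothetical model id and local vLLM/NIM endpoint -- adjust to your deployment.
url, headers, payload = build_chat_request(
    "http://localhost:8000", "MiniMaxAI/MiniMax-M2.7",
    "Summarize the failing CI job and propose a fix.")

# Uncomment to call a running server:
# with request.urlopen(request.Request(url, data=payload, headers=headers)) as r:
#     print(json.loads(r.read())["choices"][0]["message"]["content"])
```

With vLLM, such a server is typically started with `vllm serve <model-id>`; NIM containers expose the same route, which is what makes the backends interchangeable here.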

What is the context of MiniMax's IPO and growth in Chinese AI?

MiniMax is pursuing a Chinese IPO on $79M in revenue amid a surge to 140T tokens/day of inference. The listing comes alongside Qwen OSS integrations and reflects the AI-agent boom in China, where 'ciyuan' has become the everyday word for token.

NVIDIA adds M2.7 (230B/10B active) to AI stack via NIM/HF/vLLM/NemoClaw/OpenShell; 2.5-2.7x Blackwell, SWE-Pro 56%, self-evolving SRE. Chinese IPO $79M rev amid 140T tokens/day surge, Qwen OSS integrations.

Sources (2)
Updated Apr 13, 2026