Open Source AI

GLM-5.1 744B OSS MoE: #1 OSS/#3 global SWE-Bench Pro vs Opus/GPT, 8hr autonomous agents

Key Questions

What is GLM-5.1 and its key specs?

Zhipu AI's GLM-5.1 is a 744B-parameter open-source MoE model released under the MIT license on Hugging Face. It is optimized for agentic tasks and supports PEFT fine-tuning and local deployment.

How does GLM-5.1 perform on benchmarks?

It ranks #1 among open-source models and #3 globally on SWE-Bench Pro (with 1700+ step autonomous runs), with strong results on VectorDBBench and KernelBench, rivaling closed-source leaders like Opus and GPT.

What makes GLM-5.1 fit the agentic OSS trend?

It supports autonomous agent runs of up to 8 hours, aligning with the broader agentic OSS boom (Qwen, Gemma, Hermes), and excels on Terminal-Bench for long-running, complex engineering tasks.
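To illustrate what a step- and wall-clock-budgeted autonomous run involves, here is a minimal, schematic agent loop in Python. The `policy` and `tools` interfaces, and the 1700-step / 8-hour defaults, are assumptions for the sketch; this is not GLM-5.1's actual agent harness.

```python
import time

def run_agent(policy, tools, max_steps=1700, budget_s=8 * 3600):
    """Schematic autonomous-agent loop (hypothetical interface, not GLM-5.1's harness).

    Runs the policy until it emits a "finish" action, the step budget is
    exhausted, or the wall-clock budget elapses.
    """
    state = {"history": [], "done": False}
    start = time.monotonic()
    for _ in range(max_steps):
        # Stop if the wall-clock budget (e.g. 8 hours) is exceeded.
        if time.monotonic() - start > budget_s:
            break
        # The policy (the model) chooses the next action from the state so far.
        action = policy(state)
        if action["name"] == "finish":
            state["done"] = True
            break
        # Execute the chosen tool and record the observation.
        result = tools[action["name"]](**action.get("args", {}))
        state["history"].append((action, result))
    return state
```

A run like this terminates on whichever budget is hit first, which is what makes "1700+ step" and "8-hour" limits comparable across harnesses.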

Zhipu AI's GLM-5.1, a 744B MoE model under the MIT license on Hugging Face, tops open source and ranks #3 globally on SWE-Bench Pro, Terminal-Bench, and NL2Repo with 1700+ step autonomous runs; it reinforces the agentic/OSS boom alongside Qwen, Gemma, and Hermes, with HF quants, PEFT, and local deployment accelerating adoption.

Updated Apr 8, 2026