Mistral Medium 3.5: open-source dense model crushes SWE-Bench
Key Questions
What are the key features of Mistral Medium 3.5?
Mistral Medium 3.5 is a 128B-parameter dense, MIT-licensed open-source model that scores 77.6% on SWE-Bench, outperforming Devstral and Qwen3.5. It supports a 256k context length, vision capabilities, and agentic, long-running tasks. It can be quantized for self-hosting on four GPUs with 32-64 GB of VRAM each.
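The self-hosting claim above can be sanity-checked with back-of-envelope arithmetic: a 128B-parameter model's weight footprint depends on quantization bit-width, and sharding it across four GPUs divides the per-card requirement. The sketch below is illustrative only; the ~20% overhead factor for KV cache and activations is an assumption, not a figure from the article.

```python
# Rough VRAM estimate for serving a dense model sharded across GPUs.
# The 128B parameter count and 4-GPU setup come from the article;
# the bit-widths tried and the 20% runtime overhead are assumptions.

def vram_per_gpu_gb(params_b: float, bits: int, num_gpus: int,
                    overhead: float = 0.20) -> float:
    """Weights split evenly across GPUs, plus a flat overhead factor
    for KV cache and activations (assumed, not measured)."""
    weight_gb = params_b * bits / 8  # billions of params -> GB of weights
    return weight_gb * (1 + overhead) / num_gpus

for bits in (4, 8):
    per_gpu = vram_per_gpu_gb(128, bits, num_gpus=4)
    print(f"{bits}-bit: ~{per_gpu:.0f} GB per GPU across 4 GPUs")
```

Under these assumptions, 4-bit quantization lands around 19 GB per GPU and 8-bit around 38 GB, which is consistent with the article's "four GPUs with 32-64 GB of VRAM" figure (8-bit needs the larger cards; 4-bit leaves headroom for long-context KV cache).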
How does Mistral Medium 3.5 perform on SWE-Bench?
It achieves 77.6% on SWE-Bench, beating models like Devstral and Qwen3.5. This positions it as a leader in software engineering benchmarks among open-source models.
What is Mistral's strategy with Medium 3.5 amid the MoE trend?
With Medium 3.5 launching alongside Vibe 2.0, Mistral is doubling down on enterprise-grade open-weight AI, betting on dense-model reliability even as Mixture of Experts (MoE) architectures surge in popularity.
In short: a 128B dense, MIT-licensed open-source model hitting 77.6% on SWE-Bench (beating Devstral and Qwen3.5), with 256k context, vision, and agentic long-task support; quantizable for self-hosting on four 32-64 GB GPUs, as part of an enterprise-grade push amid the MoE surge.