**Mistral Medium 3.5-128B Multimodal Agentic HF Launch**
Key Questions
**What is Mistral Medium 3.5?**
Mistral Medium 3.5 is a 128B-parameter dense multimodal agentic model with a 256k context length. It excels at function calling and reasoning, achieving 77.6% on SWE-Bench. The model is launched on Hugging Face with GGUF weights and support for Ollama and vLLM deployment.
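As a sketch of how function calling might look when the model is served behind an OpenAI-compatible endpoint (for example via vLLM): the model identifier and the `run_tests` tool schema below are hypothetical placeholders, not confirmed names from the release.

```python
# Sketch of an OpenAI-compatible function-calling request payload for a
# model served with vLLM. The model id and the tool schema are
# hypothetical placeholders, not confirmed identifiers.
import json


def build_tool_call_request(user_message: str) -> dict:
    """Assemble a chat-completions payload that exposes one tool."""
    return {
        "model": "mistral-medium-3.5",  # hypothetical model id
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "run_tests",  # hypothetical tool
                    "description": "Run the project's test suite.",
                    "parameters": {
                        "type": "object",
                        "properties": {"path": {"type": "string"}},
                        "required": ["path"],
                    },
                },
            }
        ],
        "tool_choice": "auto",
    }


if __name__ == "__main__":
    payload = build_tool_call_request("Fix the failing test in utils/")
    print(json.dumps(payload, indent=2))
```

The payload would be POSTed to the server's `/v1/chat/completions` route; the model then either answers directly or returns a `tool_calls` entry for the harness to execute.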
**How does Mistral Medium 3.5 perform compared to other models?**
It outperforms Magistral and Devstral on 18 tasks, particularly coding and reasoning benchmarks, and independent tests confirm its lead in these areas. This makes it a strong contender alongside open models such as DeepSeek and Qwen.
**Where is Mistral Medium 3.5 available?**
The model is hosted on Hugging Face for easy access, shipping with GGUF weights and support for deployment through Ollama and vLLM. Its modified MIT license enables low-cost indie B2C/B2B applications.
**What applications does Mistral Medium 3.5 power?**
It powers asynchronous coding agents in Vibe and Le Chat Work modes, enabling complex tasks such as coding, UI generation, and multimodal SaaS features. It also supports indie B2C/B2B wrappers built around agentic workflows.
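Agentic workflows like these generally follow a loop of model turns and tool executions. A minimal sketch of such a loop, with a stubbed model standing in for a real endpoint (every name here is illustrative, not part of any Mistral or Le Chat API):

```python
# Minimal agent-loop sketch: the model either returns a final answer or
# requests a tool call, which the harness executes and feeds back.
# `fake_model` stands in for a real chat endpoint; all names here are
# illustrative, not an actual Mistral API.
from typing import Callable


def fake_model(messages: list[dict]) -> dict:
    """Stub model: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "list_files", "arguments": {"path": "."}}}
    return {"content": "Done: repository inspected."}


# Registry of tools the agent may invoke (here, a single stub).
TOOLS: dict[str, Callable[..., str]] = {
    "list_files": lambda path: f"files under {path}: [README.md, src/]",
}


def run_agent(task: str, max_turns: int = 5) -> str:
    """Alternate model turns and tool executions until a final answer."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = fake_model(messages)
        if "tool_call" in reply:
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["content"]
    return "max turns exceeded"


if __name__ == "__main__":
    print(run_agent("Inspect the repo"))  # → Done: repository inspected.
```

An async variant of the same loop (awaiting the model call and tool execution) is what lets such agents run long coding tasks in the background.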
**What license does Mistral Medium 3.5 use?**
It is released under a modified MIT license, which allows low-cost development of indie B2C/B2B coding, UI, and multimodal SaaS products and positions the model alongside open releases such as DeepSeek and Qwen.
**Summary:** Mistral Medium 3.5 is a 128B dense multimodal agentic model with a 256k context and a 77.6% SWE-Bench score, strong at function calling and reasoning and ahead of Magistral and Devstral on 18 tasks. It is available on Hugging Face (GGUF, Ollama, vLLM), powers async Vibe and Le Chat Work agents, and its modified MIT license enables low-cost indie B2C/B2B coding, UI, and multimodal SaaS alongside open models like DeepSeek and Qwen.