AI API Commercializer

**Poolside Laguna M.1/XS.2 Open Release: #12 SWE-Bench Pro, 68-72% SWE-Bench Agentic Coding** [developing]

Key Questions

What are Poolside's Laguna M.1 and XS.2 models?

Laguna M.1 ranks #12 on SWE-Bench Pro. XS.2 is a 33B-parameter mixture-of-experts (MoE) model with 3B active parameters per token, released under the Apache 2.0 license. Both are open releases optimized for agentic coding; XS.2 runs locally via Ollama on Mac and was trained with asynchronous-RL efficiency techniques.
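To see why the MoE layout matters for local use, here is a back-of-envelope sketch. The 33B total / 3B active figures come from the release details above; the 4-bit quantization assumption is hypothetical (a common choice for GGUF/Ollama builds, not something the release specifies):

```python
# Back-of-envelope cost estimate for a 33B MoE model with 3B active params.
TOTAL_PARAMS = 33e9
ACTIVE_PARAMS = 3e9

# MoE routes each token through a small subset of experts, so per-token
# compute scales with active parameters, not total parameters.
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS

# Memory still scales with total parameters. Assuming hypothetical 4-bit
# quantization (~0.5 bytes per parameter):
mem_gb_q4 = TOTAL_PARAMS * 0.5 / 1e9

print(f"active fraction per token: {active_fraction:.1%}")
print(f"approx. weight memory at 4-bit: {mem_gb_q4:.1f} GB")
```

So roughly 9% of the weights do work on any given token, while the full ~16.5 GB of quantized weights must still fit in memory, which is why a MoE of this size is plausible on a Mac.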

What SWE-bench scores do Laguna models achieve?

Laguna M.1 scores 72.5% and XS.2 scores 68.2% on SWE-bench Verified. These results position them among the top agentic coding models, in line with the trend set by DeepSeek and Kimi releases.

Where can I access Poolside Laguna models?

The models are available on Hugging Face (surfaced via reposts by NielsRogge and @_akhaliq), through OpenRouter for low-cost indie wrappers, and in Ollama and GGUF formats for local use. They are the first public models from Poolside AI's Laguna family.
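Since OpenRouter exposes an OpenAI-compatible chat-completions endpoint, a minimal wrapper request might look like the sketch below. The model slug `poolside/laguna-xs-2` is a hypothetical placeholder (check OpenRouter's model catalog for the real identifier), and the payload is only constructed here, not sent:

```python
import json

# Hypothetical model slug -- verify the actual identifier on OpenRouter.
MODEL = "poolside/laguna-xs-2"
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload for OpenRouter."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Fix the failing test in utils.py")
body = json.dumps(payload)  # POST to API_URL with a Bearer auth header
print(body)
```

Because the payload shape is the standard OpenAI one, existing B2C/B2B wrapper code can usually switch to a Laguna model by changing only the model slug and base URL.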

Poolside's first open releases, Laguna M.1 (#12 on SWE-Bench Pro) and XS.2 (33B MoE, 3B active, Apache 2.0, Ollama on Mac), hit 68-72% on SWE-bench Verified. Access is via Hugging Face reposts (NielsRogge/@_akhaliq), OpenRouter for low-cost indie B2C/B2B wrappers, Ollama, and GGUF. The Muon optimizer and async-RL efficiency align with the DeepSeek/Kimi agentic-coding wave.

Sources (2)
Updated Apr 29, 2026