LLM Engineering Digest

Qwen3.6-35B-A3B MoE VLM Open-Sourced

Key Questions

What is Qwen3.6-35B-A3B?

Qwen3.6-35B-A3B is a sparse Mixture-of-Experts (MoE) vision-language model open-sourced by the Qwen team. Of its 35B total parameters, only 3B are active per token, and it supports agentic coding as well as vision tasks with an emphasis on production efficiency.
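The "3B active out of 35B total" figure comes from sparse MoE routing: a gating network picks a small top-k subset of experts per token, so only those experts' parameters participate in the forward pass. The sketch below illustrates that general idea in a few lines of NumPy; it is a minimal toy router, not Qwen's actual architecture, and all shapes and names here are illustrative assumptions.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route one token through the top-k of many experts (sparse MoE).

    Only the selected experts run, so active parameters per token are a
    small fraction of the total -- the idea behind "35B total, 3B active".
    Illustrative toy, not Qwen's actual router.
    """
    logits = x @ gate_w                      # gating scores, shape (num_experts,)
    topk = np.argsort(logits)[-k:]           # indices of the k highest-scoring experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()                 # softmax over the selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, num_experts = 8, 16
x = rng.normal(size=d)
gate_w = rng.normal(size=(d, num_experts))
# Each "expert" is a small linear layer; only 2 of the 16 run per token.
expert_ws = [rng.normal(size=(d, d)) for _ in range(num_experts)]
experts = [lambda v, w=w: v @ w for w in expert_ws]

y = moe_forward(x, gate_w, experts, k=2)
print(y.shape)
```

With k=2 of 16 experts, only 1/8 of the expert parameters are touched per token, which is the same order as Qwen3.6's 3B/35B (under 10%) active ratio.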

What performance benchmarks does Qwen3.6-35B-A3B achieve?

It scores 73.4% on the SWE-bench coding benchmark and generates roughly 180 tokens per second on a single RTX 4090 GPU.
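Decode throughput figures like "180 tok/s" are typically measured as tokens generated divided by wall-clock decode time. The sketch below shows that measurement pattern with a stand-in generator that sleeps ~5.5 ms per token (1 / 0.0055 ≈ 182 tok/s); `fake_generate` is a hypothetical placeholder for a real `model.generate` call, not part of any Qwen API.

```python
import time

def measure_throughput(generate, prompt, n_tokens):
    """Tokens per second = tokens generated / wall-clock decode time."""
    start = time.perf_counter()
    generate(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

def fake_generate(prompt, n_tokens):
    # Stand-in for a real model's decode loop: ~5.5 ms per token,
    # i.e. on the order of 180 tok/s (actual rate depends on sleep granularity).
    for _ in range(n_tokens):
        time.sleep(0.0055)

tps = measure_throughput(fake_generate, "hello", 64)
print(f"{tps:.0f} tok/s")
```

Real benchmarks usually exclude prompt prefill time and average over several runs; this sketch times the whole call for simplicity.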

Why is Qwen3.6-35B-A3B significant for the open-source AI community?

Its sparse MoE architecture activates only 3B of its 35B parameters per token (under 10%), delivering strong agentic coding and vision performance at a fraction of the inference cost of a comparably capable dense model, which makes advanced VLM capabilities more accessible to the open-source community.

In short: a sparse MoE with 3B active parameters reaches 73.4% on SWE-bench and 180 tok/s on an RTX 4090, bringing agentic coding and vision to production-grade efficiency.

Updated Apr 18, 2026