AI Launch Radar

DeepSeek V4 OSS MoE launch challenges flagships

Key Questions

What is DeepSeek V4?

DeepSeek V4 is an open-source Mixture-of-Experts (MoE) model with a 1.6T/284B parameter architecture, 49B active parameters per token, and a 1M-token context window at a claimed 27% FLOPs. It is optimized for cost-effectiveness on domestic chips.
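
To ground those figures, here is a quick, assumption-laden arithmetic sketch: the spec does not say which parameter pool the 49B active count is drawn from, and per-token FLOPs also depend on attention and shared layers, so the quoted 27% figure cannot be re-derived from parameter counts alone.

```python
# Rough sparsity arithmetic for the quoted DeepSeek V4 figures.
# Assumption: we don't know whether the 49B active parameters are drawn
# from the 284B pool or the 1.6T pool, so both ratios are shown. Neither
# directly reproduces the quoted 27% FLOPs figure, which also reflects
# attention and shared (non-expert) components.
ACTIVE = 49e9

for label, total in [("284B pool", 284e9), ("1.6T pool", 1.6e12)]:
    print(f"active / {label}: {ACTIVE / total:.1%}")
# active / 284B pool: 17.3%
# active / 1.6T pool: 3.1%
```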

How does DeepSeek V4 perform compared to others?

DeepSeek V4 Pro-Max tops open-source leaderboards in reasoning, coding, math, and agentic tasks, rivaling closed-source flagships such as GPT-5.4, Opus, and Gemini and closing the gap with frontier models.

What are the key specs of DeepSeek V4?

It features a 1.6T/284B MoE architecture with 49B active parameters and supports a 1M-token context window. The preview release emphasizes high performance at low cost.
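
For readers unfamiliar with the architecture, the sketch below illustrates generic top-k MoE routing in plain NumPy; the function name, shapes, and random weights are all invented for illustration, and this is not DeepSeek's implementation. The point it demonstrates is why active parameters, not the total pool, drive per-token compute.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def topk_moe_layer(x, router_w, experts_w, k=2):
    """Route one token to its top-k experts and mix their outputs."""
    scores = router_w @ x                 # (n_experts,) token-to-expert affinity
    top = np.argsort(scores)[-k:]         # indices of the k best-scoring experts
    gates = softmax(scores[top])          # normalized mixing weights
    # Only the k selected experts run, so per-token compute tracks the
    # active parameter count, not the total pool -- the MoE FLOPs savings.
    return sum(g * (experts_w[i] @ x) for g, i in zip(gates, top))

d, n_experts = 8, 16
x = rng.normal(size=d)
router_w = rng.normal(size=(n_experts, d))
experts_w = rng.normal(size=(n_experts, d, d))
print(topk_moe_layer(x, router_w, experts_w, k=2).shape)  # (8,): 2 of 16 experts ran
```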

What is the market interest in DeepSeek?

DeepSeek is valued at more than $20B, with interest from Alibaba and Tencent. The launch upends the global tech landscape a year after the company's previous releases.

Where to find DeepSeek V4 benchmarks?

Benchmarks are available on Hugging Face (HF); the signals to watch are adoption, total cost of ownership (TCO), and whether the Pro version beats top closed-source models.
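
Once the weights are published, the standard Hugging Face loading flow would look roughly like the sketch below; the repo id deepseek-ai/DeepSeek-V4 is a hypothetical placeholder, not a confirmed path, so check DeepSeek's HF organization page for the real name.

```python
# Sketch of the standard Hugging Face loading flow for an open-weights MoE.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "deepseek-ai/DeepSeek-V4"  # hypothetical repo id, not confirmed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",      # use the checkpoint's native precision
    device_map="auto",       # shard across available GPUs
    trust_remote_code=True,  # MoE releases often ship custom modeling code
)

inputs = tokenizer("What is a Mixture-of-Experts model?", return_tensors="pt")
inputs = inputs.to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```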

1.6T/284B MoE (49B active)/1M ctx at 27% FLOPs; Pro-Max tops OSS lists in reasoning/coding/math/agentic, rivals GPT-5.4/Opus/Gemini; low-cost optimized for domestic chips. $20B+ valuation w/Alibaba/Tencent interest. Watch: HF benchmarks/adoption/TCO.

Updated Apr 24, 2026