LLM Innovation Tracker

SubQ: A 12M-Token Subquadratic LLM

Key Questions

What is SubQ?

SubQ is a subquadratic large language model launched by a Miami startup that raised a $29M seed round. It uses sparse attention to support a 12M-token context window, with the stated goal of state-of-the-art performance in long-horizon reasoning without quadratic compute scaling.

What makes SubQ's architecture special?

SubQ uses a fully subquadratic architecture: compute grows roughly linearly, rather than quadratically, with context length. This makes inference faster and cheaper, positioning it to challenge models such as DeepSeek and GPT-5.5.
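SubQ's exact attention mechanism has not been published, but the scaling claim can be illustrated with a generic sliding-window sparse attention sketch: if each token attends only to a fixed window of `w` recent keys, compute grows as O(n·w), linear in sequence length n, instead of the O(n²) of dense attention. All names and parameters below are illustrative assumptions, not SubQ's actual implementation.

```python
# Minimal sketch of sliding-window sparse attention (illustrative only;
# SubQ's real mechanism is not public). Each token attends to at most
# `window` previous positions, so total compute is O(n * window)
# rather than O(n^2) for dense causal attention.
import numpy as np

def sliding_window_attention(q, k, v, window=4):
    """q, k, v: (n, d) arrays. Causal attention restricted to `window` keys."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo = max(0, i - window + 1)          # only the last `window` positions
        scores = q[i] @ k[lo:i + 1].T / np.sqrt(d)
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:i + 1]
    return out

rng = np.random.default_rng(0)
n, d = 16, 8
q, k, v = rng.normal(size=(3, n, d))
out = sliding_window_attention(q, k, v, window=4)
print(out.shape)  # (16, 8)
```

Doubling `n` here doubles the work, whereas dense attention would quadruple it; real subquadratic models typically combine such local patterns with a small set of global or strided connections so distant tokens can still exchange information.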

What is the current status and reception of SubQ?

SubQ is in development; its first release is SubQ 1M-Preview. It has generated significant buzz on Hacker News as a potential efficiency breakthrough for AI agents, and multiple media reports have reinforced that momentum.

In short: a $29M-seeded Miami startup has launched SubQ, a subquadratic LLM whose sparse attention enables a 12M-token context window and targets state-of-the-art long-horizon reasoning without quadratic compute blowup; faster, cheaper inference positions it against DeepSeek and GPT-5.5, and Hacker News buzz plus multiple press reports signal momentum behind an efficiency breakthrough for agents.

Updated May 6, 2026