NVIDIA memory-hierarchy signals: SRAM-first reports vs. HBM4-heavy product roadmaps
Key Questions
What is the outlook for hyperscaler memory spending?
Memory is projected to consume 30% of hyperscaler data center capex in 2026, a 4x increase from 2023, with shortages persisting through 2027. Hyperscalers face HBM supply constraints amid high demand, driving expansions such as SK Hynix's 30-trillion-won capex program at its Yongin site.
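The 30%-in-2026 and 4x-from-2023 figures together imply a 2023 memory share of capex; a minimal sanity-check sketch (illustrative arithmetic only, not sourced data):

```python
# Back-of-envelope check on the capex claim above: if memory is 30% of
# hyperscaler data center capex in 2026, and that share is a 4x increase
# over 2023, the implied 2023 share is 30% / 4 = 7.5%.
share_2026 = 0.30
growth_multiple = 4
share_2023 = share_2026 / growth_multiple
print(f"Implied 2023 memory share of capex: {share_2023:.1%}")  # 7.5%
```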
What are the HBM production plans from key suppliers?
Micron projects $24B in Q2 revenue, with HBM capacity 81% utilized through 2027-28 and HBM4 slated for NVIDIA's Rubin; Samsung eyes a $38B Q1 profit and is pitching 16Gbps HBM4E to AMD and Rebellions. SK Hynix, the market leader, expects shortages to last to 2030. FuriosaAI's Renegade (72GB, 1.5TB/s) enters mass production in July, and China's CXMT has started 12-layer HBM production.
How is NVIDIA's Rubin Ultra configured for memory?
Per UBS reports, NVIDIA's Rubin Ultra uses a 2-die CoWoS package with 1TB of HBM, not the rumored 4-die configuration, targeting 450-500kW racks. TSMC's N2 and CoWoS capacity expansions exceed $10B. CoPoS is slated for 2028 alongside UCIe/HBM4 packaging.
What challenges exist in HBM and packaging supply?
HBM shortages persist through 2027-2030, compounded by ABF, CoWoS, and helium constraints and TEL HBM Synapse issues. Samsung's push for AI-memory dominance faces rivalry from SK Hynix on scalability. TSMC's N2 ramp and advanced packaging such as CoWoS are the frontline capacity expansions.
What role does Storage Class Memory (SCM) play?
SCM fills the latency and cost gap between DRAM and NAND, addressing AI data-movement bottlenecks alongside early HBM4 validations for next-generation systems. Potential HBF NAND/SCM integrations are being explored. Phononics thermoelectric coolers (TECs) target hotspots in high-power racks.
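The DRAM-NAND gap that SCM fills can be made concrete with a tier table; the latency figures below are rough order-of-magnitude assumptions for illustration, not vendor specifications:

```python
# Illustrative memory/storage hierarchy showing where SCM sits between
# DRAM and NAND. Latencies are ballpark assumptions, not measured values.
tiers = [
    # (tier, typical access latency, byte-addressable?)
    ("SRAM (on-die cache)", "~1 ns",     True),
    ("HBM/DRAM",            "~100 ns",   True),
    ("SCM",                 "~1-10 us",  True),   # fills the DRAM-NAND gap
    ("NAND SSD",            "~100 us",   False),  # block-addressable
]
for name, latency, byte_addr in tiers:
    print(f"{name:22s} latency {latency:9s} byte-addressable={byte_addr}")
```

The key design point is that SCM keeps byte addressability (like DRAM) while approaching NAND-class capacity and cost, which is why it is attractive for staging AI working sets.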
What is FuriosaAI's contribution to AI memory?
FuriosaAI's second-generation Renegade AI chip offers 72GB of HBM at 1.5TB/s and enters mass production in July for Samsung SDS Cloud, sourcing HBM from SK Hynix (53%) and Samsung (35%). This signals an HBM4-heavy product trend, in contrast to SRAM-first reports.
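The quoted Renegade figures allow a quick derived metric: how long one full sweep of the 72GB HBM takes at the stated 1.5TB/s (illustrative arithmetic from the numbers above):

```python
# Rough arithmetic on the Renegade memory figures: time to stream the
# full 72 GB of HBM once at the stated 1.5 TB/s bandwidth.
capacity_gb = 72
bandwidth_gb_s = 1500  # 1.5 TB/s
sweep_ms = capacity_gb / bandwidth_gb_s * 1000
print(f"Full-memory sweep: {sweep_ms:.0f} ms")  # 48 ms
```

A ~48 ms full-memory sweep is one way to compare HBM-heavy parts against SRAM-first designs, where on-die capacity is far smaller but effective bandwidth is much higher.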
How does Tesla factor into these memory developments?
Tesla's AI6 tapes out in December, tying into UCIe/HBM4 packaging and advanced-packaging needs such as CoWoS. Elon Musk's Terafab aims for 200B AI chips annually. Apple is testing glass substrates, broadening prospects for that technology.
What packaging innovations support HBM4?
TSMC's N2/CoWoS, UCIe chiplets, and LINTEC's 2026 back-grinding tape for high-yield PCBL address XPU integrations. Early HBM4 validations point to next-generation AI/HPC systems. These developments reinforce memory-hierarchy signals favoring HBM4 over SRAM-first designs.
Summary
- Hyperscaler memory: 30% of data center capex in 2026 (4x 2023); shortages through 2027.
- Micron: $24B Q2 revenue; 81% HBM utilization through 2027-28; HBM4 for Rubin.
- Samsung: $38B Q1 profit; 16Gbps HBM4E for AMD/Rebellions (144GB).
- FuriosaAI Renegade: 72GB at 1.5TB/s; July mass production; supply split SK Hynix 53% / Samsung 35%.
- SK Hynix: shortages to 2030; 30T capex in Yongin.
- TSMC: N2/CoWoS capacity $10B+; NVIDIA Rubin Ultra with 2-die CoWoS and 1TB HBM for 450-500kW racks; CoPoS in 2028.
- Tesla: AI6 December tapeout; UCIe/HBM4 packaging.
- Other: HBF NAND/SCM potential; ABF/CoWoS/helium shortages plus TEL HBM Synapse issues; Phononics TECs for hotspots.