Localization phase transition & heavy-tailed spectral scaling in deep nets
Key Questions
What is the main focus of Highlight HL001?
The highlight centers on heavy-tailed eigenspectra and localization phase transitions in deep networks. It covers recent advances: Mirror Descent (characterizing gradient-descent bias on manifolds), MUD (a momentum extension of Muon), stable step size bounds for cross-entropy loss near its singularities, and neural characteristic functions for spectral alignment. Active research directions include operator proofs and finite-width effects.
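As background on how "heavy-tailed spectral scaling" is typically measured, here is a minimal sketch (not taken from any of the cited works) that estimates a power-law tail exponent from a layer's eigenspectrum via the standard continuous power-law MLE on the top-k eigenvalues of W^T W. The Gaussian matrix is a hypothetical stand-in for a trained weight matrix; heavy tails (small alpha) are usually reported for well-trained layers.

```python
import numpy as np

def tail_exponent(eigs, k=50):
    """Continuous power-law MLE on the top-k eigenvalues:
    alpha = 1 + k / sum(log(lambda_i / lambda_min))."""
    tail = np.sort(eigs)[-k:]            # top-k eigenvalues, ascending
    return 1.0 + k / np.sum(np.log(tail / tail[0]))

# Hypothetical stand-in for a trained layer; pass real weights in practice.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)) / np.sqrt(512)
eigs = np.linalg.svd(W, compute_uv=False) ** 2   # spectrum of W^T W
print(f"estimated tail exponent alpha ~ {tail_exponent(eigs):.2f}")
```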
What does the 'Stable Step Size Bounds for Cross-Entropy Loss' cover?
This is a 5:27 YouTube video deriving stable step size bounds for cross-entropy loss, with a focus on the loss's singularities. It ties into the highlight's theme of spectral scaling and localization transitions in deep nets.
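The video's derivation is not reproduced here. As a sketch of the classical criterion such bounds refine, gradient descent on a loss with sharpness lambda_max (the largest Hessian eigenvalue) is stable roughly when the step size satisfies eta < 2 / lambda_max; the snippet below estimates lambda_max for a toy cross-entropy model by power iteration on Hessian-vector products (the model, shapes, and iteration count are all illustrative assumptions).

```python
import torch
import torch.nn.functional as F

def top_hessian_eigenvalue(loss_fn, params, iters=50):
    """Estimate the sharpness lambda_max by power iteration on
    Hessian-vector products (double backward)."""
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    vnorm = torch.sqrt(sum((u ** 2).sum() for u in v))
    v = [u / vnorm for u in v]           # start from a random unit vector
    lam = 0.0
    for _ in range(iters):
        hv = torch.autograd.grad(grads, params, grad_outputs=v,
                                 retain_graph=True)
        lam = sum((h * u).sum() for h, u in zip(hv, v)).item()  # Rayleigh quotient
        hnorm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / (hnorm + 1e-12) for h in hv]
    return lam

# Toy logistic-regression problem with cross-entropy loss.
torch.manual_seed(0)
X, y = torch.randn(128, 10), torch.randint(0, 3, (128,))
W = torch.randn(10, 3, requires_grad=True)
lam_max = top_hessian_eigenvalue(lambda: F.cross_entropy(X @ W, y), [W])
print(f"sharpness ~ {lam_max:.4f}; classical stable step size eta < {2 / lam_max:.4f}")
```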
How do Neural Characteristic Functions relate to this research?
The arXiv paper 'Learning Adaptive Distribution Alignment with Neural ...' uses neural characteristic functions in the spectral domain to encode dependencies between features and structure. This supports the highlight's spectral-alignment theme and sharpens proofs involving initialization priors and noise in gating mechanisms.
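The paper's neural parameterization is not reproduced here. As a sketch of the underlying mechanism, the snippet below computes an empirical characteristic function discrepancy between two feature batches, with fixed random frequencies standing in for the frequencies a neural characteristic function would presumably learn (all names and shapes are hypothetical).

```python
import torch

def ecf(X, T):
    """Empirical characteristic function of samples X (n, d) at
    frequencies T (m, d): phi(t) = mean_j exp(i <t, x_j>)."""
    proj = X @ T.T                                   # (n, m) inner products
    return torch.complex(proj.cos().mean(0), proj.sin().mean(0))

def cf_distance(X, Y, T):
    """Squared CF discrepancy averaged over frequencies; zero iff the
    two distributions agree at every sampled frequency."""
    return (ecf(X, T) - ecf(Y, T)).abs().pow(2).mean()

torch.manual_seed(0)
src = torch.randn(256, 16)               # source features
tgt = torch.randn(256, 16) + 0.5         # shifted target features
T = torch.randn(64, 16)                  # fixed random frequencies; a neural
                                         # CF would learn these instead
print(f"CF discrepancy: {cf_distance(src, tgt, T).item():.4f}")
```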
Multiple studies converge on heavy-tailed eigenspectra and localization transitions. New as of 2026-03-19: Mirror Descent (GD bias on manifolds), MUD (Muon momentum extension), Stable Step Size Bounds (CE singularities), and Neural Characteristic Functions (spectral alignment), which together sharpen proofs for initialization priors plus noise in gating. Active: operator proofs, finite-width effects.
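For the localization side of the transition, a standard diagnostic (a sketch under common random-matrix conventions, not taken from the cited works) is the inverse participation ratio of a unit eigenvector: roughly 1/n when its mass is spread over all n coordinates, O(1) when it concentrates on a few.

```python
import numpy as np

def ipr(v):
    """Inverse participation ratio sum_i v_i^4 of a unit vector:
    ~1/n when delocalized, O(1) when localized."""
    v = v / np.linalg.norm(v)
    return np.sum(v ** 4)

rng = np.random.default_rng(1)
n = 1024
deloc = rng.standard_normal(n)           # mass spread over all coordinates
loc = np.zeros(n); loc[:4] = 1.0         # mass on a few coordinates
print(f"delocalized IPR ~ {ipr(deloc):.4f} (compare 1/n = {1 / n:.4f})")
print(f"localized   IPR ~ {ipr(loc):.4f}")
```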