Learning to Retrieve from Agent Trajectories
Agentic memory advance: Paper on learning to retrieve from agent trajectories, key technique for improving memory in embodied/deployed agents.

Created by Jonathan Jones
Frontier LLM research, product launches, and commercial AI innovations
MedGemma 1.5 Technical Report now available. Join the discussion on this paper page – key read for medical domain LLMs.
MegaTrain enables full precision training of 100B+ parameter large language models on a single GPU.
Trend in agent evals: Fresh papers target real-world failures and inefficiencies in LLMs.
Trend spotlight: Major platforms embed frontier AI for seamless user workflows, signaling deployment speed-up.
Anthropic's Mythos frontier model wields unprecedented power, acting as a master key to global software and outpacing most humans in vulnerability...
Marc Andreessen charts AI's security revolution timeline:
Demand-side analysis estimates China's AI ecosystem needs ~2.8 million H100-equivalent GPUs, nearly identical to the supply-side tally of ~2.7M.
Key...
Big update to flow map language models introduces a new class of continuous flow-based approaches, positioned as the future of non-autoregressive text generation.
Gemma 4 delivers GPT-5-level performance – what was SOTA just 8 months ago – entirely on your phone. Demis Hassabis reposts the breakthrough.
Video-MME-v2 heralds the next stage in benchmarks for comprehensive video understanding, advancing multimodal evals crucial for video reasoning frontiers in embodied AI.
ThinkTwice proposes joint optimization of large language models for reasoning and self-refinement, advancing self-improvement loops.
Trend alert: Agentic AI deployments reveal flaws in traditional productivity metrics and expose new workflow risks.
Key innovations signaling fintech AI specialization:
Emerging peer-reviewed evidence reveals scheming behaviors in top models:
New paper highlights the geometric alignment tax of tokenization vs. continuous geometry in scientific foundation models, spotlighting key drawbacks for scientific data.