DuoKD: Dual Positive-Negative Knowledge Extraction from LLMs
DuoKD introduces a dual positive-negative knowledge extraction strategy based on LLMs, integrated with a knowledge distillation mechanism for efficient models.

Created by Bo Long
Deep learning recommendation research, covering sequential, multimodal, and graph models
Key breakthrough for long-horizon sequential recommendation:
This AI system deploys a Skill Gap Severity Index to identify missing and underserved skills for target roles, paired with a Customized Learning Plan Generator that produces tailored recommendations.
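The post does not give the index's formula, but a minimal sketch of how such a severity score could work is: weight each skill by its importance to the target role, scaled by how much proficiency the user is missing. The function and field names below are illustrative assumptions, not the paper's actual design.

```python
def skill_gap_severity(role_skill_weights, user_proficiency):
    """Hypothetical Skill Gap Severity Index (illustrative sketch only):
    for each skill the role requires, severity = role importance
    * (1 - user proficiency), with proficiency in [0, 1].
    Skills the user lacks entirely default to proficiency 0."""
    return {
        skill: w * (1.0 - user_proficiency.get(skill, 0.0))
        for skill, w in role_skill_weights.items()
    }

# Toy example: role importance weights and a user's self-reported proficiency.
role = {"sql": 0.9, "python": 0.8, "spark": 0.5}
user = {"python": 0.7, "sql": 0.2}
severity = skill_gap_severity(role, user)
ranked = sorted(severity, key=severity.get, reverse=True)
# A learning-plan generator could then target skills in `ranked` order.
```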
Discover GraphER, a method that splits the retrieval stage for efficiency in recommender systems:
Addressing limitations of existing methods, researchers propose DNSC, a Diffusion-enhanced Negative Sampling model in multimodal contrastive learning for recommendation.
Emerging generative rec models push real-world deployment:
One-layer transformers with simplified position-only attention, trained via gradient descent, provably recover all teacher models from a specific class. This provides key theoretical foundations for model-efficient sequential architectures.
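"Position-only attention" means the attention pattern is a function of token positions alone, independent of token content. A minimal NumPy sketch of such a layer (names and shapes are illustrative, not from the paper):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def position_only_attention(X, pos_scores, W_v):
    """One attention layer whose mixing weights depend only on positions.
    X:          (T, d) token embeddings
    pos_scores: (T, T) learned position-to-position logits (content-free)
    W_v:        (d, d) value projection
    """
    A = softmax(pos_scores, axis=-1)   # attention pattern fixed w.r.t. content
    return A @ (X @ W_v)               # (T, d) mixed values

rng = np.random.default_rng(0)
T, d = 5, 4
X = rng.normal(size=(T, d))
pos = rng.normal(size=(T, T))
W_v = rng.normal(size=(d, d))
out = position_only_attention(X, pos, W_v)
```

Because there are no query/key projections, the only learnable pieces are `pos_scores` and `W_v`, which is what makes recovery guarantees tractable for this class.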
New arXiv paper [2603.21541] derives sharper generalization error bounds for Transformer models using offset Rademacher complexity. Vital theoretical progress for robust deep recsys training.
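For background (this is the standard definition from the localization literature, not the paper's specific bound): the offset Rademacher complexity of a function class $\mathcal{F}$ augments the classical complexity with a quadratic penalty,

```latex
\mathfrak{R}^{\mathrm{off}}_n(\mathcal{F}; c)
  \;=\; \mathbb{E}_{\varepsilon}\,\sup_{f \in \mathcal{F}}\,
  \frac{1}{n}\sum_{i=1}^{n}\Bigl[\varepsilon_i\, f(x_i) \;-\; c\, f(x_i)^2\Bigr],
```

where the $\varepsilon_i$ are i.i.d. Rademacher signs and $c > 0$ is the offset parameter. The negative quadratic term localizes the supremum around small-norm functions, which is what yields sharper (faster-rate) bounds than the unpenalized $c = 0$ case.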
LLM-guided data distillation targets explainable recommender systems, referencing PETER's transformer-based architecture with attention masking.
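PETER's mask lets the context tokens (user and item IDs) attend to each other bidirectionally while the explanation text is generated left-to-right. A hedged NumPy sketch of that mask shape (the exact mask in PETER may differ in detail):

```python
import numpy as np

def peter_style_mask(n_ctx, n_seq):
    """PETER-style attention mask (illustrative sketch): the first n_ctx
    tokens (e.g. user and item IDs) attend to each other bidirectionally;
    the remaining n_seq text tokens attend causally (left-to-right).
    True = attention allowed."""
    T = n_ctx + n_seq
    mask = np.tril(np.ones((T, T), dtype=bool))  # causal (lower-triangular) base
    mask[:n_ctx, :n_ctx] = True                  # context block sees itself fully
    return mask

m = peter_style_mask(2, 4)  # 2 ID tokens followed by 4 explanation tokens
```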
Key multimodal recsys advances:
New study leverages Cross-Encoder BERT to compute semantic relevance between users and jobs, enabling explainable and scalable job recommendation systems—building on prior findings.
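The key design choice in a cross-encoder is that the user text and job text are encoded jointly, so the two can interact before scoring, unlike a bi-encoder that embeds each side separately. The toy sketch below replaces BERT with hashed token embeddings purely to show the pattern; every function here is an illustrative stand-in, not the study's model.

```python
import zlib
import numpy as np

def embed(token, d=32):
    """Deterministic hashed token embedding (toy stand-in for BERT)."""
    rng = np.random.default_rng(zlib.crc32(token.encode()))
    return rng.normal(size=d)

def cross_encoder_score(user_text, job_text, w, d=32):
    """Cross-encoder pattern: encode the (user, job) PAIR jointly, then
    score the pooled representation with a linear head w."""
    tokens = user_text.lower().split() + ["[SEP]"] + job_text.lower().split()
    pooled = np.mean([embed(t, d) for t in tokens], axis=0)
    return float(w @ pooled)

rng = np.random.default_rng(1)
w = rng.normal(size=32)
score = cross_encoder_score("python developer, 5 yrs", "senior python engineer", w)
```

The trade-off the study navigates: joint encoding is more accurate but costs one forward pass per (user, job) pair, so scalability usually means scoring only a pre-retrieved candidate set.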
Mamba-3, an open-source State Space Model, beats Transformers. At 1.5B parameters, Mamba-3 SISO delivers the fastest prefill + decode latency across all sequence lengths, topping Mamba-2 and Gated DeltaNet.
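The recurrence underlying SISO state-space layers of this family is a discrete linear system: a hidden state is updated per input scalar and read out per step, which is what makes decode latency constant per token. A minimal NumPy sketch of that recurrence (generic SSM form, not Mamba-3's selective parameterization):

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Single-input single-output (SISO) linear state-space recurrence:
        h_t = A @ h_{t-1} + B * x_t
        y_t = C @ h_t
    x: (T,) scalar input sequence; A: (N, N) state matrix; B, C: (N,)."""
    h = np.zeros(A.shape[0])
    y = np.empty_like(x)
    for t, xt in enumerate(x):
        h = A @ h + B * xt    # constant work per step, regardless of t
        y[t] = C @ h
    return y

x = np.ones(8)
A = 0.5 * np.eye(2)           # stable state decay
B = np.array([1.0, 0.0])
C = np.array([1.0, 1.0])
y = ssm_scan(x, A, B, C)      # converges toward 2.0 for constant input 1.0
```

Selective SSMs make `A` and `B` input-dependent per step, but keep this O(1)-per-token structure, which is the source of the prefill/decode latency wins cited above.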
Emerging trend in advanced graph methods for multimodal recommendation:
New paper proposes a subgoal-driven framework to improve long-horizon LLM agents. Essential reading for advancing reliability in extended tasks.
New paper asks: How Well Does Generative Recommendation Generalize? – probing robustness across domains and setups. Join the discussion on this key research page.