AI Diffusion Lab · Mar 19 Daily Digest
Hands-On Tutorials
- 🔥 ComfyUI Flux.2 Klein Outpaint + 4K Upscale: YouTube tutorial demonstrates image expansion, inpainting, reference latent...

Created by לול טים
Hands-on tutorials, model releases, and community showcases for AI generative media
Explore the latest content tracked by AI Diffusion Lab
New MIT course launches with Lecture 01 on flow and diffusion models, powering SOTA generative AI for images, videos, and more.
Zero-setup ComfyUI workflow for Flux.2 Klein lets you outpaint, inpaint, and upscale images to 4K right in your browser via Floyo.
Key features:
-...
WeryAI unifies video generation, image creation, and editing in a single workflow – no tool switching needed.
Key demos for fast results:
-...
New work targets joint audio-video diffusion, noting remarkable progress in single-modality video and audio synthesis but a gap in truly joint models – key for multimodal I2V experiments.
New alphaXiv paper Demystifying Video Reasoning uncovers an unexpected phenomenon: diffusion-based video models exhibit emergent reasoning capabilities – vital insights for Stable Diffusion I2V workflows.
New paper Rethinking UMM Visual Generation explores masked modeling for efficient image-only pre-training. Join the discussion on this paper page.
AI Lab plugin supercharges Photoshop for generative AI fixes:
Hands-on Seedance 2.0 guide shows how to generate movie-level realistic short dramas with perfect character consistency, seamless transitions, lip sync, and...
Next-frame decoding uses video diffusion models to evolve a compact anchor frame into the target image, enabling ultra-low-bitrate compression. Ideal low-resource baseline for I2V experimentation.
Hands-on workflow for Grok's new Animate Images feature:
LoRA is one of the most popular methods for training and fine-tuning Diffusion Transformers, freezing original weights while injecting small, trainable modules. Ideal starter for custom local experiments.
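The frozen-weights-plus-small-trainable-modules idea can be sketched in a few lines. A minimal numpy illustration with assumed layer sizes – not any particular library's API:

```python
import numpy as np

# Minimal LoRA sketch (assumed shapes, illustrative only).
# The pretrained weight W is frozen; only the low-rank factors A and B train.
d_out, d_in, rank, alpha = 64, 64, 4, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus scaled low-rank update: W x + (alpha/rank) * B A x
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted layer starts identical to the base layer,
# so fine-tuning begins from the pretrained model's exact behavior.
assert np.allclose(lora_forward(x), W @ x)
```

Because only A and B (rank × d_in + d_out × rank values) receive gradients, the trainable parameter count is a small fraction of the full d_out × d_in matrix – which is why LoRA fits on consumer GPUs.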
SSD unifies diffusion with scale-space theory, showing that high-noise states are information-equivalent to low-resolution images.
Key for ComfyUI/SD workflows:
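A rough numerical illustration of that equivalence – my own construction under an assumed 1/f² image-like spectrum, not the paper's code: diffusion noise is white, so it buries high frequencies first, leaving roughly the content of a low-resolution image.

```python
import numpy as np

# Image-like signals have falling power spectra; white noise has a flat one.
# At a given noise level, only frequencies above the noise floor survive --
# i.e. the noisy state carries about as much information as a low-res image.
freqs = np.arange(1, 512)

signal_power = 1.0 / freqs**2                    # assumed 1/f^2 spectrum
noise_power = np.full_like(signal_power, 0.001)  # white diffusion-noise floor

snr = signal_power / noise_power
cutoff = freqs[snr >= 1.0].max()  # highest frequency still above the noise
print(f"recoverable band: f <= {cutoff}")  # → recoverable band: f <= 31
```

Raising the noise level lowers the cutoff, which mirrors how later diffusion timesteps correspond to coarser scales in scale-space.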
Struggling with Out of Memory errors in Stable Diffusion? This YouTube guide delivers:
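Common OOM remedies, sketched for the diffusers library. These are general techniques and may not match the guide's exact steps; the model name and prompt in the usage comment are illustrative:

```python
# Standard diffusers memory optimizations for Stable Diffusion pipelines.
def apply_memory_savers(pipe):
    """Enable common memory optimizations on a loaded diffusers pipeline."""
    pipe.enable_attention_slicing()   # compute attention in chunks
    pipe.enable_vae_slicing()         # decode latents one image at a time
    pipe.enable_model_cpu_offload()   # keep idle submodules in system RAM

# Typical usage (assumes a CUDA GPU, diffusers, and accelerate installed):
#   import torch
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained(
#       "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
#   apply_memory_savers(pipe)
#   image = pipe("an astronaut riding a horse", height=512, width=512).images[0]
```

Loading in float16 roughly halves VRAM on its own; the slicing/offload calls trade some speed for a much smaller peak footprint.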
Under-the-radar Google tools perfect for hands-on image/video experimentation:
Breakthrough in diffusion: new paper enables diffusion directly in pixel space (no latents!), with a mathematically derived network for simultaneous denoising and...
Dive into Pixverse R1 for quick image-to-image video experiments: