P13N: Personalization in Generative AI Workshop

CVPR 2026

June 2026 (half-day), location TBD


Teaser coming soon...


Workshop Overview


The P13N: Personalization in Generative AI workshop aims to unite researchers, practitioners, and artists from academia and industry to explore the challenges and opportunities in personalized generative systems.

Generative AI has revolutionized creativity and problem-solving across domains, yet personalization remains one of the most challenging and underexplored frontiers. Building systems that understand and adapt to individual users’ preferences, identities, or contexts raises profound technical, ethical, and societal questions. Through invited talks, panel discussions, poster sessions, and hands-on challenges, P13N serves as a platform to foster new directions in model design, evaluation, and governance for personalized generative systems.

Call for Papers

Call for papers coming soon! The workshop invites submissions on all aspects of personalization in generative AI.

We welcome talks and paper submissions addressing topics such as user modeling, controllability, fairness, interpretability, evaluation of personalization, human-AI co-creation, and responsible deployment, including:

Topics:

  • Advanced optimization methods for personalizing generative models
  • Multi-subject composition: handling multiple entities in a single scene
  • Cross-modal personalization: bridging text, images, video, and 3D
  • AR/VR personalization for immersive experiences
  • Dataset curation for benchmarking personalized generative models
  • Benchmarks and evaluation metrics for personalization quality, style consistency, and identity preservation
  • New methods for personalized video generation
  • Ethical and privacy considerations (user consent, data ownership, transparency)
  • Personalized storytelling and narrative visualization
  • Style adaptation for digital art and illustration
  • Emerging applications in gaming, e-commerce, and digital marketing
  • Adapting LLM-based personalization approaches to vision tasks
  • Personalization on edge devices



Important Dates


  • Submissions Open: March 1, 2026
  • Paper Submission Deadline: March 31, 2026, 23:59 AoE
  • Notification to Authors: April 30, 2026
  • Camera-ready Deadline: May 15, 2026


Personalization Challenge

We will host two key challenges: multi-concept image personalization and video personalization.

We have secured a $5,000 USD sponsorship from Fal.ai to support challenge winners.

Timeline: The challenges will be announced on April 1, 2026, and conclude by May 1, 2026.


Invited Speakers


Nataniel Ruiz

Nataniel Ruiz is a Research Scientist at Google DeepMind and the lead author of DreamBooth, which was selected for a Best Paper Award at CVPR 2023. His main research interests revolve around generative models, and he has authored further works on the controllability and personalization of diffusion models, including StyleDrop, ZipLoRA, and HyperDreamBooth.

Ishan Misra

Ishan Misra (Tentative) is a Research Scientist in the GenAI group at Meta, where he led research efforts on video generation models. He was the tech lead for Meta's Movie Gen project for foundation models in video generation, video editing, video personalization, and audio generation.

Kfir Aberman

Kfir Aberman is a founding member of Decart AI, leading the innovation in real-time, interactive generative video models. Previously, as Principal Research Scientist at Snap Research, he led the company’s Personalized Generative AI effort. His research, including breakthroughs like DreamBooth and Prompt-to-Prompt, has become foundational to how people and creators today interact with generative AI.

Tali Dekel

Tali Dekel is an Associate Professor at the Weizmann Institute of Science, and a Staff Research Scientist at Google DeepMind. Her groundbreaking work in video personalization and generation includes Still-Moving, TokenFlow, and Lumiere.

Or Patashnik

Or Patashnik is an Assistant Professor at Tel Aviv University. Her research lies at the intersection of computer graphics, computer vision, and machine learning, with a focus on generative models. She works on image and video generation, semantic editing, and personalization, driven by the goal of making visual content creation more controllable and expressive.


Schedule


9:00–9:10 Opening Remarks
9:10–9:40 Invited Talk 1
9:40–10:10 Invited Talk 2
10:10–10:20 Coffee Break
10:20–10:50 Invited Talk 3
10:50–11:20 Invited Talk 4
11:20–12:30 Panel
12:30–13:30 Closing Remarks


Invited Panelists


David Bau
Northeastern University
Varun Jampani
Arcade AI
Fatih Porikli
Qualcomm AI Research


Organizers


Pinar Yanardag
Virginia Tech
Daniel Cohen-Or
Tel Aviv University
Tuna Han Salih Meral
Virginia Tech
Nupur Kumari
Carnegie Mellon University
Enis Simsar
ETH Zurich


Program Committee


Yusuf Dalva
Virginia Tech
Tahira Kazimi
Virginia Tech
Ayşegül Dündar
Bilkent University
Rinon Gal
Black Forest Labs
Hidir Yesiltepe
Virginia Tech


Contact

To contact the organizers, please use [email protected]




Acknowledgments

Thanks to languagefor3dscenes for the webpage format.