Byungjun Kim

I am currently an MS/PhD student at Seoul National University (SNU), advised by Hanbyul Joo. My research primarily focuses on 3D digital human modeling through the lens of generative models, with a particular interest in compositional modeling.
Recently, I’ve been expanding my interest toward human-environment interaction, which naturally extends to robotics. My long-term goal is to bridge digital human simulation with real-world robotics to enable collaborative agents.

Email  /  CV  /  Google Scholar  /  Github  /  LinkedIn

profile photo

News

  • Oct. 2025: I was selected as an Outstanding Reviewer at ICCV 2025.
  • Aug. 2025: I gave a talk at the SNU AI Lunch Seminar on Compositional Human Modeling.
  • Jun. 2025: Our work HairCUP was accepted as an oral presentation at ICCV 2025.
  • Mar. 2024: I will start my internship in Codec Avatars Lab, Meta (Pittsburgh) this summer!

Research

Dexterous World Models
Byungjun Kim*, Taeksoo Kim*, Junyoung Lee, Hanbyul Joo
preprint, 2025
Project Page arXiv Code (Coming Soon)

We present DWM, a scene-action-conditioned video diffusion model that simulates dexterous human interactions in static 3D scenes.

Durian: Dual Reference Image-Guided Portrait Animation with Attribute Transfer
Hyunsoo Cha, Byungjun Kim, Hanbyul Joo
arXiv, 2025
Project Page arXiv

We present the first method for generating portrait animation videos with facial attribute transfer from a given reference image to a target portrait in a zero-shot manner.

HairCUP: Hair Compositional Universal Prior for 3D Gaussian Avatars
Byungjun Kim, Shunsuke Saito, Giljoo Nam, Tomas Simon, Jason Saragih,
Hanbyul Joo, Junxuan Li
ICCV, 2025   (Oral Presentation)
Project Page arXiv

We present HairCUP, a universal prior model for 3D head avatars with hair compositionality, which enables hairstyle swapping and efficient personalization.

GALA: Generating Animatable Layered Assets from a Single Scan
Taeksoo Kim*, Byungjun Kim*, Shunsuke Saito, Hanbyul Joo
CVPR, 2024
Project Page Code arXiv

We present GALA, a framework that takes as input a single-layer clothed 3D human mesh and decomposes it into complete multi-layered 3D assets.

PEGASUS: Personalized Generative 3D Avatars with Composable Attributes
Hyunsoo Cha, Byungjun Kim, Hanbyul Joo
CVPR, 2024
Project Page Code arXiv

We present PEGASUS, a method for constructing personalized generative 3D face avatars from monocular video sources.

Guess The Unseen: Dynamic 3D Scene Reconstruction from Partial 2D Glimpses
Inhee Lee, Byungjun Kim, Hanbyul Joo
CVPR, 2024
Project Page Code arXiv

We present Guess The Unseen, a method to reconstruct the world and multiple dynamic humans in 3D from a monocular video input.

Chupa: Carving 3D Clothed Humans from Skinned Shape Priors using 2D Diffusion Probabilistic Models
Byungjun Kim*, Patrick Kwon*, Kwangho Lee, Myunggi Lee, Sookwan Han, Daesik Kim, Hanbyul Joo
ICCV, 2023   (Oral Presentation)
Project Page Code arXiv

We propose Chupa, a 3D human generation pipeline that combines the generative power of diffusion models and neural rendering techniques to create diverse, realistic 3D humans.

SLiDE: Self-supervised LiDAR De-snowing through Reconstruction Difficulty
Gwangtak Bae, Byungjun Kim, Seongyong Ahn, Jihong Min, Inwook Shim
ECCV, 2022
arXiv

We propose a novel self-supervised learning framework for removing snow points from LiDAR point clouds.

Template from Jon Barron's website