Research
I'm interested in building learning systems that
are reliable in the real world—systems that
can continuously learn from experience and adapt as the world changes.
More concretely, I'm drawn to problems in representation learning,
online adaptation, and active learning,
motivated by a simple question: can we enable models to learn effectively from ongoing experience,
in a way that resembles how humans learn?
Selected works are
listed below (*equal contribution).
Continuous 3D Perception Model with Persistent State
Qianqian Wang*, Yifei Zhang*, Aleksander Holynski, Alexei A. Efros, and Angjoo Kanazawa
CVPR 2025 Oral
arxiv /
code /
website
A new framework for reasoning about the 3D world in an online, sequential manner. Given an input image stream, our method simultaneously updates a persistent internal state with each new observation and reads from that state to predict 3D geometry and camera pose for the current view, as well as to infer unseen portions of the scene.
Sparse Diffusion Policy: A Sparse, Reusable, and Flexible Policy for Robot Learning
Yixiao Wang*, Yifei Zhang*, Mingxiao Huo*, Ran Tian, Xiang Zhang, Yichen Xie, Chenfeng Xu, Pengliang Ji, Wei Zhan, Mingyu Ding, and Masayoshi Tomizuka
CoRL 2024
arxiv /
code /
website
We propose Sparse Diffusion Policy (SDP), which integrates a Mixture-of-Experts module designed for multitask learning, continual learning, and rapid adaptation to new tasks.
FastMAC: Stochastic Spectral Sampling of Correspondence Graph
Yifei Zhang, Hao Zhao, Hongyang Li, and Siheng Chen
CVPR 2024
arxiv /
code
We propose a stochastic spectral sampling technique for correspondence graphs and build a complete 3D registration pipeline that runs in real time with little to no drop in performance.
Dual-frame Fluid Motion Estimation with Test-time Optimization and Zero-divergence Loss
Yifei Zhang, Huan-ang Gao, Zhou Jiang, and Hao Zhao
NeurIPS 2024
arxiv /
code
A fluid motion estimation method that is completely self-supervised and notably outperforms its supervised counterparts while requiring only 1% of the training samples (without labels) used by previous methods.
Selected Honors and Awards
- 2024: SenseTime Scholarship (only 25 recipients nationwide)
- 2023-2024: China National Scholarship (top 0.01%, the highest honor for undergraduates in China)
- 2022-2024: First-Level Scholarship of UCAS (top 1% at UCAS)
- 2023: Honorable Mention in the Mathematical Contest in Modeling (top 10% worldwide)
- 2023: National Second Prize (3V3) in the RoboMaster University League
- 2023: National First Prize in APMCM (top 5% in China)