Bio

Dr. Yu Zeng's research addresses visual learning problems where standard assumptions fail, especially when supervision is limited or unavailable. She studies the intrinsic structure of visual data—statistical, geometric, or semantic—and how mismatches between that structure and model assumptions lead to failure modes. By leveraging these structures and examining flaws in problem formulation, her work aims to develop scalable methods and to better understand what makes visual learning problems solvable. Her PhD research focused on visual synthesis with multimodal and hierarchical inputs, and her recent work explores visual generative models for embodied AI and autonomous driving. She received her PhD from Johns Hopkins University and is currently a researcher at Toyota Research Institute. Previously, she worked as a researcher at NVIDIA and Lightspeed Studios and interned at Adobe Research.