Cristian Sminchisescu

Professor · Research Scientist · Engineering Manager

Portrait of Cristian Sminchisescu

I am an Engineering Manager at Google DeepMind and a Professor at the Romanian Academy and Lund University. I obtained my doctorate in applied mathematics with a specialization in imaging, vision, and robotics from INRIA under an Eiffel Excellence Fellowship of the French Ministry of Foreign Affairs, and completed postdoctoral work in the Artificial Intelligence Laboratory at the University of Toronto. I subsequently held faculty positions at the Toyota Technological Institute at Chicago and in the Mathematics Department at Bonn University.

Over time my research has been funded by the US National Science Foundation, the German Science Foundation, the Romanian Science Foundation, the Swedish Science Foundation, the European Commission under a Marie Curie Excellence Grant, and the European Research Council under an ERC Consolidator Grant. My work on semantic segmentation and visual recognition won the PASCAL VOC Challenge for four consecutive years (2009–2012) and the Reconstruction Meets Recognition Challenge (2013–2014). My work on deep learning for graph matching received the Best Paper Award Honorable Mention at IEEE/CVF CVPR 2018.

I have served as an Area Chair for most major AI conferences, as a Program Chair for ECCV 2018 and General Chair for CVPR 2025, and will be a General Chair for ECCV 2028. My computer vision expertise spans 3D human body modeling, human motion and shape reconstruction from sensor data, photorealistic human synthesis, and, more recently, multimodal representations. I have also worked on the computational modeling of eye movements, image segmentation, and visual recognition. My work in machine learning has focused on optimization, statistical models, kernel methods, neural networks, and their generalization properties.

I have management experience in both academia and industry, including technology transfer, laboratory infrastructure creation, and large-scale automation of data acquisition and annotation. I have led several complete technology stacks in industry, including end-to-end product transfer. The 3D human modeling and sensing technology developed by our team powers advanced algorithms for Gemini, Veo, Android, YouTube, Pixel, Meet, Fitbit, Commerce/Search, and Waymo.

To support the wider community, we have also released widely used pose estimation and avatar modeling libraries, available on Android, iOS, and TensorFlow.js: BlazePose GHUM (real-time 3D human pose estimation), BlazeHands GHUM (real-time 3D hand tracking of both hands simultaneously), and Blendshapes GHUM v2 (facial expression and avatar control, with applications to virtual humans). Recent highlights include Virtual Try-On (launched in Google Search and presented at I/O 2025, providing accurate and photorealistic clothing try-on based on 3D user modeling) and Fall Detection for Pixel Watch (a real-world health and safety application of our simulation-driven human modeling technology).

LinkedIn Google Scholar Contact