Jacob Levy

I am a fourth-year robotics PhD student at the University of Texas at Austin, where I am advised by David Fridovich-Keil. I am interested in developing algorithms for robotic systems that learn from past experience to rapidly adapt to unmodeled dynamics and unseen environments. This includes research in machine learning (ML), reinforcement learning (RL), and adaptation techniques that enable effective real-world performance when only uncertain dynamics models of the system and environment are available. My research is funded through the NASA NSTGRO fellowship.

Prior to starting my PhD, I worked for 10 years at Parker Aerospace as an Engineering Test Lab Manager and Test Engineer. I hold an M.S. in Aerospace Engineering from the University of Texas at Austin and a B.S. in Aerospace Engineering from the University of Texas at Arlington.

Email  /  CV  /  LinkedIn  /  Google Scholar

profile pic

Research

Simulation Distillation: Pretraining World Models in Simulation for Rapid Real-World Adaptation
Jacob Levy*, Tyler Westenbroek*, Kevin Huang, Fernando Palafox, Patrick Yin, Shayegan Omidshafiei, Dong-Ki Kim, Abhishek Gupta, David Fridovich-Keil
2025
arxiv / website / code

We introduce a scalable framework that distills structural priors from a simulator into a latent world model, enabling rapid real-world adaptation via online planning and supervised dynamics finetuning.

Meta-Learning Online Dynamics Model Adaptation in Off-Road Autonomous Driving
Jacob Levy, Jason Gibson, Bogdan Vlahov, Erica Tevere, Evangelos Theodorou, David Fridovich-Keil, Patrick Spieler
RSS 2025
arxiv / video

We develop an online adaptation algorithm for autonomous off-road vehicles in unknown environments and show how meta-learning improves adaptation speed and robustness.

Learning to Walk from Three Minutes of Real-World Data with Semi-structured Dynamics Models
Jacob Levy*, Tyler Westenbroek*, David Fridovich-Keil
CoRL 2024
arxiv / website / code

We train a quadruped to walk with only 3 minutes of real-world data by leveraging known Lagrangian dynamics and learned contact models with model-based RL.

Enabling Efficient, Reliable Real-World Reinforcement Learning with Approximate Physics-Based Models
Tyler Westenbroek, Jacob Levy, David Fridovich-Keil
CoRL 2023
arxiv / code

We develop a real-world reinforcement learning framework that leverages approximate physics models and embedded feedback control to learn robot policies with minutes of real-world data.

Other

go2_isaac_ros2
code

This package enables low-level (joint-level) ROS 2 control of a Unitree Go2 quadruped robot simulated in Isaac Sim.


Template adapted from jonbarron.github.io.