Bernard Lange

I am a PhD student in the Stanford Intelligent Systems Laboratory (SISL) at Stanford University, where I work on environment prediction models and their application as foundation models for autonomous driving.

At Stanford, I am pursuing a PhD in Aeronautics and Astronautics with a PhD minor in Computer Science. I am privileged to be advised by Prof. Mykel Kochenderfer. My work includes projects such as Scene Informer, LOPR: Latent Occupancy PRediction, and POMDPs for Safe Visibility Reasoning in Autonomous Vehicles. In the course of my research, I have had the opportunity to collaborate with Qualcomm, Ford, NASA JPL, and MIT Lincoln Laboratory. Before joining Stanford, I earned my bachelor's degree from the University of Bristol, where I worked on the application of model predictive control to autonomous racing. My practical experience includes internships with Cruise AI Research and the Nissan Autonomous Driving Research Team.

Email  /  CV  /  Google Scholar  /  Github  /  Twitter

profile photo
Research

My research focuses on applying deep learning to autonomous driving and robotics, with an emphasis on robustness and safety within integrated systems. I primarily develop environment prediction models, including self-supervised sensor-conditioned prediction and vectorized motion forecasting. Currently, I am working on foundation models for autonomous driving and on frameworks that leverage language-vision models (LVLMs) for semantic and spatial reasoning in unknown environments.

Sensor Configuration Agnostic World Model and Novel View Synthesis for Autonomous Driving
Bernard Lange, Mansur Arief, Mykel J. Kochenderfer
Ongoing, 2025

We are developing an open-source, sensor-agnostic foundation model for autonomous driving, capable of environment prediction and novel view synthesis. The model supports configurations ranging from a single dashcam to a full 360° setup with 8 cameras and 4 LiDARs. Training uses over 4,000 hours of camera footage and 600 hours of LiDAR data. An open-source release is planned in the coming months.

Unified Perception, Reasoning, and Acting in General-Purpose Navigation Tasks via Language Vision Models
Bernard Lange, Anil Yildiz, Mansur Arief, Shehryar Khattak, Mykel J. Kochenderfer, Georgios Georgakis
Under review, 2025

We propose a general-purpose embodied navigation agent that integrates LVLMs with multi-dimensional scene graphs and sensor measurements, enabling autonomous navigation and problem-solving in unknown environments. Agentic principles are used to generate and execute navigational and logical plans, to access collected information via tool use, and to carry findings across timesteps as part of a spatial belief. The framework enables adaptive decision-making, efficient plan execution, and robust generalization across diverse text-defined tasks.

Self-supervised Multi-future Occupancy Forecasting for Autonomous Driving
Bernard Lange, Masha Itkina, Jiachen Li, Mykel J. Kochenderfer
Robotics: Science and Systems, 2025
arXiv

We propose a framework that performs stochastic LiDAR-based occupancy grid map (L-OGM) prediction in the latent space of a generative model. It allows conditioning on RGB camera inputs, map data, and planned trajectories for enhanced performance, and it offers two decoding approaches: (1) a single-step decoder for high-quality, real-time predictions, and (2) a diffusion-based batch decoder that refines predictions to improve temporal consistency and reduce compression artifacts.

ASTPrompter: Weakly Supervised Automated Language Model Red-Teaming to Identify Likely Toxic Prompts
Amelia F. Hardy, Houjun Liu, Bernard Lange, Mykel J. Kochenderfer
Preprint, 2024
arXiv

We propose ASTPrompter, which automatically identifies likely-sounding prompts that elicit toxic entailment trajectories, even when conditioned on normal, non-toxic conversation. We achieve this with two key LLM alignment approaches: (1) an online IPO formulation, and (2) a novel weak supervision step that helps the model converge more rapidly on failure modes.

Scene Informer: Anchor-based Occlusion Inference and Trajectory Prediction in Partially Observable Environments
Bernard Lange, Jiachen Li, Mykel J. Kochenderfer
IEEE International Conference on Robotics and Automation, 2024
arXiv / code

We introduce the Scene Informer, a unified approach for predicting observed agent trajectories and inferring occlusions in a partially observable setting. Our approach outperforms existing methods in both occupancy and trajectory prediction on the Waymo Open Motion Dataset.

LOPR: Latent Occupancy PRediction using Generative Models
Bernard Lange, Masha Itkina, Mykel Kochenderfer
Preprint, 2023
arXiv / code

We propose a framework that decomposes occupancy grid prediction into task-independent low-dimensional representation learning and task-dependent prediction in the latent space. We demonstrate that our approach achieves state-of-the-art performance on the real-world nuScenes autonomous driving dataset.

How Do We Fail? Stress Testing Perception in Autonomous Vehicles
Harrison Delecki, Masha Itkina, Bernard Lange, Ransalu Senanayake, Mykel Kochenderfer
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022
arXiv / code

We apply adaptive stress testing (AST) to LiDAR-based perception systems for autonomous vehicles under adverse weather conditions. We formulate Perception Adaptive Stress Testing (PAST) and validate it on a sample LiDAR-based perception system using the nuScenes driving dataset.

Attention Augmented ConvLSTM for Environment Prediction
Bernard Lange, Masha Itkina, Mykel Kochenderfer
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2021
arXiv / code

Safe and proactive planning in robotic systems generally requires accurate predictions of the environment. Previous ConvLSTM-based frameworks often suffer from significant blurring and vanishing of moving objects, hindering their use in safety-critical applications. We propose extensions to the ConvLSTM that address these issues.

POMDPs for Safe Visibility Reasoning in Autonomous Vehicles
Kyle Hollins Wray, Bernard Lange, Arec Jamgochian, Stefan J. Witwicki, Atsuhide Kobashi, Sachin Hagaribommanahalli, David Ilstrup
IEEE International Conference on Intelligence and Safety for Robotics, 2021
paper

We present solutions for autonomous vehicles in limited-visibility scenarios, such as traversing T-intersections, and detail how these scenarios can be handled simultaneously.

ROS Occupancy Grid Prediction Package
Bernard Lange
Github repo, 2021
code

We created a ROS C++ occupancy grid prediction framework that includes all required point cloud processing, with occupancy grid prediction models in PyTorch and TensorFlow. The package is fully compatible with the Ford AV Dataset. LiDAR point clouds can be provided as a rosbag or streamed directly from the robot's LiDAR sensors.

POMDP Autonomous Vehicle Visibility Reasoning
Kyle Hollins Wray, Bernard Lange, Arec Jamgochian, Stefan J. Witwicki, Atsuhide Kobashi, Sachin Hagaribommanahalli, David Ilstrup
RSS Interaction and Decision-Making in Autonomous Driving Workshop, 2020
paper / video

We present solutions for autonomous vehicles in limited-visibility scenarios, such as traversing T-intersections, and detail how these scenarios can be handled simultaneously.

Class Projects
Learning Offline Driving Policy with Decision Transformer in Latent Space
Covid Chatbot
Imitation Learning: Modeling Driver Behaviour
Probability Hypothesis Density Filter
Hyperparameter Tuning using Gaussian Process Multi-Arm Bandits
Autonomous Robot Stack
Optimal Obstacle Avoidance using a Quadrotor UAV

This page template was cloned with permission from Jon Barron.