
Online Optimization for Offline Safe Reinforcement Learning

This repository contains the implementation of O3SRL.

Installation

Please follow the OSRL installation guide.
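For reference, a minimal setup sketch, assuming OSRL is hosted at github.com/liuzuxin/OSRL and supports an editable pip install (defer to the OSRL guide for the authoritative steps):

# clone the OSRL repository (assumed URL) and install it in editable mode
git clone https://github.com/liuzuxin/OSRL.git
cd OSRL
pip install -e .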

Training

To train an O3SRL agent, run the following command:

python train_o3srl.py --task <env_name> 

The default cost limit is 5 for BulletGym tasks and 10 for SafetyGym tasks; pass the --cost_limit argument to train with a different limit, as shown below.
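For example, a hypothetical invocation that trains on a SafetyGym task with a custom cost limit (the task name is illustrative; substitute any environment supported by your OSRL/DSRL installation):

# train on an example SafetyGym task with a cost limit of 20 instead of the default 10
python train_o3srl.py --task OfflineCarGoal1Gymnasium-v0 --cost_limit 20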

Evaluation

To evaluate a trained agent, use the following command:

python eval_o3srl.py --path path_to_model --cost_limit 5 --eval_episodes 20

Acknowledgements

Our implementation of O3SRL follows the OSRL repository design. We thank the authors for their well-structured codebase.
