Code for the ECAI 2025 paper "Minimizing Surrogate Losses for Decision-Focused Learning using Differentiable Optimization"
This repository contains the implementation of our approach to Decision-Focused Learning (DFL) using the differentiable optimization layer DYS-Net. We explore various surrogate losses and demonstrate their effectiveness across multiple LP/ILP/MILP optimization problems.
If you use this code in your research, please cite our paper (we will update the citation once the proceedings are publicly available):
```bibtex
@misc{mandi2025minimizingsurrogatelossesdecisionfocused,
title={Minimizing Surrogate Losses for Decision-Focused Learning using Differentiable Optimization},
author={Jayanta Mandi and Ali İrfan Mahmutoğulları and Senne Berden and Tias Guns},
year={2025},
eprint={2508.11365},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2508.11365},
}
```

We recommend using a virtual environment to avoid conflicts with other Python packages:
```bash
# Create a virtual environment
python3 -m venv env_dfl

# Activate the virtual environment
# On Linux/Mac
source env_dfl/bin/activate
# On Windows
# env_dfl\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

To run the experiments, execute the Exp_run.sh script from the command line:

```bash
./Exp_run.sh
```

Make sure the script has executable permissions:

```bash
chmod +x Exp_run.sh
```

The Exp_run.sh script will execute all the necessary experiment scripts in the correct order, running various models with different parameters.
Here's how the math notation in our paper translates to actual model names in the code:
- CVX Models
  - $Regret^{CVX}$ corresponds to `CVX-Regret`
  - $SqDE^{CVX}$ corresponds to `CVX-Squared`
  - $SPO_{+}^{CVX}$ corresponds to `CVX-SPO`
  - $SCE^{CVX}$ corresponds to `CVX-SCE`
- DYS-Net Models
  - $Regret^{DYS}$ corresponds to `DYS-Regret`
  - $SqDE^{DYS}$ corresponds to `DYS-Squared`
  - $SPO_{+}^{DYS}$ corresponds to `DYS-SPO`
  - $SCE^{DYS}$ corresponds to `DYS-SCE`
To run experiments with different configurations, you can modify the parameter values directly in the Exp_Run.sh file (see the illustrative sketch after this list):

- For Shortest Path experiments, modify the `--grid_size` parameter (e.g., change from 15 to another value)
- For Knapsack experiments, modify the `--num_items` parameter (e.g., change from 400 to another value)
- For Facility Location experiments, modify the `--num_customers` parameter (e.g., change from 200 to another value) and `--num_facilities` if needed
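As an illustration, the lines inside Exp_Run.sh that launch the experiments might look roughly like the sketch below. The Python entry-point names (`TestShortestPath.py`, `TestKnapsack.py`, `TestFacilityLocation.py`) and the `--num_facilities` value are assumptions for illustration; only the `--grid_size`, `--num_items`, `--num_customers`, and `--num_facilities` parameters are documented above.

```bash
# Illustrative sketch only: the script names below are assumptions, not the
# actual entry points of this repository. Adjust the parameter values as needed.

# Shortest Path on a 15x15 grid (change 15 to try other grid sizes)
python TestShortestPath.py --grid_size 15

# Knapsack with 400 items (change 400 to try other instance sizes)
python TestKnapsack.py --num_items 400

# Facility Location with 200 customers (and optionally a different facility count)
python TestFacilityLocation.py --num_customers 200 --num_facilities 10
```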
Important: The experiment scripts read hyperparameter configurations from the pkg/configs folder. These configuration files include:
- `shortestpath_config.json` and `shortestpath_DYSconfig.json`: for Shortest Path experiments
- `knapsack_config.json` and `knapsack_DYSconfig.json`: for Knapsack experiments
- `facilitylocation_config.json` and `facilitylocation_DYSconfig.json`: for Facility Location experiments
If you need to modify hyperparameters such as learning rates, model architectures, or optimization settings, edit these JSON configuration files rather than changing the command-line arguments in the Exp_Run.sh script.
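For orientation, such a configuration file might look roughly like the sketch below. The field names and values are assumptions chosen to illustrate the kinds of hyperparameters mentioned above (learning rate, model architecture, optimization settings); consult the actual files in pkg/configs for the fields the code expects.

```json
{
  "lr": 0.001,
  "optimizer": "Adam",
  "batch_size": 32,
  "epochs": 50,
  "hidden_layers": [64, 32]
}
```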
Our implementation builds upon PyEPO, a benchmarking library for End-to-End Predict-then-Optimize techniques.