PARM

Baijiong Lin, Weisen Jiang, Yuancheng Xu, Hao Chen, and Ying-Cong Chen. PARM: Multi-Objective Test-Time Alignment via Preference-Aware Autoregressive Reward Model. In ICML, 2025.

Installation

Our code builds on TRL and PEFT for training and on Model Arithmetic (the language-model-arithmetic package) for inference.

conda create -n parm python=3.10
conda activate parm

cd language-model-arithmetic/
pip install -e .

cd ../peft/
pip install -e .

conda install -c nvidia cuda-compiler

cd ..
git clone https://github.com/PKU-Alignment/safe-rlhf.git
cd safe-rlhf
pip install .

cd ..
pip install -r requirements.txt
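
After the steps above, a quick import check can confirm the environment is usable. This is a minimal sketch; it only assumes that torch and peft are importable, which the editable installs and requirements.txt should provide.

# optional sanity check (assumes the installs above succeeded)
python -c "import torch, peft; print(torch.cuda.is_available(), peft.__version__)"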

Preparing Data

cd code/data
python relabel.py

Training

cd code/training
bash run.sh

Evaluation

cd code/evaluation
python generate_outputs.py --model_parm_both_name_or_path /path --alpha_helpfulness 0.5 --alpha_harmlessness 0.5
python compute_reward.py --path /path
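
The two --alpha_* flags set the preference weights for helpfulness and harmlessness at inference time. To trace a trade-off curve rather than a single point, a sweep like the following can be used. This is a hypothetical sketch: /path is a placeholder for your checkpoint, and the assumption that the two weights sum to 1 follows the 0.5/0.5 example above but is not stated by the repository.

# hypothetical sweep over preference weights (alphas assumed to sum to 1)
for a in 0.0 0.25 0.5 0.75 1.0; do
    b=$(python -c "print(1.0 - $a)")
    python generate_outputs.py --model_parm_both_name_or_path /path \
        --alpha_helpfulness "$a" --alpha_harmlessness "$b"
done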

Acknowledgement

This codebase is heavily based on GenARM.

Citation

If you find this work/code useful for your research, please cite the following:

@inproceedings{lin2025parm,
  title={{PARM}: Multi-Objective Test-Time Alignment via Preference-Aware Autoregressive Reward Model},
  author={Lin, Baijiong and Jiang, Weisen and Xu, Yuancheng and Chen, Hao and Chen, Ying-Cong},
  booktitle={International Conference on Machine Learning},
  year={2025}
}
