📖 Paper: Unified Parameter-Efficient Unlearning for LLMs (ICLR 2025). Paper Link: https://arxiv.org/pdf/2412.00383
✍️ Authors: Chenlu Ding, Jiancan Wu, Yancheng Yuan, Jinda Lu, Kai Zhang, Xiang Wang, Alex Su, and Xiangnan He
🌸 This code builds on the code at https://github.com/ljy0ustc, including the implementation of LLaRA (Liao et al., 2024). Thanks for their code.
Prepare the environment:

```shell
git clone https://github.com/oceanoceanna/LLMEraser.git
cd LLMEraser
pip install -r requirements.txt
```
Prepare the pre-trained Hugging Face model of Llama-2-7B (https://huggingface.co/meta-llama/Llama-2-7b-hf).
Download the data and checkpoints.
Prepare the data and checkpoints:
Put the data in the directory `data/ref/` and the checkpoints in the directory `checkpoints/`. We provide the clean and noisy data and the corresponding checkpoints for the Movielens dataset.
Run on the Movielens dataset:

```shell
sh test_movielens.sh
```

Use the influence function to calculate the parameter changes with a single A100 GPU on the Movielens dataset:

```shell
sh test_attack_movielens.sh
```

Note that: set the `llm_path` argument to your own directory path of the Llama2 model, and set the correct `ckpt_path`.
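For intuition, the influence-function step above can be illustrated in miniature. The sketch below is a hedged toy example of the classical influence-function approximation of a parameter change, $\Delta\theta \approx \frac{1}{n} H^{-1} \nabla\ell(z)$, applied to ridge regression (where the Hessian is exact); it is not the paper's actual LLM implementation, and the function name is an assumption for illustration:

```python
import numpy as np

def influence_param_change(X, y, theta, removed_idx, damping=1e-3):
    """Toy influence-function estimate of the parameter change caused by
    removing one training sample, for a ridge-regression model:
        delta_theta ~= (1/n) * H^{-1} * grad_loss(z_removed)
    (illustrative sketch only, not the paper's implementation).
    """
    n, d = X.shape
    # Hessian of the mean squared-error loss, plus damping for stability
    H = X.T @ X / n + damping * np.eye(d)
    # Gradient of the removed sample's loss at the current parameters
    x_r, y_r = X[removed_idx], y[removed_idx]
    grad = (x_r @ theta - y_r) * x_r
    # Down-weighting the removed point by 1/n approximates retraining without it
    return np.linalg.solve(H, grad) / n
```

Adding the returned change to the trained parameters approximates the model retrained without the removed sample, at the cost of one Hessian-vector solve instead of a full retraining run.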
Hyperparameters:

- $x_{lr}$: learning rate of the optimization algorithm.
- $x_{init}$: initial value of the parameter change.
- $x_{adjust}$: regularization term.
- $ratio$: attack ratio.
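The roles of $x_{lr}$, $x_{init}$, and $x_{adjust}$ can be tied together in a minimal gradient-descent sketch. The quadratic subproblem and the function name below are illustrative assumptions, not the paper's actual objective:

```python
import numpy as np

def solve_param_change(H, g, x_lr=0.1, x_init=0.0, x_adjust=1e-3, steps=500):
    """Toy gradient descent for a parameter-change subproblem of the form
        min_x  0.5 * x^T H x + g^T x + 0.5 * x_adjust * ||x||^2
    x_lr     -- learning rate of the optimization algorithm
    x_init   -- initial value of the parameter change (scalar fill)
    x_adjust -- weight of the regularization term
    """
    x = np.full(g.shape, x_init, dtype=float)
    for _ in range(steps):
        grad = H @ x + g + x_adjust * x  # gradient of the regularized objective
        x -= x_lr * grad                 # plain gradient-descent step
    return x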
If you have any questions, feel free to submit an issue or contact me at [email protected].
