This repo is the official Python implementation of the paper InferDPT: Privacy-Preserving Inference for Closed-Box Large Language Models (TDSC '25).
step1 Build the environment
conda create -n InferDPT python=3.10
conda activate InferDPT
git clone https://github.com/mengtong0110/InferDPT
cd InferDPT
pip install -r requirements.txt
step2 Install FastChat
pip install "fschat[model_worker,webui]"
step3 Install GPTQ
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa.git repositories/GPTQ-for-LLaMa
cd repositories/GPTQ-for-LLaMa
git switch fastest-inference-4bit
python setup_cuda.py install
pip install texttable
You can skip steps 2 and 3 if you use Ollama to serve the local model instead.
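If you go the Ollama route, the sketch below shows one way to query a locally pulled model through Ollama's REST endpoint. It assumes an Ollama server is already running on the default port 11434 and that a model has been pulled; the model name "llama2" is a placeholder.

```python
import requests

def ollama_generate(prompt: str, model: str = "llama2") -> str:
    """Send a prompt to a local Ollama server and return the generated text.

    Assumes `ollama serve` is running locally and `model` has been pulled
    (the model name here is only a placeholder).
    """
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ollama_generate("Continue this sentence: The weather today is"))
```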
Download the embedding files from the shared link (https://drive.google.com/drive/folders/1mshI2yoJyx8LOLpAx7RB31VQkj-lvV1u?usp=sharing) and store them in the folder InferDPT/data.
We obtained the embedding files for 11,000 English words from the model text-embedding-ada-002. You can also use other embedding files.
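If you prefer to build your own embedding file, a minimal sketch using the OpenAI embeddings API is shown below. The word list, output path data/word_embeddings.json, and JSON layout are assumptions; adapt them to whatever format main.py expects.

```python
import json
from openai import OpenAI

# Sketch only: embed a custom vocabulary with text-embedding-ada-002.
# The file name and JSON layout below are assumptions, not the repo's format.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

words = ["privacy", "inference", "language"]  # replace with your own vocabulary
embeddings = {}
for word in words:
    result = client.embeddings.create(model="text-embedding-ada-002", input=word)
    embeddings[word] = result.data[0].embedding

with open("data/word_embeddings.json", "w") as f:
    json.dump(embeddings, f)
```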
Run the following command to get the Perturbed Generation:
python main.py --eps 6.0 --model gpt-4  # Modify the input variable to your own data (the Prefix Text); the script outputs the Perturbed Generation.
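For intuition about how the privacy budget eps and the embedding files interact, the sketch below illustrates embedding-distance-based word replacement with the exponential mechanism (smaller eps gives stronger perturbation). This is only an illustration of the general idea; it is not the exact mechanism implemented in main.py, and the sensitivity is assumed to be 1.

```python
import numpy as np

def perturb_word(word, vocab, embeddings, eps):
    """Illustrative exponential-mechanism replacement of a single word.

    `embeddings` maps each vocabulary word to its vector (e.g., from
    text-embedding-ada-002). Words closer to the original get exponentially
    higher probability. Sketch only; not the mechanism used by main.py.
    """
    target = np.asarray(embeddings[word])
    dists = np.array(
        [np.linalg.norm(target - np.asarray(embeddings[w])) for w in vocab]
    )
    scores = -dists  # higher score = closer word
    probs = np.exp(eps * (scores - scores.max()) / 2)  # sensitivity assumed 1
    probs /= probs.sum()
    return np.random.choice(vocab, p=probs)
```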
Deploy a model locally and use the following prompt to complete the text generation task:
Your task is to extend the “Prefix Text”. Use the “Perturbed Generation” as your primary writing material for your extension. Extract coherent and consistent text from the “Perturbed Generation” and integrate it into your continuation. Ensure a seamless alignment with the context established by the “Prefix Text”. Provide only your “Extended Text”.
——“Prefix Text”:
——“Perturbed Generation”:
——“Extended Text”:
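A minimal sketch of assembling this prompt and querying the locally deployed model is shown below. It assumes the model is exposed through an OpenAI-compatible API such as the one FastChat can serve (the base URL http://localhost:8000/v1 and the model name "vicuna-7b-v1.5" are placeholders; substitute whatever you deploy).

```python
from openai import OpenAI

PROMPT_TEMPLATE = (
    "Your task is to extend the “Prefix Text”. Use the “Perturbed Generation” "
    "as your primary writing material for your extension. Extract coherent and "
    "consistent text from the “Perturbed Generation” and integrate it into your "
    "continuation. Ensure a seamless alignment with the context established by "
    "the “Prefix Text”. Provide only your “Extended Text”.\n"
    "——“Prefix Text”: {prefix}\n"
    "——“Perturbed Generation”: {perturbed}\n"
    "——“Extended Text”:"
)

# Sketch only: assumes an OpenAI-compatible server (e.g., FastChat's) is
# listening locally; the model name is a placeholder for your deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def extend_text(prefix: str, perturbed: str) -> str:
    prompt = PROMPT_TEMPLATE.format(prefix=prefix, perturbed=perturbed)
    reply = client.chat.completions.create(
        model="vicuna-7b-v1.5",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```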
For information about model deployment, please refer to FastChat and GPTQ.
If you find this repository useful for your work, please consider citing it as follows:
@ARTICLE{10922117,
author={Tong, Meng and Chen, Kejiang and Zhang, Jie and Qi, Yuang and Zhang, Weiming and Yu, Nenghai and Zhang, Tianwei and Zhang, Zhikun},
journal={IEEE Transactions on Dependable and Secure Computing},
title={InferDPT: Privacy-Preserving Inference for Closed-Box Large Language Models},
year={2025},
volume={22},
number={5},
pages={4625-4640},
keywords={Privacy;Differential privacy;Closed box;Perturbation methods;Protection;Large language models;Computational modeling;Writing;Chatbots;Vocabulary;Differential privacy;closed-box;inference;large language model},
doi={10.1109/TDSC.2025.3550389}}