This is the official code for the paper titled "Probing the Decision Boundaries of In-context Learning in Large Language Models."
📄 arXiv | 🧵 Twitter summary post
Install the required packages:

pip install -r requirements.txt

To get the decision boundary of Llama-3-8B on a linear binary classification task with 128 in-context examples per class, run:
python get_llm_decision_boundary.py --grid_size=50 --model_name=Llama-3-8B --num_in_context=128 --data_type=linear

Expected output: a plot of the model's in-context decision boundary over the 2D input grid.
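Conceptually, probing a decision boundary amounts to serializing the in-context examples into a prompt and asking the model to label every point of a 2D grid. The sketch below illustrates that idea only; `make_linear_task`, `build_prompt`, and `query_llm` are hypothetical helpers, and the prompt format is an assumption rather than the exact implementation in get_llm_decision_boundary.py.

```python
import numpy as np

def make_linear_task(n_per_class=128, seed=0):
    """Sample a toy 2D linear binary classification task."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=2)                       # random separating direction
    x = rng.uniform(-1.0, 1.0, size=(2 * n_per_class, 2))
    y = (x @ w > 0).astype(int)                  # label = side of the hyperplane
    return x, y

def build_prompt(x_train, y_train, x_query):
    """Serialize in-context examples plus one query point as plain text."""
    lines = [f"Input: {a:.2f} {b:.2f}\nLabel: {label}"
             for (a, b), label in zip(x_train, y_train)]
    lines.append(f"Input: {x_query[0]:.2f} {x_query[1]:.2f}\nLabel:")
    return "\n".join(lines)

def query_llm(prompt):
    """Stand-in for an LLM call (e.g. Llama-3-8B next-token prediction).

    Returns a fixed label so the sketch runs end to end; replace this with
    a real model call to probe an actual LLM.
    """
    return 0

if __name__ == "__main__":
    grid_size = 50
    x_train, y_train = make_linear_task()
    # Query every point of a grid_size x grid_size grid to trace the boundary.
    xs = np.linspace(-1.0, 1.0, grid_size)
    predictions = np.array([[query_llm(build_prompt(x_train, y_train, (a, b)))
                             for a in xs] for b in xs])
    print(predictions.shape)  # (50, 50) map of predicted labels
```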
An example script for finetuning on synthetic data is available in finetune_icl.py; a rough sketch of how such data can be constructed follows below.
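For intuition, finetuning for in-context classification can be framed as supervised learning on (prompt, label) text pairs built from many sampled tasks. The snippet below is a minimal sketch under that assumption; the actual data format and training loop in finetune_icl.py may differ.

```python
import numpy as np

def sample_task(n_examples=32, seed=None):
    """Sample one synthetic linear task and return a (prompt, target) text pair."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=2)
    x = rng.uniform(-1.0, 1.0, size=(n_examples + 1, 2))
    y = (x @ w > 0).astype(int)
    context = "\n".join(f"Input: {a:.2f} {b:.2f}\nLabel: {label}"
                        for (a, b), label in zip(x[:-1], y[:-1]))
    prompt = f"{context}\nInput: {x[-1, 0]:.2f} {x[-1, 1]:.2f}\nLabel:"
    return prompt, str(y[-1])

# Build a small finetuning set: the model is trained to emit the correct
# label given the serialized in-context examples and the query point.
dataset = [sample_task(seed=i) for i in range(1000)]
print(dataset[0][0][-80:], "->", dataset[0][1])
```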
For TNP (Transformer Neural Process) model training code, please refer to: https://github.com/tung-nd/TNP-pytorch
If you find our work helpful, please consider citing:
@inproceedings{zhao2024probing,
title={Probing the decision boundaries of in-context learning in large language models},
author={Zhao, Siyan and Nguyen, Tung and Grover, Aditya},
booktitle={Proceedings of the 38th International Conference on Neural Information Processing Systems},
pages={130408--130432},
year={2024}
}
