TrGLUE is an NLU benchmark for Turkish; as the name suggests, it is the GLUE benchmark adapted to the Turkish language. You can download the datasets from the HuggingFace repo. For more information about the dataset, the tasks, data curation and more, please visit the HF repo.
The benchmarking code can be found under scripts/. To run a single task, use run_single.sh:
```bash
#!/bin/bash
# Pick the task accordingly; here we pick COLA as an example
python3 run_trglue.py \
    --model_name_or_path dbmdz/bert-base-turkish-cased \
    --task_name cola \
    --max_seq_length 128 \
    --output_dir berturk \
    --num_train_epochs 5 \
    --learning_rate 2e-5 \
    --per_device_train_batch_size 128 \
    --per_device_eval_batch_size 128 \
    --do_train \
    --do_eval \
    --do_predict
```
Available task names are:
- cola
- mnli
- sst2
- mrpc
- qnli
- qqp
- rte
- stsb
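Looping over these names benchmarks every task in turn. A minimal sketch of the idea (shown as a dry run that only echoes each command; the function name and output directories are illustrative, and the remaining hyperparameters from the single-task example above would be added per task):

```shell
#!/bin/bash
# Print the training command for each TrGLUE task. This is a dry run:
# each command is echoed rather than executed; remove "echo" (or pipe
# the output to "bash") to actually launch training.
run_all_tasks() {
  for TASK in cola mnli sst2 mrpc qnli qqp rte stsb; do
    echo python3 run_trglue.py \
      --model_name_or_path dbmdz/bert-base-turkish-cased \
      --task_name "${TASK}" \
      --output_dir "berturk-${TASK}" \
      --do_train --do_eval --do_predict
  done
}

run_all_tasks
```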
To run all the tasks in order, please run run_all.sh. Benchmark results for the BERTurk model and a handful of LLMs can be found in the HF repo and the research paper. Here are the batch sizes and learning rates to replicate the paper results:
- RTE, STS-B, MRPC: batch size 16, learning rate 3e-5
- All other tasks: batch size 128, learning rate 2e-5
We provide another script, run_repro.sh, that uses the parameters above; if you want to reproduce the paper results, we recommend running this script directly.
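The per-task selection above can be expressed as a small case statement. A sketch of the idea (the actual run_repro.sh may be organized differently; the function name is illustrative):

```shell
#!/bin/bash
# Map each TrGLUE task to the hyperparameters used in the paper:
# RTE, STS-B and MRPC use batch size 16 / lr 3e-5; all others 128 / 2e-5.
hparams_for_task() {
  case "$1" in
    rte|stsb|mrpc) echo "16 3e-5" ;;
    *)             echo "128 2e-5" ;;
  esac
}

# Example: read the pair into shell variables for a given task.
read -r BATCH LR <<< "$(hparams_for_task cola)"
echo "cola -> batch size ${BATCH}, lr ${LR}"
```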
We averaged the results over runs with 5 different seeds: 1, 4, 21, 40, 124.
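A multi-seed run amounts to a loop over these seed values. A minimal sketch (dry run: commands are echoed, not executed; the --seed flag is assumed to be forwarded to the HuggingFace Trainer by run_trglue.py, and the function name and output directories are illustrative):

```shell
#!/bin/bash
# Print one CoLA training command per seed used in the paper.
# Dry run: remove "echo" to actually launch training. The --seed
# argument is an assumption (standard HF Trainer flag).
run_cola_seeds() {
  for SEED in 1 4 21 40 124; do
    echo python3 run_trglue.py \
      --model_name_or_path dbmdz/bert-base-turkish-cased \
      --task_name cola \
      --seed "${SEED}" \
      --output_dir "berturk-cola-seed${SEED}" \
      --do_train --do_eval --do_predict
  done
}

run_cola_seeds
```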
A blog post is available on our blog page, and a Medium post was published on the GDE program Medium blog.
The preprint is available at: https://www.arxiv.org/abs/2512.22100
To cite the preprint:
```bibtex
@misc{altinok2025introducingtrgluesentiturcacomprehensive,
  title={Introducing TrGLUE and SentiTurca: A Comprehensive Benchmark for Turkish General Language Understanding and Sentiment Analysis},
  author={Duygu Altinok},
  year={2025},
  eprint={2512.22100},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2512.22100},
}
```
