Automatically annotate papers using LLMs

annotateai automatically annotates papers using Large Language Models (LLMs). While LLMs can summarize, search and generate text about papers, this project focuses on providing human readers with context as they read.

Annotating a paper is a one-line call, as shown in the examples below.
The easiest way to install is via pip and PyPI.
pip install annotateai
Python 3.10+ is supported. Using a Python virtual environment is recommended.
annotateai can also be installed directly from GitHub to access the latest, unreleased features.
pip install git+https://github.com/neuml/annotateai
annotateai can annotate any PDF, but it works especially well for medical and scientific papers. The following shows a series of examples using papers from arXiv.
This project also works well with papers from PubMed, bioRxiv and medRxiv!
The primary input parameter is the path to the LLM. This project is backed by txtai and supports any txtai-supported LLM.
from annotateai import Annotate
# Lightweight but powerful default model
annotate = Annotate("Qwen/Qwen3-4B-Instruct-2507")
# The previous default model uses the now deprecated AutoAWQ library
# Run pip install autoawq to enable
# Note as time goes on, this may require pinning to older versions of transformers & torch
annotate = Annotate("NeuML/Llama-3.1_OpenScholar-8B-AWQ")
# llama.cpp version of the above model
# Run pip install llama-cpp-python to enable
annotate = Annotate(
"bartowski/Llama-3.1_OpenScholar-8B-GGUF/Llama-3.1_OpenScholar-8B-Q4_K_M.gguf"
)
This paper proposed RAG before most of us knew we needed it.
annotate("https://arxiv.org/pdf/2005.11401")

Source: https://arxiv.org/pdf/2005.11401
This paper builds the largest open-source video generation model as of Dec 2024.
annotate("https://arxiv.org/pdf/2412.03603v2")

This paper was presented at the 38th Conference on Neural Information Processing Systems (NeurIPS 2024) Track on Datasets and Benchmarks.
annotate("https://arxiv.org/pdf/2406.14657")

Source: https://arxiv.org/pdf/2406.14657
As mentioned earlier, this project supports any txtai-supported LLM. Some examples are shown below.
pip install txtai[pipeline-llm]
# LLM API services
annotate = Annotate("gpt-5.1")
annotate = Annotate("claude-opus-4-5-20251101")
annotate = Annotate("gemini/gemini-3-pro-preview")
# Ollama endpoint
annotate = Annotate("ollama/gpt-oss")
# llama.cpp GGUF from Hugging Face Hub
annotate = Annotate(
"unsloth/gpt-oss-20b-GGUF/gpt-oss-20b-Q4_K_M.gguf"
)
By default, an annotate instance automatically generates the key concepts to search for. These concepts can also be provided via the keywords parameter.
annotate("https://arxiv.org/pdf/2005.11401", keywords=["hallucinations", "llm"])
This is useful when working with a large batch of papers where a specific set of concepts should be identified to help with a review.
The progress bar can be disabled as follows:
annotate("https://arxiv.org/pdf/2005.11401", progress=False)

neuml/annotateai is a web application available on Docker Hub.
This can be run with the default settings as follows.
docker run -d --gpus=all -it -p 8501:8501 neuml/annotateai
The LLM and other settings can also be set via environment variables.
docker run -d --gpus=all -it -p 8501:8501 -e LLM=unsloth/gpt-oss-20b-GGUF/gpt-oss-20b-Q4_K_M.gguf -e MAXLENGTH=10000 -e n_ctx=4096 neuml/annotateai
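The same settings can also be expressed as a Docker Compose file. The following is a minimal sketch, not an official configuration: the image name and environment variables come from the docker run command above, while the service name is arbitrary.

```yaml
services:
  annotateai:
    image: neuml/annotateai
    ports:
      - "8501:8501"
    environment:
      - LLM=unsloth/gpt-oss-20b-GGUF/gpt-oss-20b-Q4_K_M.gguf
      - MAXLENGTH=10000
      - n_ctx=4096
    # GPU access, equivalent to --gpus=all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Start it with `docker compose up -d`.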
The code for this application can be found in the app folder.