This repository contains experiments and example code for using LangChain with Llamafile to run local LLMs in Python.
## Features

- Python 3 project with Pipenv for easy dependency management
- All required libraries for LangChain and Llamafile are specified in the included `Pipfile`
- Example code for interacting with a local LLM via LangChain in the `src` directory
- Ready-to-use VSCode workspace for convenient development
- Easy switching between different local LLMs downloaded from HuggingFace, e.g. `Mozilla/Llama-3.2-1B-Instruct-llamafile`
## Prerequisites

- Python 3.12 (or a compatible version)
- pipenv (recommended, but any Python environment manager works)
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/brakmic/langchain-experiments.git
   cd langchain-experiments
   ```

2. Install dependencies:

   ```bash
   pipenv install
   ```

   Or, if you prefer another environment manager, use the `Pipfile` as a reference.
3. Download a Llamafile model:

   - Visit HuggingFace or another LLM provider.
   - Download the `.llamafile` model and run it locally (see the model's README for instructions).
4. Set the Llamafile API URL (optional):

   - By default, the code expects the Llamafile server at `http://localhost:8080`.
   - You can override this by setting the `LLAMAFILE_URL` environment variable.
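Reading the override in Python takes one line. Here is a minimal sketch; the helper name `llamafile_url` is hypothetical and not part of the repo:

```python
import os

def llamafile_url() -> str:
    """Return the Llamafile server URL, honoring the optional override."""
    # Falls back to the documented default when LLAMAFILE_URL is unset.
    return os.environ.get("LLAMAFILE_URL", "http://localhost:8080")
```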
## Usage

Source code is in the `src` directory. For example, to run a simple prompt:

```bash
python src/llamafile.py
```
You can open the project in VSCode using the included workspace file:

```bash
code langchain.code-workspace
```
## Notes

- The `.venv` and `local/` directories are excluded via `.gitignore`.
- You can use any Python 3 environment manager if you prefer not to use Pipenv.
- Example models and prompts are provided, but you can adapt them for your own experiments.