A pattern for running SQLite safely under a multi-process Bun server. Multiple worker processes handle reads concurrently via WAL mode, while all writes are serialised through a single queue owned by the primary process — eliminating lock contention without a separate database server.
SQLite allows unlimited concurrent readers but only one writer at a time. When you run a clustered Node/Bun server, each worker process competes for the write lock, producing `SQLITE_BUSY` errors under any meaningful write load.
```
┌─────────────────────────────────────────────────────────┐
│                     Primary process                     │
│                                                         │
│  WriteQueueServer                                       │
│  ┌─────────────────────────────────────────────┐        │
│  │ better-queue (concurrent: 1)                │        │
│  │ → single PrismaClient → single SQLite conn  │        │
│  └─────────────────────────────────────────────┘        │
│            ▲ IPC (WriteRequest)                         │
│            ▼ IPC (WriteResponse)                        │
├──────────────┬──────────────┬───────────────────────────┤
│   Worker 1   │   Worker 2   │         Worker N          │
│              │              │                           │
│ HTTP server  │ HTTP server  │  HTTP server (port 3000)  │
│ own read DB  │ own read DB  │  own read DB (WAL)        │
│ WriteQueue   │ WriteQueue   │  WriteQueue (IPC client)  │
└──────────────┴──────────────┴───────────────────────────┘
```
- Reads — each worker keeps its own `PrismaClient` connection. SQLite WAL mode lets all of them read simultaneously with no contention.
- Writes — workers send a named op + args to the primary over Node cluster IPC. The primary runs them one at a time through `better-queue` and sends the result back.
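The write round trip over IPC can be sketched with message shapes like these (the field names are assumptions; the real definitions live in `src/ipc-types.ts` and may differ). Each request carries a correlation id so the worker can match the primary's reply to the pending promise:

```typescript
// Illustrative IPC message shapes; field names are assumed, not taken
// from src/ipc-types.ts.
interface WriteRequest {
  id: string;    // correlation id generated by the worker
  op: string;    // op name from the write-op registry
  args: unknown; // op-specific arguments
}

interface WriteResponse {
  id: string;       // echoes the request id
  ok: boolean;      // false if the op failed or timed out
  result?: unknown; // op return value on success
  error?: string;   // failure description
}

// Worker side: each in-flight request parks a resolver keyed by id,
// so replies can arrive in any order.
const pending = new Map<string, (res: WriteResponse) => void>();

function handleResponse(msg: WriteResponse): boolean {
  const resolve = pending.get(msg.id);
  if (!resolve) return false; // unknown id, or request already timed out
  pending.delete(msg.id);
  resolve(msg);
  return true;
}
```

A client-side timeout would simply delete its `pending` entry, which is why `handleResponse` treats an unknown id as a no-op rather than an error.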
Key properties:
- Zero `SQLITE_BUSY` errors under write load
- No separate process, daemon, or network hop
- Full type inference — wrong op names or arg shapes are caught at compile time
- Back-pressure: the queue rejects immediately when `MAX_PENDING` is exceeded (default 500)
- Client-side and server-side timeouts prevent stale work from blocking the queue
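The back-pressure rule amounts to a bounded queue that rejects synchronously when full; a minimal sketch of that behaviour (the real wiring goes through better-queue's events):

```typescript
// Sketch of the back-pressure rule: reject once the queue holds
// MAX_PENDING tasks instead of buffering without bound.
const MAX_PENDING = 500;

class BoundedQueue<T> {
  private items: T[] = [];
  constructor(private readonly max: number = MAX_PENDING) {}

  /** Returns false when full; the caller answers QUEUE_FULL / 503. */
  push(item: T): boolean {
    if (this.items.length >= this.max) return false;
    this.items.push(item);
    return true;
  }

  shift(): T | undefined {
    return this.items.shift();
  }

  get size(): number {
    return this.items.length;
  }
}
```

Rejecting at enqueue time means an overloaded primary fails fast with a 503 instead of accumulating work it can never finish before the timeouts fire.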
- Bun — runtime, HTTP server (`Bun.serve`), SQLite adapter
- Prisma (v7, Bun client engine) — schema, migrations, typed query builder
- prisma-adapter-bun-sqlite — Prisma driver adapter for `bun:sqlite`
- better-queue — serial task queue with timeouts and observability events
```
.
├── src/
│   ├── index.ts              # Entry point: forks workers (primary) or starts HTTP (worker)
│   ├── db.ts                 # PrismaClient factory with tuned SQLite PRAGMAs
│   ├── ipc-types.ts          # Shared IPC message types (WriteRequest / WriteResponse)
│   ├── write-queue.ts        # Worker-side IPC client (WriteQueue class)
│   ├── write-queue-server.ts # Primary-side queue server (WriteQueueServer class)
│   └── write-ops/
│       ├── index.ts          # Barrel: merges all op files, exports OpName/OpArgs/OpResult
│       └── post.ts           # Post CRUD operations
├── client.ts                 # Load-test CLI (standalone, stays at root)
├── prisma/
│   └── schema.prisma
└── prisma.config.ts
```
```sh
bun install
```

```sh
# .env
DATABASE_URL="file:./data/dev.db"
```

```sh
bunx --bun prisma migrate dev
bun src/index.ts
```

The primary process forks up to 4 workers (capped at CPU count). All workers listen on port 3000.
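The primary/worker split can be sketched as below, assuming Node's cluster API (which the IPC layer already relies on); the real logic lives in `src/index.ts` and may differ:

```typescript
import cluster from "node:cluster";
import os from "node:os";

// Cap the worker count at 4 or the machine's CPU count, whichever is smaller
function plannedWorkers(cpuCount: number): number {
  return Math.max(1, Math.min(4, cpuCount));
}

// src/index.ts would invoke something like this once at startup
function main(): void {
  if (cluster.isPrimary) {
    // Primary: fork workers, then own the write queue and IPC replies
    const n = plannedWorkers(os.cpus().length);
    for (let i = 0; i < n; i++) cluster.fork(); // each worker re-runs this file
  } else {
    // Worker: open a read-only-ish WAL connection and start the HTTP
    // server on port 3000 (the cluster module shares the listening socket)
  }
}
```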
| Method | Path | Description |
|---|---|---|
| `GET` | `/posts` | List all posts (newest first) |
| `POST` | `/posts` | Create a post `{ title, content? }` |
| `PATCH` | `/posts/:id` | Update a post `{ title?, content? }` |
| `DELETE` | `/posts/:id` | Delete a post |
Every response includes a `worker` field (PID) so you can see requests being distributed across workers.
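One way to attach that field is a small helper like the following (hypothetical; not necessarily how the handlers in this repo build responses):

```typescript
// Hypothetical helper: merge the handler's payload with the worker's PID
// so clients can observe which worker served each request.
function json(data: Record<string, unknown>, status = 200): Response {
  return new Response(JSON.stringify({ ...data, worker: process.pid }), {
    status,
    headers: { "Content-Type": "application/json" },
  });
}
```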
`client.ts` is a standalone script that fires concurrent reads and writes at the server:
```sh
# Default: 5 concurrent reads, 5 concurrent writes (single batch)
bun client.ts

# 20 concurrent reads, 50 concurrent writes
bun client.ts --reads 20 --writes 50

# Sustained load: 100 req/sec for 30 seconds
bun client.ts --rate 100 --duration 30
```

Options:
| Flag | Default | Description |
|---|---|---|
| `--reads <n>` | `5` | Concurrent GET requests per batch |
| `--writes <n>` | `5` | Concurrent POST requests per batch |
| `--rate <n>` | unlimited | Max requests per second |
| `--duration <n>` | single run | How long to run (seconds) |
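The `--rate`/`--duration` mode boils down to a paced fire-and-collect loop; a minimal sketch under that assumption (the actual flag handling and request code live in `client.ts`):

```typescript
// Fire `fire()` roughly `ratePerSec` times per second for `durationSec`
// seconds, then wait for every in-flight request to settle.
async function sustained(
  ratePerSec: number,
  durationSec: number,
  fire: () => Promise<void>,
): Promise<number> {
  const intervalMs = 1000 / ratePerSec;
  const end = Date.now() + durationSec * 1000;
  const inflight: Promise<void>[] = [];
  while (Date.now() < end) {
    inflight.push(fire().catch(() => {})); // one failure must not stop the run
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  await Promise.all(inflight);
  return inflight.length; // number of requests issued
}
```

Collecting the in-flight promises instead of awaiting each one is what keeps the issue rate steady even when individual requests are slow.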
Write ops live in `src/write-ops/`. Each file exports an `as const` object of op functions and their arg types.
1. Create a new op file (e.g. `src/write-ops/comment.ts`):

```ts
import type { PrismaClient } from "../../generated/prisma/client";

export interface CreateCommentArgs {
  postId: string;
  body: string;
}

export const commentWriteOps = {
  createComment: (db: PrismaClient, args: CreateCommentArgs) =>
    db.comment.create({
      data: { postId: args.postId, body: args.body },
    }),
} as const;
```

2. Register it in `src/write-ops/index.ts`:
```ts
export { commentWriteOps } from "./comment.js";

import { postWriteOps } from "./post.js";
import { commentWriteOps } from "./comment.js";

export const writeOps = {
  ...postWriteOps,
  ...commentWriteOps, // ← add here
} as const;
```

`OpName`, `OpArgs<K>`, and `OpResult<K>` all update automatically. The new op is immediately available with full type inference:
```ts
// In a worker — TypeScript enforces args shape and infers the return type
const comment = await writeQueue.write("createComment", {
  postId: "abc",
  body: "Hello!",
});
// comment is typed as Prisma's Comment — no annotation needed
```

`src/db.ts` applies these PRAGMAs on every new connection:
| PRAGMA | Value | Effect |
|---|---|---|
| `journal_mode` | `WAL` (set in schema) | Concurrent readers |
| `synchronous` | `NORMAL` | Durability with less fsync overhead |
| `cache_size` | `-65536` (64 MB) | Larger in-memory page cache |
| `busy_timeout` | `5000` ms | Wait instead of failing on a locked DB |
| `mmap_size` | 128 MB | Memory-mapped I/O for reads |
| `temp_store` | `MEMORY` | Temp tables in RAM |
| `locking_mode` | `EXCLUSIVE` | Single writer holds the lock continuously |
| `threads` | `4` | Parallel query execution |
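Translated into statements, the table corresponds roughly to this PRAGMA batch (illustrative; the exact statements and the hook `src/db.ts` uses to issue them may differ):

```typescript
// Illustrative PRAGMA batch matching the table above. journal_mode=WAL is
// set via the Prisma schema, so it is not repeated here.
const TUNING_PRAGMAS = [
  "PRAGMA synchronous = NORMAL",
  "PRAGMA cache_size = -65536", // negative means KiB, so a 64 MB page cache
  "PRAGMA busy_timeout = 5000", // milliseconds
  "PRAGMA mmap_size = 134217728", // 128 MB
  "PRAGMA temp_store = MEMORY",
  "PRAGMA locking_mode = EXCLUSIVE", // meaningful on the write connection
  "PRAGMA threads = 4",
] as const;

// Apply through whatever raw-SQL hook the connection exposes
function applyPragmas(exec: (sql: string) => void): void {
  for (const stmt of TUNING_PRAGMAS) exec(stmt);
}
```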
The `WriteQueueError` class surfaces structured codes to HTTP handlers:

| Code | HTTP status | Meaning |
|---|---|---|
| `QUEUE_FULL` | 503 | Primary queue at `MAX_PENDING` limit |
| `CLIENT_TIMEOUT` | 504 | Worker gave up waiting for a reply |
| `TIMEOUT` | 504 | Task expired before the primary ran it |
| `UNKNOWN_OP` | 500 | Op name not found in the registry |
| `OP_FAILED` | 500 | The op function threw |
Send `SIGUSR2` to the primary to print queue stats without restarting:

```sh
kill -USR2 <primary-pid>
```

Graceful shutdown (drains the queue before disconnecting):

```sh
kill -TERM <primary-pid>
```