Releases: withcatai/node-llama-cpp

v3.18.1

17 Mar 08:38
57bea3d
3.18.1 (2026-03-17)

Shipped with llama.cpp release b8390

To use the latest llama.cpp release available, run npx -n node-llama-cpp source download --release latest.

v3.18.0

15 Mar 21:18
c641959

3.18.0 (2026-03-15)

Features

  • automatic checkpoints for models that need it (#573) (c641959)
  • QwenChatWrapper: Qwen 3.5 support (#573) (c641959)
  • inspect gpu command: detect and report missing prebuilt binary modules and custom npm registry (#573) (c641959)

Bug Fixes

  • resolveModelFile: deduplicate concurrent downloads (#570) (cc105b9)
  • correct Vulkan URL casing in documentation links (#568) (5a44506)
  • Qwen 3.5 memory estimation (#573) (c641959)
  • grammar use with HarmonyChatWrapper (#573) (c641959)
  • add Mistral think segment detection (#573) (c641959)
  • compress excessively long segments from the current response on context shift instead of throwing an error (#573) (c641959)
  • default thinking budget to 75% of the context size to prevent low-quality responses (#573) (c641959)
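The resolveModelFile deduplication fix above is, in spirit, the classic in-flight-promise pattern: concurrent callers requesting the same URI share a single download. A minimal self-contained sketch of that pattern (hypothetical names; not node-llama-cpp's actual implementation):

```typescript
// Share one in-flight promise per URI so concurrent callers trigger a
// single download. `fakeDownload` stands in for the real network fetch.
const inFlight = new Map<string, Promise<string>>();
let downloadCount = 0;

async function fakeDownload(uri: string): Promise<string> {
    downloadCount++;
    await new Promise((resolve) => setTimeout(resolve, 10));
    return "/models/" + uri.split("/").pop();
}

async function resolveFileOnce(uri: string): Promise<string> {
    const existing = inFlight.get(uri);
    if (existing != null)
        return existing; // reuse the download already in progress

    const promise = fakeDownload(uri)
        .finally(() => inFlight.delete(uri)); // allow future re-downloads

    inFlight.set(uri, promise);
    return promise;
}
```

With this shape, two overlapping calls for the same URI resolve to the same path while the download runs only once; once it settles, the map entry is cleared so a later call can fetch again.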

Shipped with llama.cpp release b8352

v3.17.1

28 Feb 01:51
8931402

3.17.1 (2026-02-28)

Shipped with llama.cpp release b8179

v3.17.0

27 Feb 22:38
dda5ade

3.17.0 (2026-02-27)

Bug Fixes

  • CLI: disable Direct I/O by default (#564) (dda5ade)
  • Bun segmentation fault on process exit with undisposed Llama instance (#564) (dda5ade)
  • detect glibc inside Nix (#564) (dda5ade)

Shipped with llama.cpp release b8169

v3.16.2

21 Feb 20:33
6faa5ae

3.16.2 (2026-02-21)

Shipped with llama.cpp release b8121

v3.16.1

20 Feb 21:39
498711c

3.16.1 (2026-02-20)

Shipped with llama.cpp release b8117

v3.16.0

19 Feb 04:08
57e8c22

3.16.0 (2026-02-19)

Bug Fixes

  • adjust the default VRAM padding config to reserve enough memory for compute buffers (#553) (57e8c22)
  • support function call syntax with optional whitespace prefix (#553) (57e8c22)
  • change the default value of useDirectIo to false (#553) (57e8c22)
  • Vulkan device dedupe (#553) (57e8c22)
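The optional-whitespace-prefix fix can be illustrated with a generic parser sketch. The [[call]] marker and names below are made up for illustration only; node-llama-cpp's real function-call syntax varies per chat wrapper:

```typescript
// Accept an optional whitespace prefix before the call marker, so model
// output like "  [[call]] getWeather(...)" still parses as a function call.
const functionCallPattern = /^\s*\[\[call\]\]\s*(\w+)\((.*)\)\s*$/;

function parseFunctionCall(text: string): {name: string, rawArgs: string} | null {
    const match = functionCallPattern.exec(text);
    if (match == null)
        return null;

    return {name: match[1], rawArgs: match[2]};
}
```

The only change the fix implies is the leading `\s*`, which makes leading whitespace tolerated rather than a parse failure.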

Shipped with llama.cpp release b8095

v3.15.1

26 Jan 03:06
4baa480

3.15.1 (2026-01-26)

Shipped with llama.cpp release b7836

v3.15.0

10 Jan 22:40
734693d

3.15.0 (2026-01-10)

Bug Fixes

  • support new CUDA 13.1 archs (#538) (734693d)
  • build the prebuilt binaries with CUDA 13.1 instead of 13.0 (#538) (734693d)

Shipped with llama.cpp release b7698

v3.14.5

10 Dec 23:38
7e467cc

3.14.5 (2025-12-10)

Shipped with llama.cpp release b7347