
llama.cpp buildcache-cuda (Public, Latest)

Install from the command line
$ docker pull ghcr.io/ggml-org/llama.cpp:buildcache-cuda

Recent tagged image versions

  • Published about 20 hours ago · Digest sha256:f856e845e06d7ba7131242a9be85aa13970384175d678366aabc9a912fe2944d · 6 version downloads
  • Published about 20 hours ago · Digest sha256:e2bf3c97f7adfe37ec90be7422266f1061249245a1e5f93c6ed03c611b878513 · 529 version downloads
  • Published about 20 hours ago · Digest sha256:685b943f08be196721bf335b5d3a607ea0492c7b8a6b8ae0eb2b4805e1cf0df5 · 4 version downloads
  • Published about 20 hours ago · Digest sha256:e0901a12f71eb86e14b13c6da407f5e838bbe3bacdb52b612145d0fe2de5cf62 · 72 version downloads
  • Published about 21 hours ago · Digest sha256:9b5e7382e956867a56f18cb6f5af17b5f91e050c8c13999a0dfd973c7a9a4758 · 0 version downloads
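
Because each published version is addressed by its content digest, a specific build can be pinned by pulling the digest directly instead of the mutable buildcache-cuda tag. A minimal example, using the first digest listed above:

$ docker pull ghcr.io/ggml-org/llama.cpp@sha256:f856e845e06d7ba7131242a9be85aa13970384175d678366aabc9a912fe2944d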

Details

  • Last published: 20 hours ago
  • Discussions: 2.78K
  • Issues: 969
  • Total downloads: 765K