
DiskANN: On-disk graph-based approximate nearest neighbor search 🦀


A Rust implementation of DiskANN (Disk-based Approximate Nearest Neighbor search) using the Vamana graph algorithm. This project provides an efficient and scalable solution for large-scale vector similarity search with minimal memory footprint, as an alternative to the widely used in-memory HNSW algorithm.

Key algorithm

This implementation follows the DiskANN paper's approach:

  • Using the Vamana graph algorithm for index construction, pruning and refinement (in parallel)
  • Memory-mapping the index file for efficient disk-based access (via memmap2)
  • Implementing beam search with medoid entry points (in parallel; see the sketch after this list)
  • Supporting Euclidean, Cosine, Hamming and other distance metrics via a generic distance trait
  • Maintaining minimal memory footprint during search operations
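
To make the search procedure concrete, here is a minimal, self-contained sketch of the beam-search loop over a fixed-degree graph. The `neighbors` and `dist` closures stand in for reading the memory-mapped adjacency lists and computing the query distance; they are illustrative stand-ins, not the crate's API:

use std::collections::HashSet;

// Sketch of beam search: repeatedly expand the closest unexpanded candidate,
// add its neighbors to the candidate pool, and keep only the `beam` best.
fn beam_search(
    entry: u32,
    k: usize,
    beam: usize,
    neighbors: impl Fn(u32) -> Vec<u32>,
    dist: impl Fn(u32) -> f32,
) -> Vec<(u32, f32)> {
    let mut expanded = HashSet::new();
    let mut pool: Vec<(f32, u32)> = vec![(dist(entry), entry)];
    while let Some(i) = pool.iter().position(|(_, u)| !expanded.contains(u)) {
        let u = pool[i].1;
        expanded.insert(u);
        for v in neighbors(u) {
            if !pool.iter().any(|&(_, w)| w == v) {
                pool.push((dist(v), v));
            }
        }
        pool.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
        pool.truncate(beam); // keep only the `beam` closest candidates
    }
    pool.into_iter().take(k).map(|(d, u)| (u, d)).collect()
}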

Features

  • Single-file storage: All index data is stored in one memory-mapped file, laid out as follows (see the offset sketch after this list):

      [ metadata_len:u64 ][ metadata (bincode) ][ padding up to vectors_offset ]
      [ vectors (num * dim * f32) ][ adjacency (num * max_degree * u32) ]
    
      `vectors_offset` is a fixed 1 MiB gap by default.
    
  • Vamana graph construction: Builds an approximate nearest-neighbor graph with robust α-pruning and multi-pass refinement. The default build uses at least two passes, with a first diversification pass at α = 1.0 and a second refinement pass at the user-specified α (default 1.2).

  • Parallel batched graph refinement: Uses rayon to parallelize candidate generation and batched symmetrization/re-pruning during construction for high build throughput.

  • Build-optimized data layout: Uses flat contiguous storage instead of Vec<Vec> during construction to improve cache locality and reduce allocation overhead.

  • Memory-mapped on-disk index: Stores vectors and fixed-degree adjacency lists in a single file and memory-maps it for low-overhead loading and search.

  • Beam-search query algorithm: Uses a medoid entry point and beam search over the graph, typically visiting only a small fraction of indexed vectors

  • Generic over vector element type and distance: Works with generic T and any anndists::Distance, supporting use cases beyond standard floating-point ANN

  • Distance metrics: Support for Euclidean, Cosine, Hamming, and other metrics via anndists, with a generic distance trait that can be extended to custom distances

  • Medoid-based entry points: Uses an approximate medoid as the default search entry point

  • Parallel query processing: Supports concurrent queries with rayon; depending on access patterns, this can increase page activity in the memory-mapped index

  • Minimal memory footprint: Keeps RAM usage well below full index size by relying on mmap rather than fully loading the index into memory.

  • Extensive benchmarks: Speed, accuracy, and memory-consumption benchmarks against HNSW (both in-memory and on-disk)
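
As a concrete illustration of the single-file layout from the first feature bullet, the byte offsets can be computed as follows. The 1 MiB default gap and the 4-byte f32/u32 element sizes come from the layout description; the helper itself is only a sketch, not part of the crate:

// Illustrative offset math for the single-file layout described above.
const VECTORS_OFFSET: u64 = 1 << 20; // default: a fixed 1 MiB gap after the metadata

fn layout_offsets(num: u64, dim: u64, max_degree: u64) -> (u64, u64, u64) {
    let vectors_start = VECTORS_OFFSET;                  // metadata + padding end here
    let vectors_bytes = num * dim * 4;                   // f32 components, 4 bytes each
    let adjacency_start = vectors_start + vectors_bytes; // fixed-degree neighbor lists
    let adjacency_bytes = num * max_degree * 4;          // u32 neighbor ids
    (vectors_start, adjacency_start, adjacency_start + adjacency_bytes)
}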

Visualization of Vamana graph build and search

The Vamana graph build is visualized in 2D with L2 distance. See the diskann-vamana-viz crate for details.

For search, the final graph is used. The path from the entry node (red) to the nearest node (green) for the query (pink) is highlighted in orange.

Usage in Rust 🦀

Building a New Index

use anndists::dist::{DistL2, DistCosine}; // or your own Distance types
use diskann_rs::{DiskANN, DiskAnnParams};

// Your vectors to index (all rows must share the same dimension)
let vectors: Vec<Vec<f32>> = vec![
    vec![0.1, 0.2, 0.3],
    vec![0.4, 0.5, 0.6],
];

// Easiest: build with defaults (max_degree = 64, build_beam_width = 128, alpha = 1.2)
let index = DiskANN::<f32, DistL2>::build_index_default(&vectors, DistL2, "index.db")?;

// Or: custom construction parameters
let params = DiskAnnParams {
    max_degree: 48,        // max neighbors per node
    build_beam_width: 128, // construction beam width
    alpha: 1.2,            // α for pruning
    passes: 2,             // number of initial passes
    extra_seeds: 2,        // number of extra refinement seeds
};
let index2 = DiskANN::<f32, DistCosine>::build_index_with_params(
    &vectors,
    DistCosine {},
    "index_cos.db",
    params,
)?;

Opening an Existing Index

use anndists::dist::DistL2;
use diskann_rs::DiskANN;

// If you built with DistL2 and defaults:
let index = DiskANN::<f32, DistL2>::open_index_default_metric("index.db")?;

// Or, explicitly provide the distance you built with:
let index2 = DiskANN::<f32, DistL2>::open_index_with("index.db", DistL2)?;

Searching the Index

use anndists::dist::DistL2;
use diskann_rs::DiskANN;

let index = DiskANN::<f32, DistL2>::open_index_default_metric("index.db")?;
let query: Vec<f32> = vec![0.1, 0.2, 0.4]; // length must match the indexed dim
let k = 10;
let beam = 256; // search beam width

// (id, distance) pairs for the k nearest vectors
let hits: Vec<(u32, f32)> = index.search_with_dists(&query, k, beam);
// or just the ids of the k nearest vectors
let neighbors: Vec<u32> = index.search(&query, k, beam);

Parallel Search

use anndists::dist::DistL2;
use diskann_rs::DiskANN;
use rayon::prelude::*;

let index = DiskANN::<f32, DistL2>::open_index_default_metric("index.db")?;

// Suppose you have a batch of queries
let query_batch: Vec<Vec<f32>> = /* ... */;

let results: Vec<Vec<u32>> = query_batch
    .par_iter()
    .map(|q| index.search(q, 10, 256))
    .collect();

Space and time complexity analysis

  • Index Build Time: O(n * max_degree * beam_width)
  • Disk Space: n * (dimension * 4 + max_degree * 4) bytes
  • Search Time: O(beam_width * log n) - typically visits < 1% of dataset
  • Memory Usage: O(beam_width) during search
  • Query Throughput: Scales linearly with CPU cores
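
For example, plugging the SIFT-1M settings used in the benchmarks below (n = 1,000,000, dim = 128, max_degree = 64) into the disk-space formula gives roughly 768 MB on disk:

// Back-of-the-envelope disk size for SIFT-1M with max_degree = 64.
let (n, dim, max_degree) = (1_000_000u64, 128, 64);
let bytes = n * (dim * 4 + max_degree * 4); // vectors + adjacency
assert_eq!(bytes, 768_000_000);             // ~768 MB (+ the fixed 1 MiB metadata gap)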

Parameter Tuning

Index Building Parameters

  • max_degree: 32-64 for most datasets
  • build_beam_width: 128-256 for good graph quality
  • alpha: 1.2-2.0 (higher = more diverse neighbors)

Index Search Parameters

  • beam_width: 128 or larger (trade-off between speed and recall)
  • Higher beam_width = better recall but slower search
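
The trade-off is easy to measure directly. Below is a minimal sketch that sweeps beam_width and reports recall, assuming `queries` and `ground_truth` (the true top-k ids per query, e.g. from an ann-benchmarks file) are already loaded:

use anndists::dist::DistL2;
use diskann_rs::DiskANN;

let index = DiskANN::<f32, DistL2>::open_index_default_metric("index.db")?;
let k = 10;
for beam in [64, 128, 256, 512] {
    let mut hits = 0usize;
    for (q, truth) in queries.iter().zip(&ground_truth) {
        let found = index.search(q, k, beam);
        hits += found.iter().filter(|&id| truth.contains(id)).count();
    }
    let recall = hits as f64 / (queries.len() * k) as f64;
    println!("beam_width={beam} recall@{k}={recall:.4}");
}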

Index memory-mapping

When host RAM is not large enough to map the entire database file, it is possible to build the database in several smaller pieces (random split). Users can then search the query against each piece and merge the per-piece results by distance. As long as K' >= K (the per-piece k versus the final k), this is equivalent to the single-big-database approach, but it requires much less RAM for memory-mapping. In practice, the Microsoft Azure Cosmos DB team found that this database-shard idea can even improve recall: intuitively, with fewer points per piece, a larger max_degree and build beam width can be used to further improve accuracy. See their paper here. A minimal sketch of the merge step is shown below.
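
This sketch assumes hypothetical shard files shard_0.db, shard_1.db, and shard_2.db built from a random split (per-shard k' is set equal to k here; any k' >= k works):

use anndists::dist::DistL2;
use diskann_rs::DiskANN;

let shards = ["shard_0.db", "shard_1.db", "shard_2.db"];
let (k, beam) = (10, 256);
let query: Vec<f32> = vec![0.1; 128];

let mut all_hits: Vec<(u32, f32)> = Vec::new();
for path in shards {
    let index = DiskANN::<f32, DistL2>::open_index_default_metric(path)?;
    // ids are local to each shard; a real setup would map them back to global ids
    all_hits.extend(index.search_with_dists(&query, k, beam));
}
all_hits.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
all_hits.truncate(k); // global top-k across all shards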

Building and Testing

# Build the library
cargo build --release

# Run tests
cargo test

# Run demo
cargo run --release --example demo
# Run performance test
cargo run --release --example perf_test

# test MNIST fashion dataset
wget http://ann-benchmarks.com/fashion-mnist-784-euclidean.hdf5
cargo run --release --example diskann_mnist

# test SIFT dataset
wget http://ann-benchmarks.com/sift-128-euclidean.hdf5
cargo run --release --example diskann_sift

Examples

See the examples/ directory for:

  • demo.rs: Demo with 100k vectors
  • perf_test.rs: Performance benchmarking with 1M vectors
  • diskann_mnist.rs: Performance benchmarking with MNIST fashion dataset (60K)
  • diskann_sift.rs: Performance benchmarking with SIFT 1M dataset
  • bigann.rs: Performance benchmarking with SIFT 10M dataset
  • hnsw_sift.rs: Comparison with in-memory HNSW

Benchmark against in-memory HNSW (hnsw_rs crate) for SIFT 1 million dataset

wget http://ann-benchmarks.com/sift-128-euclidean.hdf5
cargo run --release --example diskann_sift
cargo run --release --example hnsw_sift

Results:

## DiskANN,  sift1m , M4 Max
DiskANN benchmark on "./sift-128-euclidean.hdf5"
neighbours shape : [10000, 100]

 10 first neighbours for first vector : 
 932085  934876  561813  708177  706771  695756  435345  701258  455537  872728 
 10 first neighbours for second vector : 
 413247  413071  706838  880592  249062  400194  942339  880462  987636  941776  test data, nb element 10000,  dim : 128

 train data shape : [1000000, 128], nbvector 1000000 
 allocating vector for search neighbours answer : 10000
Train size : 1000000
Test size  : 10000
Ground-truth k per query in file: 100

Building DiskANN index: n=1000000, dim=128, max_degree=64, build_beam=128, alpha=1.2, passes=1, extra_seeds=1
Build complete. CPU time: 3906.747201985s, wall time: 3916.977265733s

Searching 10000 queries with k=10, beam_width=512 …

 mean fraction nb returned by search 1.0

 last distances ratio 1.000009

 recall rate for "./sift-128-euclidean.hdf5" is 0.99971 , nb req /s 24379.064
 total cpu time for search requests 34.90886072s , system time 410.188357ms

 

###  sift1m, hnsw_rs, M4 Max
test_load_hdf5 "./sift-128-euclidean.hdf5"
neighbours shape : [10000, 100]

 10 first neighbours for first vector : 
 932085  934876  561813  708177  706771  695756  435345  701258  455537  872728 
 10 first neighbours for second vector : 
 413247  413071  706838  880592  249062  400194  942339  880462  987636  941776  test data, nb element 10000,  dim : 128

 train data shape : [1000000, 128], nbvector 1000000 
 allocating vector for search neighbours answer : 10000
No saved index. Building new one: N=1000000 layers=16 ef_c=256

  Current scale value : 2.58e-1, Scale modification factor asked : 5.00e-1,(modification factor must be between 2.00e-1 and 1.)
 
 parallel insertion
 setting number of points 50000 
 setting number of points 100000 
 setting number of points 150000 
 setting number of points 200000 
 setting number of points 250000 
 setting number of points 300000 
 setting number of points 350000 
 setting number of points 400000 
 setting number of points 450000 
 setting number of points 500000 
 setting number of points 550000 
 setting number of points 600000 
 setting number of points 650000 
 setting number of points 700000 
 setting number of points 750000 
 setting number of points 800000 
 setting number of points 850000 
 setting number of points 900000 
 setting number of points 950000 
 setting number of points 1000000 
HNSW index saved as: sift1m_l2_hnsw.hnsw.graph / .hnsw.data

 hnsw data insertion cpu time  3014.390977098s  system time Ok(36.777338122s) 
 debug dump of PointIndexation
 layer 0 : length : 999557 
 layer 1 : length : 443 
 debug dump of PointIndexation end
 hnsw data nb point inserted 1000000
searching with ef = 64


 ef_search : 64 knbn : 10 
searching with ef : 64
 
 parallel search
total cpu time for search requests 14137829.0 , system time Ok(156.329364ms) 

 mean fraction nb returned by search 1.0 

 last distances ratio 1.0004259 

 recall rate for "./sift-128-euclidean.hdf5" is 0.98743 , nb req /s 63968.066

License

This project is licensed under the MIT License - see the LICENSE file for details.

References

Jayaram Subramanya, S., Devvrit, F., Simhadri, H. V., Krishnaswamy, R. and Kadekodi, R., 2019. DiskANN: Fast accurate billion-point nearest neighbor search on a single node. Advances in Neural Information Processing Systems, 32.

Acknowledgments

This implementation is based on the DiskANN paper and the official Microsoft implementation. It was also largely inspired by the implementation here.