v1.17.0

@timvisee released this 20 Feb 11:11 · commit 4ab6d2e

Change log

Features 🏋️

  • milestone#38 - Relevance Feedback (docs)
  • milestone#44 - API for detailed report on optimization progress and stages (docs)
  • milestone#40 - API for aggregated telemetry of the whole cluster (docs)
  • milestone#43 - Unlimited update queue to gracefully smooth update spikes (docs)
  • #8071 - Add Audit Access Logging (docs)
  • #8063 - Add Weighted RRF (docs)
  • #7643 - Add config option to control update throughput and prevent unoptimized searches (docs)
  • #7929 - Add configurable read fan-out delay for dealing with tail latency in distributed clusters (docs)
  • #7963 - For upserts, add update_mode parameter to either upsert, update or insert (docs)
  • #7835 - Add secondary API key configuration for zero downtime key rotation in distributed clusters
  • #7838 - Add dedicated HTTP port for /metrics endpoint for internal monitoring
  • #7615 - Add API to list shard keys (docs)
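
Weighted RRF (#8063) extends Reciprocal Rank Fusion by giving each result source its own weight. As an illustration of the underlying formula, score(p) = Σᵢ wᵢ / (k + rankᵢ(p)) — not Qdrant's implementation or query syntax — a minimal sketch:

```python
def weighted_rrf(rankings, weights, k=60):
    """Fuse ranked ID lists with weighted Reciprocal Rank Fusion.

    rankings: one list of point IDs per source, best result first.
    weights:  one weight per source.
    k:        standard RRF smoothing constant.
    """
    scores = {}
    for ranking, weight in zip(rankings, weights):
        for rank, point_id in enumerate(ranking, start=1):
            scores[point_id] = scores.get(point_id, 0.0) + weight / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Dense search prefers "a", sparse prefers "b"; weighting dense 2x keeps "a" first.
fused = weighted_rrf([["a", "b", "c"], ["b", "a", "c"]], weights=[2.0, 1.0])
```

With equal weights the two sources would pull "a" and "b" into a near tie; the weights decide which source dominates. See the linked docs for the actual query parameters.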

Improvements 🤸

  • #7802 - Improve timeout handling on read operations
  • #7750 - Improve timeout handling in update operations, prevent shard failures in case of timed out updates after WAL
  • #8025 - Recover snapshot without creating intermediate files, greatly improving recovery time and disk usage
  • #8059 - Recover snapshots directly into target file system to avoid expensive file movements
  • #7883 - Flush after snapshot unpack with syncfs to persist a large number of files much more efficiently
  • #8072 - Don't lock shard holder structure during creation of a snapshot, which previously blocked shard-level operations
  • #8166 - Add timeout to snapshot downloads, abort if connection gets stuck for more than a minute
  • #8007, #8056 - Improve segments locking approach to minimize lock contention
  • #8105 - Limit number of parallel updates on a shard to 64 to prevent order tracking overhead
  • #8169 - Reduce locking in Gridstore to lower search tail latencies
  • #8164 - Actively free cache memory for closed WAL segments to reduce memory pressure
  • #7952 - Disable in-place payload updates on unindexed fields, improving immutability guarantees of indexed segments and thereby partial snapshots
  • #7887 - Add ability to disable extra HNSW links construction for specific payload indices (docs)
  • #7971 - Enable missing option for vector storage to populate single-file mmap
  • #7928 - Enable io_uring when reading batch of vectors
  • #7919 - Improve error message for datetime parse failures
  • #8053 - Allow configuring load concurrency for collections, shards and segments
  • #7809 - Add more convenient way to provide API-keys for external inference providers (docs)
  • #8093 - Don't lock WAL during serialization of new updates, which was costly for large operations
  • #7834 - Extend WAL retention when replicas are dead, prevent full shard transfers in case of peer failures
  • #7565 - Disable old shard key format deprecated in 1.15.0
  • #8125 - Skip building extra HNSW links for deleted vectors
  • #8163 - Improve search result processing to use less CPU with a high search limit
  • #8175 - Use fewer allocations for HNSW plain filtered search
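
#8105 caps the number of parallel updates on a shard at 64 to keep order-tracking overhead bounded. The general technique — limiting in-flight work with a semaphore — can be sketched like this (illustrative Python, not Qdrant's Rust internals):

```python
import asyncio

MAX_PARALLEL_UPDATES = 64  # cap from #8105, used here for illustration


async def apply_update(sem: asyncio.Semaphore, op_id: int, applied: list) -> None:
    async with sem:             # at most 64 updates in flight at once
        await asyncio.sleep(0)  # stand-in for the actual write work
        applied.append(op_id)


async def run_batch(n: int) -> list:
    sem = asyncio.Semaphore(MAX_PARALLEL_UPDATES)
    applied: list = []
    # All n updates are submitted, but the semaphore bounds concurrency,
    # so the tracker only ever has to order a small window of operations.
    await asyncio.gather(*(apply_update(sem, i, applied) for i in range(n)))
    return applied


applied = asyncio.run(run_batch(200))
```

The point of the cap is that update spikes queue behind the semaphore instead of flooding the shard, trading a little latency for predictable bookkeeping cost.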

Bug Fixes 🤹

  • #7850 - Fix flush ordering to follow segment dependencies, preventing data loss from copy-on-write on flush interruption
  • #8103 - Fix data race in stream records transfer potentially missing ongoing updates
  • #7983 - Fix interlocking problem on creation of payload index
  • #7999 - Fix interlocking problem on collection-level update operations
  • #8131 - Fix deadlock during snapshot with concurrent updates
  • #8128 - Fix gRPC/HTTP2 too_many_internal_resets error due to how we internally cancel ongoing requests
  • #8019 - Improve handling of HTTP2 channels closing in connection pool
  • #8104 - Fix data race in WAL and shard clocks snapshot, ensure they remain consistent
  • #7961 - Fix using incorrect versions in partial snapshot manifest construction
  • #8095 - Fix incorrect internal protocol usage for shard snapshot transfers
  • #7950 - Fix integer overflow in query batch when using high limits
  • #7972 - Fix search aggregator panic with limit 0
  • #8100 - Fix round floats not being used in the integer index; JSON doesn't distinguish between integers and floats
  • #8097 - Fix score_threshold not being used in score boosting queries
  • #7877 - Fix corrupted ID tracker mapping storage when the disk is full
  • #7944 - Fix gRPC API response status counting in telemetry and metrics
  • #7857 - Fix total count in progress tracker for replicate points with filter
  • #7856 - Fix creation of payload index in empty collection using user-defined sharding
  • #8099 - Fix ignoring CA certs for internal requests if configured
  • #8176 - Add missing timeout parameter to some endpoints
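
The root of #8100 is that JSON has a single number type: a whole value may arrive as 3 or as 3.0 depending on the client. A minimal illustration of the ambiguity, with one normalization strategy (Qdrant's exact coercion rule may differ):

```python
import json

# After parsing, 3 is an int and 3.0 a float, yet a client may legitimately
# send either form for the same integer payload field.
payload = json.loads('{"count_int": 3, "count_float": 3.0}')

# An integer index has to map both forms to the same key; rounding the float
# form is one such strategy.
normalized = {k: int(round(v)) for k, v in payload.items()}
```

Before the fix, round floats like 3.0 were simply not used by the integer index, so filters on such values could silently miss points.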

Qdrant Edge 🔪

Qdrant Edge is an in-process version of Qdrant. It shares the same internals, storage format, and points API as the server version, but is designed to run locally. Qdrant Edge is compatible with the server version and can read shard snapshots created by it. More documentation is available here.

Deprecations ⚠️

  • Starting from v1.17.0, Qdrant changes the response format for vector fields in the gRPC interface. All official Qdrant clients have already been adapted to this change, so please make sure you upgrade your client libraries and check that you are not using deprecated fields. More info: #7183

  • Upcoming deprecations:

    • In Qdrant v1.18.x, all deprecated search methods will be removed completely and won't be available even from old client libraries.
    • In Qdrant v1.17.x, we will completely remove RocksDB support in favor of Gridstore, which means a direct upgrade from v1.15.x to v1.17.x won't be possible. Please follow the upgrade instructions and upgrade one minor version at a time to avoid unsupported storage errors. Note that Qdrant Cloud infrastructure automatically generates proper upgrade steps, so you don't have to worry about this.