Current Behavior
In lib/collection/src/collection/query.rs, inside the do_query_batch function, the logic aggregates limit and offset values using the standard .sum() method:
// Lines 223-224 in lib/collection/src/collection/query.rs
let sum_limits: usize = requests_batch.iter().map(|s| s.limit).sum();
let sum_offsets: usize = requests_batch.iter().map(|s| s.offset).sum();
When the sum of limits in a batch request exceeds usize::MAX, the addition overflows: in release builds it wraps silently (in debug builds it would panic). Consequently, require_transfers (Line 227) is calculated from this wrapped, artificially small value. The optimization check on Line 231 (is_required_transfer_large_enough) therefore operates on incorrect data and may select an optimization path intended for small data loads when the actual load is massive.
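The wrap can be reproduced in isolation. The sketch below is standalone (not Qdrant code) and uses wrapping_add to model release-mode overflow deterministically; a plain .sum() over the same values would panic in a debug build instead of wrapping:

```rust
fn main() {
    // Per-request limit from the repro script: (u64::MAX + 50) / 10,
    // computed in 128-bit arithmetic so it does not overflow here.
    let limit = ((u64::MAX as u128 + 50) / 10) as u64;

    // True total of ten such limits, kept in u128 so it cannot wrap.
    let true_total = limit as u128 * 10;
    assert!(true_total > u64::MAX as u128); // the real load exceeds u64::MAX

    // What a wrapping sum (release-mode overflow semantics) produces instead.
    let wrapped = std::iter::repeat(limit)
        .take(10)
        .fold(0u64, |acc, x| acc.wrapping_add(x));

    println!("true total = {true_total}, wrapped sum = {wrapped}");
}
```

Note that the integer division in (u64::MAX + 50) // 10 makes the wrapped sum come out to 44 rather than exactly 50; either way it is tiny compared to the true load.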
Steps to Reproduce
- Start a Qdrant instance.
- Run the following Python script, which sends a batch of 10 requests whose limits are constructed to total roughly u64::MAX + 50.
- The server accepts the request (200 OK) because the sum wraps around to a small value instead of being rejected.
import requests

URL = "http://localhost:6333"
COLL = "overflow_test"
MAX_U64 = 2**64 - 1

# Setup
requests.delete(f"{URL}/collections/{COLL}")
requests.put(f"{URL}/collections/{COLL}", json={"vectors": {"size": 4, "distance": "Cosine"}})
requests.put(f"{URL}/collections/{COLL}/points", json={
    "points": [{"id": i, "vector": [0.1]*4} for i in range(1, 101)]
})

# Exploit: each of the 10 requests carries (u64::MAX + 50) // 10 as its limit
limit_overflow = (MAX_U64 + 50) // 10
payload = {
    "searches": [{"query": [0.1]*4, "limit": limit_overflow} for _ in range(10)]
}
print("Sending batch with total intended limit > 2^64...")
r = requests.post(f"{URL}/collections/{COLL}/points/query/batch", json=payload)
if r.status_code == 200:
    print("FAIL: Server accepted overflowed limit.")
else:
    print("PASS: Server rejected request.")
Expected Behavior
The aggregation of limits should use saturating arithmetic. If the total limit would exceed usize::MAX, it should be capped at usize::MAX (or the request rejected with an error) so that downstream resource calculations reflect the true load.
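If rejecting the request outright is preferred over capping, checked_add can surface the overflow as an error. A hypothetical sketch (the helper name and error type are illustrative; Qdrant would use its own CollectionError):

```rust
/// Sums per-request limits, returning an error if the total would
/// overflow usize. Illustrative only; the error type is a placeholder.
fn sum_limits_checked(limits: &[usize]) -> Result<usize, String> {
    limits.iter().try_fold(0usize, |acc, &l| {
        acc.checked_add(l)
            .ok_or_else(|| "total limit overflows usize".to_string())
    })
}

fn main() {
    // Normal batches sum as usual; an overflowing batch is rejected.
    assert_eq!(sum_limits_checked(&[10, 20, 30]), Ok(60));
    assert!(sum_limits_checked(&[usize::MAX, 1]).is_err());
    println!("checked aggregation behaves as expected");
}
```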
Possible Solution
Replace the standard .sum() with a fold operation using saturating_add.
File: lib/collection/src/collection/query.rs
Current Code:
let sum_limits: usize = requests_batch.iter().map(|s| s.limit).sum();
let sum_offsets: usize = requests_batch.iter().map(|s| s.offset).sum();
Proposed Fix:
let sum_limits: usize = requests_batch.iter()
.fold(0usize, |acc, s| acc.saturating_add(s.limit));
let sum_offsets: usize = requests_batch.iter()
.fold(0usize, |acc, s| acc.saturating_add(s.offset));
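A quick standalone check (with assumed limit values, not Qdrant code) that the saturating fold caps at usize::MAX instead of wrapping:

```rust
fn main() {
    // Two limits whose true total exceeds usize::MAX.
    let limits = [usize::MAX / 2 + 10, usize::MAX / 2 + 10];

    // Same shape as the proposed fix: fold with saturating_add.
    let sum_limits: usize = limits
        .iter()
        .fold(0usize, |acc, &l| acc.saturating_add(l));

    // The total saturates at usize::MAX rather than wrapping to a small value.
    assert_eq!(sum_limits, usize::MAX);
    println!("saturated sum = {sum_limits}");
}
```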
Context (Environment)
- Affected Code:
lib/collection/src/collection/query.rs (Lines 223-227)
- Impact: Bypassing resource checks, wrong optimization path selection, potential DoS via memory exhaustion.
Impact Summary
Integer overflow causes the server to underestimate the total record count, bypassing the is_required_transfer_large_enough safeguard. This forces massive queries into an unoptimized memory path, potentially leading to Denial of Service (DoS) or out-of-memory crashes.