
Conversation

@benwtrent
Member

@benwtrent benwtrent commented Jan 22, 2025

This is a continuation and completion of the work started by @benchaplin in #14085

The algorithm is fairly simple:

  • Only score and then explore vectors that actually match the filtering criteria
  • Since this will make the graph even sparser, the search spread is increased to also include the candidate's neighbors' neighbors (i.e. generally maxConn * maxConn exploration)
  • Additionally, even more scored candidates for a given NSW are considered to combat the increased sparsity

Some of the changes to the baseline Acorn algorithm are:

  • There is some general threshold of filtering that bypasses this algorithm altogether. Default suggestion is 60%.
  • The number of additional neighbors explored is predicated on the percentage of the immediate neighborhood that is filtered out
  • More extended neighborhoods will be explored if the filter is exceptionally restrictive. This attempts to find valid vectors to score and explore for every candidate.
  • Only look at the extended neighbors if less than 90% of the current neighborhood matches the filter (see the sketch after this list).
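
To make the exploration rules above concrete, here is a minimal Java sketch. It is illustrative only, not the actual Lucene implementation: the `Graph` interface, the `nodesToScore` helper, and the `IntPredicate` filter are hypothetical stand-ins, while the 90% threshold and the maxConn * maxConn cap come straight from the bullets above. Bookkeeping for already-visited nodes is omitted for brevity.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;

final class FilteredExpansionSketch {

  /** Hypothetical graph accessor: the node ids connected to {@code node} in the current layer. */
  interface Graph {
    int[] neighbors(int node);
  }

  /**
   * Collects the nodes worth scoring for one candidate. Only nodes that pass the filter are
   * returned; when fewer than 90% of the immediate neighbors pass, neighbors-of-neighbors are
   * considered too, with exploration capped at maxConn * maxConn.
   */
  static List<Integer> nodesToScore(Graph graph, int candidate, IntPredicate filter, int maxConn) {
    int[] neighbors = graph.neighbors(candidate);
    List<Integer> toScore = new ArrayList<>();
    int matching = 0;
    for (int n : neighbors) {
      if (filter.test(n)) {
        toScore.add(n);
        matching++;
      }
    }
    // Only reach into the extended neighborhood when the immediate one is mostly filtered out.
    if (neighbors.length > 0 && matching < 0.9 * neighbors.length) {
      int cap = maxConn * maxConn; // upper bound on exploration for this candidate
      int explored = neighbors.length;
      for (int n : neighbors) {
        for (int nn : graph.neighbors(n)) {
          if (explored++ >= cap) {
            return toScore;
          }
          if (filter.test(nn)) {
            toScore.add(nn);
          }
        }
      }
    }
    return toScore;
  }
}
```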

Here are some numbers for 1M vectors, float32 and then int4 quantized.

https://docs.google.com/spreadsheets/d/1GqD7Jw42IIqimr2nB78fzEfOohrcBlJzOlpt0NuUVDQ

Here is the "nightly" dataset (but I merged to a single segment)

https://docs.google.com/spreadsheets/d/1gk1uybtqleVtDUfhWXActyhW8q_lgG1mlMrOohnJRJA

Since this changes the behavior significantly, and there are still some weird edge cases, I am exposing this as a parameter via a new abstraction called KnnSearchStrategy that the collectors can provide. This strategy object can be passed to the queries.
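
For illustration, here is a hedged sketch of what supplying such a strategy at query time might look like. The exact class names, package, and constructor overloads below (KnnSearchStrategy.Hnsw, its threshold argument, and a KnnFloatVectorQuery overload that accepts a strategy) are assumptions made for the example and may not match the final API.

```java
// Hypothetical usage sketch: the strategy class, its constructor argument, and the
// query overload taking a strategy are assumptions, not a confirmed API.
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.KnnFloatVectorQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.knn.KnnSearchStrategy;

final class FilteredKnnSketch {
  static TopDocs filteredKnn(IndexSearcher searcher, Query filter, float[] queryVector)
      throws java.io.IOException {
    // Assumed: an HNSW strategy configured with the suggested 60% filtered-search threshold.
    KnnSearchStrategy strategy = new KnnSearchStrategy.Hnsw(60);
    // Assumed: a query constructor that accepts the strategy alongside the filter.
    KnnFloatVectorQuery query =
        new KnnFloatVectorQuery("vector", queryVector, 100, filter, strategy);
    return searcher.search(query, 100);
  }
}
```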

closes: #13940

@benwtrent benwtrent added this to the 10.2.0 milestone Jan 22, 2025
@benwtrent
Member Author

@msokolov I wonder what your opinion is here?

Do you think the behavior/result change is worth waiting for a major release? I do think folks should be able to use this now, but also be able to opt out.

Another option I can think of is injecting a parameter directly into SPI loading for the HNSW vector readers, but I am not 100% sure how to do that. It does seem like this should be a "global" configuration for a given Lucene instance rather than one provided at query time.
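
As a purely hypothetical illustration of the "global configuration" idea (not something Lucene exposes today), a JVM-wide toggle could be read once when the reader is constructed; the property name below is invented:

```java
// Hypothetical sketch only: Lucene does not define this system property.
// A single JVM-wide flag, read once, that a reader implementation could consult
// instead of a per-query parameter.
final class GlobalFilteredSearchToggle {
  static final boolean FILTER_AWARE_SEARCH =
      Boolean.getBoolean("lucene.hnsw.filterAwareSearch"); // false when the property is unset
}
```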

@benwtrent
Member Author

benwtrent commented Jan 30, 2025

EDIT: Ignore these numbers for the candidate; they contain a bug.


I ran this over the "nightly" dataset (8M 768-dim vectors), with no force merging; I think this matches the nightly behavior. I ran over various filter criteria (I think nightly uses 5%).

BASELINE

recall  latency (ms)     nDoc  topK  fanout  visited  selectivity
 1.000       110.216  8000000   100      50    79846        0.010
 0.982       137.185  8000000   100      50   215393        0.050
 0.974        85.933  8000000   100      50   144953        0.100
 0.965        73.476  8000000   100      50    86333        0.200
 0.958        58.347  8000000   100      50    64055        0.300
 0.952        34.021  8000000   100      50    51634        0.400
 0.944        32.818  8000000   100      50    43643        0.500
 0.940        29.538  8000000   100      50    38200        0.600
 0.936        26.965  8000000   100      50    34205        0.700
 0.930        25.453  8000000   100      50    30989        0.800
 0.926        23.585  8000000   100      50    28482        0.900
 0.924        23.926  8000000   100      50    27318        0.950
 0.922        23.306  8000000   100      50    26481        0.990
recall  latency (ms)     nDoc  topK  fanout  visited  selectivity
 0.640        28.972  8000000   100      50    10709        0.010
 0.855        34.103  8000000   100      50    20845        0.050
 0.908        37.990  8000000   100      50    36339        0.100
 0.922        47.513  8000000   100      50    54472        0.200
 0.903        46.094  8000000   100      50    56451        0.300
 0.894        41.164  8000000   100      50    52235        0.400
 0.870        30.850  8000000   100      50    36989        0.500
 0.881        28.043  8000000   100      50    34102        0.600
 0.896        27.725  8000000   100      50    33346        0.700
 0.904        25.472  8000000   100      50    31135        0.800
 0.913        23.670  8000000   100      50    26715        0.900
 0.918        23.148  8000000   100      50    26193        0.950
 0.922        22.982  8000000   100      50    26425        0.990

The goal is generally "higher recall with fewer visited vectors". A convenient single value to capture this is recall/visited: as visited decreases or recall increases, the value goes up, so higher is better.

I graphed this ratio (multiplying by 100_000 to make the values saner looking)
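
As a worked example of that figure of merit (values taken from the baseline row at selectivity 0.010 above):

```java
// recall / visited, scaled by 100_000 for readability; higher is better.
public class RecallPerVisited {
  public static void main(String[] args) {
    double recall = 1.000;   // baseline recall at selectivity 0.010
    long visited = 79_846;   // vectors visited for that row
    System.out.println(recall / visited * 100_000); // ≈ 1.25
  }
}
```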

(chart: recall/visited ratio by selectivity, baseline vs. candidate)

So, on the nightly dataset, this ratio is significantly improved, by as much as 5x.

I am currently force-merging and will re-run.

Here is some more data for the candidate only, at 0.05 filtering with increasing fanout:

recall  latency (ms)     nDoc  topK  fanout  visited  selectivity
 0.855        29.257  8000000   100      50    20845        0.050
 0.859        30.215  8000000   100      60    21514        0.050
 0.862        31.189  8000000   100      70    22134        0.050
 0.866        31.998  8000000   100      80    22718        0.050
 0.868        32.896  8000000   100      90    23294        0.050
 0.871        33.569  8000000   100     100    23877        0.050
 0.873        29.677  8000000   100     110    24447        0.050
 0.875        34.983  8000000   100     120    24978        0.050
 0.877        34.644  8000000   100     130    25494        0.050
 0.879        36.034  8000000   100     140    26015        0.050
 0.881        36.557  8000000   100     150    26533        0.050
 0.883        36.708  8000000   100     160    27034        0.050
 0.884        36.946  8000000   100     170    27534        0.050
 0.886        38.691  8000000   100     180    27999        0.050
 0.888        39.257  8000000   100     190    28503        0.050
 0.890        39.152  8000000   100     200    28955        0.050
 0.891        40.726  8000000   100     210    29453        0.050
 0.892        41.062  8000000   100     220    29895        0.050
 0.893        40.994  8000000   100     230    30319        0.050
 0.895        41.713  8000000   100     240    30736        0.050
 0.896        42.321  8000000   100     250    31180        0.050

@benwtrent
Member Author

benwtrent commented Jan 31, 2025

EDIT: @benchaplin fixed a bug, and now the candidate looks way better. We should test on some more datasets, but it seems to me the new filtered search may work all the way up to 90% of documents filtered out.

I did some more testing, this time on a single segment of our nightly runs. The recall & latency pattern is much healthier with this change, though the recall is lower. The only reason recall is so high for the restrictive filters in the baseline is that it over-eagerly drops to brute force because it spends way too much time doing vector comparisons.
(chart: recall and latency by selectivity, baseline vs. candidate)

BASELINE

recall  latency (ms)     nDoc  topK  fanout  visited  selectivity
 1.000       131.763  8000000   100      50    79814        0.010
 0.924        50.518  8000000   100      50    53003        0.050
 0.912        18.970  8000000   100      50    30095        0.100
 0.896        10.697  8000000   100      50    16942        0.200
 0.884         7.509  8000000   100      50    12057        0.300
 0.876         5.763  8000000   100      50     9476        0.400
 0.869         4.792  8000000   100      50     7905        0.500
 0.863         4.184  8000000   100      50     6777        0.600
 0.858         3.781  8000000   100      50     5966        0.700
 0.853         3.403  8000000   100      50     5351        0.800
 0.850         3.084  8000000   100      50     4855        0.900
 0.849         3.044  8000000   100      50     4645        0.950
 0.848         2.927  8000000   100      50     4492        0.990

Candidate:

recall  latency (ms)     nDoc  topK  fanout  visited  selectivity
 0.923         5.501  8000000   100      50     1471        0.010
 0.980         5.598  8000000   100      50     1286        0.050
 0.986         5.238  8000000   100      50     2001        0.100
 0.981         6.250  8000000   100      50     3158        0.200
 0.984         5.132  8000000   100      50     3742        0.300
 0.978         4.231  8000000   100      50     4354        0.400
 0.965         4.270  8000000   100      50     5018        0.500
 0.967         3.446  8000000   100      50     5689        0.600
 0.960         3.141  8000000   100      50     6265        0.700
 0.946         2.857  8000000   100      50     6766        0.800
 0.904         2.117  8000000   100      50     6898        0.900
 0.872         1.801  8000000   100      50     6954        0.950
 0.866         1.739  8000000   100      50     7153        0.990

@benwtrent
Member Author

Yeah, with the bugfix in place, it looks like this patch may not even need to be configurable. Let's confirm with more datasets, @benchaplin.

@benwtrent
Member Author

OK, the current implementation is about as good as I can get it.

  • We explore beyond neighbors-of-neighbors if we gathered fewer than maxConn/4 vectors to score
  • We will explore at most maxConn * maxConn total vectors (see the sketch below)
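
A minimal sketch of those two stopping rules (hypothetical method and variable names, not the actual implementation):

```java
// Illustrative only: mirrors the two bullets above.
final class ExpansionRules {
  /** Whether to keep exploring beyond neighbors-of-neighbors for the current candidate. */
  static boolean exploreBeyondNeighborNeighbors(int gathered, int explored, int maxConn) {
    return gathered < maxConn / 4        // too few filtered candidates gathered so far
        && explored < maxConn * maxConn; // and the total exploration cap is not yet reached
  }
}
```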

However, one thing that bothers me is that increasing k doesn't guarantee better results. This indicates to me that we take erroneous paths when the score threshold is low (e.g. we haven't gathered enough results).

1M Cohere vectors, maxConn=16, efConstruction=100
recall  latency(ms)     nDoc  topK  fanout  visited  selectivity
 0.717        1.340  1000000   100       0     1385        0.050
 0.755        1.790  1000000   100      20     1680        0.050
 0.775        1.950  1000000   100      40     1854        0.050
 0.786        2.270  1000000   100      60     2023        0.050
 0.825        2.560  1000000   100      80     2283        0.050
 0.841        3.160  1000000   100     100     2523        0.050
 0.859        4.030  1000000   100     120     2795        0.050
 0.859        3.810  1000000   100     140     2989        0.050
 0.896        4.220  1000000   100     160     3270        0.050
 0.880        4.550  1000000   100     180     3561        0.050
 0.906        4.670  1000000   100     200     3705        0.050
 0.888        4.810  1000000   100     220     3981        0.050
 0.921        4.820  1000000   100     240     4157        0.050
 0.896        5.700  1000000   100     260     4364        0.050
 0.925        6.140  1000000   100     280     4672        0.050
 0.920        5.380  1000000   100     300     4870        0.050
 0.906        7.050  1000000   100     320     5190        0.050
 0.914        7.390  1000000   100     340     5303        0.050
 0.920        7.570  1000000   100     360     5450        0.050
 0.919        7.420  1000000   100     380     5784        0.050
 0.923        7.550  1000000   100     400     5997        0.050
 0.939        8.080  1000000   100     420     6217        0.050
 0.936        7.170  1000000   100     440     6202        0.050
 0.936        8.550  1000000   100     460     6627        0.050
 0.921       11.800  1000000   100     480     6949        0.050
 0.916        9.630  1000000   100     500     6957        0.050

@benwtrent
Member Author

OK, I checked the current search, and it seems to have the same issue (increasing k doesn't monotonically increase recall).

@benwtrent
Member Author

Barring any further discussion, I am gonna merge this soon. While the filtering search still isn't perfect, this is a marked improvement.

@benwtrent benwtrent merged commit 0614fe9 into apache:main Feb 13, 2025
6 checks passed
@benwtrent benwtrent deleted the acorn_search branch February 13, 2025 16:46
benwtrent added a commit that referenced this pull request Feb 13, 2025
@benwtrent
Member Author

Annot & other potpourri I fixed during the prefiltered benchmarking for this change. mikemccand/luceneutil#337

@msokolov
Contributor

msokolov commented Feb 17, 2025

> Do you think the behavior/result change is worth waiting for a major release? I do think folks should be able to use this now, but also be able to opt out.

Sorry, I did not follow the progress here - it looks like a nice contribution, though! I think we ended up with a per-query modification (rather than a global setting of some sort), is that right? If so, I think that makes sense, assuming no index-level changes were required for it. Oh wait, no, I see - this is now the default behavior (dynamically selected), so no configuration is needed. The only question is whether we could somehow make it available on 10.x, since it would be a behavior change - though I'm not sure that question is still relevant. If the results are uniformly better and there is no API breakage, would there be any concern with backporting to 10.x?

@benwtrent
Member Author

> The only question is whether we could somehow make it available on 10.x, since it would be a behavior change - though I'm not sure that question is still relevant. If the results are uniformly better and there is no API breakage, would there be any concern with backporting to 10.x?

@msokolov I did backport to 10.x, but due to the recall differences I kept the current default behavior there. Folks can opt in by supplying a query-time parameter.

I do think we can get uniformly better results eventually, but I thought it was "good enough" for now.

benwtrent pushed a commit that referenced this pull request Mar 25, 2025
…t return enough results (#14274)

When doing approximate knn search, it's possible that the approximate search returns fewer than k results. If there is a filter, we know the filter cost, so we can check whether there are actually more than k results that match the filter.

In that case, we can switch to exact search to ensure all possible results are returned, as the approximate search was not able to find all results that matched the filter.

This was already done by #14160 - this PR adds tests to check this specific case.
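
A tiny sketch of that fallback decision (hypothetical names), mirroring the description above:

```java
// Illustrative only: fall back to exact (brute-force) search when the approximate
// phase returned fewer than k hits but the filter is known to match more than k docs.
final class ExactSearchFallback {
  static boolean shouldFallBackToExact(int approximateHits, int k, long filterCost) {
    return approximateHits < k && filterCost > k;
  }
}
```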
mrkm4ntr pushed a commit to mrkm4ntr/lucene that referenced this pull request Jun 10, 2025
Merging this pull request closes: Look into ACORN-1, or another algorithm to aid in filtered HNSW search.