
Add a HNSW collector that exits early when nearest neighbor queue saturates #14094

Status: Open — wants to merge 44 commits into base: main
Conversation

@tteofili (Contributor) commented Jan 2, 2025

This introduces an HnswKnnCollector interface, extending KnnCollector for HNSW, making it possible to hook into HNSW execution for optimizations.
It then adds a new collector that uses a saturation-based threshold to dynamically halt HNSW graph exploration, exiting early when exploring further candidates is unlikely to add new neighbors.
The new collector records the number of neighbors added while exploring each candidate (an HNSW node) and compares it with the number added while exploring the previous candidate; when the rate of added neighbors plateaus for a number of consecutive iterations, graph exploration stops (earlyTerminate returns true).
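The plateau check described above could be sketched roughly like this (illustrative only: the class name, the `nextCandidate(int)` shape, and the ratio-based saturation test are assumptions for the sketch, not the PR's actual code):

```java
/**
 * Minimal sketch of the saturation-based early exit: track the nearest
 * neighbor queue size per explored candidate and stop once it plateaus
 * for more than `patience` consecutive iterations.
 */
final class SaturationChecker {
  private final double saturationThreshold; // e.g. 0.995
  private final int patience;               // saturated iterations to tolerate
  private int previousQueueSize = 0;
  private int saturatedCount = 0;

  SaturationChecker(double saturationThreshold, int patience) {
    this.saturationThreshold = saturationThreshold;
    this.patience = patience;
  }

  /**
   * Called when exploration moves to the next candidate node; returns true
   * once the queue has barely grown for more than `patience` iterations.
   */
  boolean nextCandidate(int currentQueueSize) {
    // "saturated" = the queue grew by less than (1 - threshold) since the
    // previous candidate, i.e. almost no new neighbors were accepted
    boolean saturated = currentQueueSize > 0
        && (double) previousQueueSize / currentQueueSize >= saturationThreshold;
    saturatedCount = saturated ? saturatedCount + 1 : 0;
    previousQueueSize = currentQueueSize;
    return saturatedCount > patience; // would drive earlyTerminate()
  }
}
```

With a growing queue the counter stays at zero; once the queue size stabilizes, the counter climbs and exploration halts after `patience` stable iterations.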

@tteofili (Contributor Author):

[screenshot 2025-01-15: saturation curve of collected nearest neighbors]
This sample graph (from Cohere-768) shows how the collection of nearest neighbors saturates, so it makes sense to stop visiting the graph earlier, e.g., once the saturation counter exceeds a given threshold.

Comment on lines 20 to 24
public interface HnswKnnCollector extends KnnCollector {

/** Indicates exploration of the next HNSW candidate graph node. */
void nextCandidate();
}
Member:

I think this kind of collector is OK. But it makes most sense to me to be a delegate collector — an abstract collector similar to KnnCollector.Delegate.

Then, I also think that the OrdinalTranslatingKnnCollector should inherit directly from HnswKnnCollector, always assuming that the passed-in collector is a HnswKnnCollector.

Note, the default behavior for HnswKnnCollector#nextCandidate can simply be nothing, allowing for overriding.

This might require a new HnswGraphSearcher#search interface to keep the old collector actions, but it should be simple to add a new one that accepts a HnswKnnCollector and delegates to it via new HnswKnnCollector(KnnCollector delegate).
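The delegate shape being suggested could be sketched like this (simplified: KnnCollector is reduced to two methods and the names carry a Sketch suffix to make clear this is not the actual Lucene API):

```java
// Sketch of a delegate collector with a no-op hook. The real KnnCollector
// interface has more methods; only those relevant here are shown.
interface KnnCollectorSketch {
  boolean collect(int docId, float similarity);
  boolean earlyTerminated();
}

abstract class HnswKnnCollectorSketch implements KnnCollectorSketch {
  private final KnnCollectorSketch delegate;

  HnswKnnCollectorSketch(KnnCollectorSketch delegate) {
    this.delegate = delegate;
  }

  @Override
  public boolean collect(int docId, float similarity) {
    return delegate.collect(docId, similarity); // forward to the wrapped collector
  }

  @Override
  public boolean earlyTerminated() {
    return delegate.earlyTerminated();
  }

  /** Default is a no-op, so existing collectors can be wrapped unchanged. */
  void nextCandidate() {}
}
```

The no-op default means a searcher can unconditionally call `nextCandidate()` and only saturation-aware subclasses need to override it.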

Member:

I adjusted my refactoring for the seeded queries similarly. It seems nicer IMO: #14170

Contributor Author:

Thanks Ben. I'll incorporate your suggestions once #14170 is in.

Contributor Author:

made HnswKnnCollector a KnnCollector.Decorator in c6dbf7e

@tteofili (Contributor Author) commented Feb 13, 2025

updated results (Cohere-768, 200k docs, merge disabled)

baseline

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.946         1.776  200000   100      50       32        100         no    22645    12.45      16070.71            33           593.99        585.938       585.938

candidate

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.948         1.424  200000   100      50       32        100         no    19507    12.49      16014.09            33           593.98        585.938       585.938

@tteofili (Contributor Author):

reference paper

@tteofili tteofili marked this pull request as ready for review February 25, 2025 11:19
@tteofili (Contributor Author):

I've updated this so the early-termination logic no longer kicks in by default; it is instead opt-in via a (wrapping) PatienceKnnVectorQuery.

@tteofili (Contributor Author) commented Feb 25, 2025

updated lucene_util benchmarks, with different parameters (Cohere-768, ndoc=200k).

maxconn=32

baseline

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.963         1.262  200000   100      50       32        100         no     7984    61.61       3246.12             3           595.73        585.938       585.938

candidate@default

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.957         1.165  200000   100      50       32        100         no     7171    65.61       3048.36             3           595.73        585.938       585.938
 0.958         1.164  200000   100      50       32        100         no     7214    63.45       3151.99             3           595.68        585.938       585.938

candidate@sat=0.995,patience=maxconn(32)

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.952         1.082  200000   100      50       32        100         no     6623    61.71       3240.76             3           595.71        585.938       585.938
 0.952         1.119  200000   100      50       32        100         no     6682    61.04       3276.65             3           595.70        585.938       585.938

candidate@sat=0.95,patience=fanout(50)

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.908         0.781  200000   100      50       32        100         no     4499    61.82       3235.30             3           595.72        585.938       585.938
 0.909         0.779  200000   100      50       32        100         no     4498    62.09       3221.34             3           595.69        585.938       585.938

candidate@sat=0.995,patience=fanout(50)

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.960         1.178  200000   100      50       32        100         no     7361    63.65       3142.33             3           595.75        585.938       585.938
 0.960         1.195  200000   100      50       32        100         no     7441    62.18       3216.42             3           595.76        585.938       585.938

maxconn=64

baseline

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.968         1.351  200000   100      50       64        100         no     8698    63.04       3172.79             3           595.76        585.938       585.938
 0.968         1.328  200000   100      50       64        100         no     8744    62.29       3210.94             3           595.77        585.938       585.938

candidate@default

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.960         1.193  200000   100      50       64        100         no     7751    62.79       3185.27             3           595.76        585.938       585.938
 0.961         1.213  200000   100      50       64        100         no     7789    61.73       3240.02             3           595.73        585.938       585.938

candidate@sat=0.995,patience=maxconn(64)

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.965         1.282  200000   100      50       64        100         no     8364    62.86       3181.88             3           595.72        585.938       585.938
 0.964         1.274  200000   100      50       64        100         no     8361    62.42       3203.90             3           595.79        585.938       585.938

candidate@sat=0.95,patience=fanout(50)

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.913         0.863  200000   100      50       64        100         no     4945    63.00       3174.80             3           595.81        585.938       585.938
 0.916         0.797  200000   100      50       64        100         no     4965    62.70       3189.89             3           595.78        585.938       585.938

candidate@sat=0.995,patience=fanout(50)

recall  latency (ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index s  index docs/s  num segments  index size (MB)  vec disk (MB)  vec RAM (MB)
 0.962         1.226  200000   100      50       64        100         no     7991    64.13       3118.52             3           595.82        585.938       585.938
 0.963         1.236  200000   100      50       64        100         no     7859    62.52       3199.23             3           595.80        585.938       585.938

@tteofili (Contributor Author) commented Mar 20, 2025

additional experiments with different quantization levels and filtering:

No filtering

Baseline

recall  latency(ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  index(s)  index_docs/s  num_segments  index_size(MB)  vec_disk(MB)  vec_RAM(MB)  indexType
 0.985        4.620  200000   100      50       64        250         no    106.46       1878.64             3          600.08       585.938      585.938       HNSW
 0.899        3.657  200000   100      50       64        250     7 bits     67.74       2952.47             5          746.34       733.185      147.247       HNSW
 0.585        2.328  200000   100      50       64        250     4 bits     46.86       4268.03             3          675.33       659.943       74.005       HNSW
 0.983        9.212  500000   100      50       64        250         no    235.68       2121.56             8         1501.44      1464.844     1464.844       HNSW
 0.900        7.562  500000   100      50       64        250     7 bits    165.99       3012.30             9         1867.29      1832.962      368.118       HNSW
 0.580        4.934  500000   100      50       64        250     4 bits    130.65       3826.96             8         1689.29      1649.857      185.013       HNSW

Candidate

recall  latency(ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index(s)  index_docs/s  num_segments  index_size(MB)  selectivity  vec_disk(MB)  vec_RAM(MB)  indexType
 0.980        3.744  200000   100      50       64        250         no    10690    106.82       1872.29             3          600.10         1.00       585.938      585.938       HNSW
 0.896        3.473  200000   100      50       64        250     7 bits    11878     68.83       2905.54             5          746.39         1.00       733.185      147.247       HNSW
 0.585        2.032  200000   100      50       64        250     4 bits    13279     51.32       3897.12             3          675.32         1.00       659.943       74.005       HNSW
 0.982        8.549  500000   100      50       64        250         no    23079    248.29       2013.81             8         1501.32         1.00      1464.844     1464.844       HNSW
 0.898        6.733  500000   100      50       64        250     7 bits    23629    167.17       2991.02             9         1867.31         1.00      1832.962      368.118       HNSW
 0.581        3.776  500000   100      50       64        250     4 bits    21179    152.43       3280.24             5         1690.38         1.00      1649.857      185.013       HNSW

Filtering

Baseline

recall  latency(ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index(s)  index_docs/s  num_segments  index_size(MB)  selectivity  vec_disk(MB)  vec_RAM(MB)  indexType
 1.000        0.642  200000   100      50       64        250         no     1965    109.81       1821.26             3          600.16         0.01       585.938      585.938       HNSW
 0.964        4.947  200000   100      50       64        250         no     9504    110.91       1803.33             3          600.11         0.10       585.938      585.938       HNSW
 0.983        8.417  200000   100      50       64        250         no    22193    103.13       1939.28             3          600.09         0.50       585.938      585.938       HNSW
 0.918        0.762  200000   100      50       64        250     7 bits     1981     64.33       3108.82             5          746.33         0.01       733.185      147.247       HNSW
 0.892        4.310  200000   100      50       64        250     7 bits    10302     66.23       3019.87             5          746.34         0.10       733.185      147.247       HNSW
 0.898        6.900  200000   100      50       64        250     7 bits    23394     69.09       2894.82             4          746.51         0.50       733.185      147.247       HNSW
 0.660        1.137  200000   100      50       64        250     4 bits     1695     50.01       3999.44             3          675.40         0.01       659.943       74.005       HNSW
 0.619        2.852  200000   100      50       64        250     4 bits    11021     49.88       4010.03             3          675.31         0.10       659.943       74.005       HNSW
 0.592        4.429  200000   100      50       64        250     4 bits    27121     48.72       4104.75             3          675.30         0.50       659.943       74.005       HNSW
 1.000        2.371  500000   100      50       64        250         no     5017    244.18       2047.64             8         1501.36         0.01      1464.844     1464.844       HNSW
 0.968       11.976  500000   100      50       64        250         no    21270    266.14       1878.73             8         1501.19         0.10      1464.844     1464.844       HNSW
 0.987       17.191  500000   100      50       64        250         no    44939    239.83       2084.78             8         1501.26         0.50      1464.844     1464.844       HNSW
 0.913        2.024  500000   100      50       64        250     7 bits     5075    166.55       3002.17             9         1867.19         0.01      1832.962      368.118       HNSW
 0.891       10.079  500000   100      50       64        250     7 bits    21671    168.88       2960.73             9         1867.41         0.10      1832.962      368.118       HNSW
 0.899       13.733  500000   100      50       64        250     7 bits    47517    168.22       2972.25             9         1867.22         0.50      1832.962      368.118       HNSW
 0.660        1.183  500000   100      50       64        250     4 bits     5085    153.22       3263.30             5         1690.35         0.01      1649.857      185.013       HNSW
 0.598        8.365  500000   100      50       64        250     4 bits    23514    137.45       3637.69             8         1689.26         0.10      1649.857      185.013       HNSW
 0.588        9.584  500000   100      50       64        250     4 bits    48507    137.44       3638.00             8         1689.32         0.50      1649.857      185.013       HNSW

Candidate

recall  latency(ms)    nDoc  topK  fanout  maxConn  beamWidth  quantized  visited  index(s)  index_docs/s  num_segments  index_size(MB)  selectivity  vec_disk(MB)  vec_RAM(MB)  indexType
 1.000        0.618  200000   100      50       64        250         no     1685    105.74       1891.47             3          600.11         0.01       585.938      585.938       HNSW
 0.955        4.211  200000   100      50       64        250         no     8446    104.30       1917.60             3          600.09         0.10       585.938      585.938       HNSW
 0.970        6.499  200000   100      50       64        250         no    17121    106.95       1869.98             3          600.11         0.50       585.938      585.938       HNSW
 0.918        0.813  200000   100      50       64        250     7 bits     2047     69.00       2898.68             5          746.34         0.01       733.185      147.247       HNSW
 0.883        4.271  200000   100      50       64        250     7 bits     8909     70.60       2832.98             4          746.46         0.10       733.185      147.247       HNSW
 0.893        6.104  200000   100      50       64        250     7 bits    21460     69.16       2891.72             5          746.39         0.50       733.185      147.247       HNSW
 0.684        0.763  200000   100      50       64        250     4 bits     1969     49.21       4064.54             3          675.34         0.01       659.943       74.005       HNSW
 0.613        2.752  200000   100      50       64        250     4 bits     9832     50.25       3979.78             3          675.31         0.10       659.943       74.005       HNSW
 0.592        3.430  200000   100      50       64        250     4 bits    20823     48.60       4115.06             3          675.33         0.50       659.943       74.005       HNSW
 1.000        2.346  500000   100      50       64        250         no     4996    243.49       2053.51             8         1501.29         0.01      1464.844     1464.844       HNSW
 0.964       11.287  500000   100      50       64        250         no    19991    243.30       2055.08             8         1501.34         0.10      1464.844     1464.844       HNSW
 0.984       15.180  500000   100      50       64        250         no    39049    245.65       2035.38             8         1501.41         0.50      1464.844     1464.844       HNSW
 0.894        2.064  500000   100      50       64        250     7 bits     4615    175.74       2845.05             9         1867.25         0.01      1832.962      368.118       HNSW
 0.889        9.321  500000   100      50       64        250     7 bits    20292    176.89       2826.68             9         1867.15         0.10      1832.962      368.118       HNSW
 0.898       13.142  500000   100      50       64        250     7 bits    43073    167.55       2984.20             9         1867.34         0.50      1832.962      368.118       HNSW
 0.654        1.819  500000   100      50       64        250     4 bits     5024    151.40       3302.55             5         1690.48         0.01      1649.857      185.013       HNSW
 0.598        5.857  500000   100      50       64        250     4 bits    19382    155.89       3207.37             5         1690.41         0.10      1649.857      185.013       HNSW
 0.588        5.437  500000   100      50       64        250     4 bits    29505    150.84       3314.77             5         1690.41         0.50      1649.857      185.013       HNSW

The results are mostly good. I might see if I can improve the behavior with very selective filters (the 0.01 selectivity case).

@benwtrent (Member) left a comment:

Thank you for running those benchmarks. I think all the numbers look good.

My final concerns/questions are around the API.

Two ideas:

  • can we make the API more general? Seems like it could be generally useful. Maybe we kick the can here...
  • If we cannot make the API more general, or don't see the value in ever doing that, can we utilize a search strategy instead?

}

@Override
public void nextCandidate() {
Member:

@tteofili what do you think of making this more general? I think having a "nextCandidate" or "nextBlockOfVectors" is generally useful, and might be applicable to all types of kNN indices.

For example:

  • Flat, you just get called once, indicating you are searching ALL vectors
  • HNSW, you get called for each NSW (or in the case of filtered search, extended NSW)
  • IVF, you get called for each posting list
  • Vamana, you get called for each node before calling the neighbors

Do you think we can make this API general?

Maybe not, I am not sure.
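The generalization being floated here could look roughly like this (purely illustrative; the interface name and `nextVectorBlock` are hypothetical, not a committed Lucene API):

```java
/**
 * Hypothetical generalization of nextCandidate(): one callback fired before
 * each "block" of vectors is scored, whatever a block means per index type
 * (the whole segment for flat, an NSW exploration step for HNSW, a posting
 * list for IVF, a node's neighborhood for Vamana).
 */
interface VectorBlockAwareCollector {
  /** Default no-op, so existing collectors remain unaffected. */
  default void nextVectorBlock() {}
}
```

A saturation-style collector would then override the hook to update its counters, while flat search could call it exactly once and all other collectors ignore it.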

Contributor Author:

I really like this idea, Ben. I'll see if I can come up with something reasonable for that ;)

*
* @lucene.experimental
*/
public abstract class HnswKnnCollector extends KnnCollector.Decorator {
Member:

Ah, it is a little frustrating as we already have an "HNSWStrategy" and now we have an "HNSWCollector".

Could we utilize an HNSWStrategy? Or make nextCandidate a more general API?

My thought on the strategy would be that the graph searcher indicates through the strategy object when the next group of vectors will be searched, and the strategy would have a reference to the collector to which it can forward the request.

Of course, this still requires a new HnswQueueSaturationCollector, but it won't require these new base classes.
