This repository was archived by the owner on May 1, 2020. It is now read-only.

[MRG] Solve some pep8/flake8 issues & some code refactoring #25

Open · wants to merge 10 commits into base: master
34 changes: 34 additions & 0 deletions python/DESCRIPTION.md
@@ -0,0 +1,34 @@
# Locally Optimized Product Quantization

This is Python training and testing code for Locally Optimized Product Quantization (LOPQ) models, as well as Spark scripts to scale training to hundreds of millions of vectors. The resulting model can be used in Python with code provided here or deployed via a Protobuf format to, e.g., search backends for high performance approximate nearest neighbor search.

### Overview

Locally Optimized Product Quantization (LOPQ) [1] is a hierarchical quantization algorithm that produces codes of configurable length for data points. These codes are efficient representations of the original vector and can be used in a variety of ways depending on application, including as hashes that preserve locality, as a compressed vector from which an approximate vector in the data space can be reconstructed, and as a representation from which to compute an approximation of the Euclidean distance between points.

Conceptually, the LOPQ quantization process can be broken into 4 phases. The training process also fits these phases to the data in the same order.

1. The raw data vector is PCA'd to `D` dimensions (possibly the original dimensionality). This allows subsequent quantization to more efficiently represent the variation present in the data.
2. The PCA'd data is then product quantized [2] by two k-means quantizers. This means that each vector is split into two subvectors each of dimension `D / 2`, and each of the two subspaces is quantized independently with a vocabulary of size `V`. Since the two quantizations occur independently, the dimensions of the vectors are permuted such that the total variance in each of the two subspaces is approximately equal, which allows the two vocabularies to be equally important in terms of capturing the total variance of the data. This results in a pair of cluster ids that we refer to as "coarse codes".
3. The residuals of the data after coarse quantization are computed. The residuals are then locally projected independently for each coarse cluster. This projection is another application of PCA and dimension permutation on the residuals and it is "local" in the sense that there is a different projection for each cluster in each of the two coarse vocabularies. These local rotations make the next and final step, another application of product quantization, very efficient in capturing the variance of the residuals.
4. The locally projected data is then product quantized a final time by `M` subquantizers, resulting in `M` "fine codes". Usually the vocabulary for each of these subquantizers will be a power of 2 for effective storage in a search index. With vocabularies of size 256, the fine codes for each indexed vector will require `M` bytes to store in the index.

The final LOPQ code for a vector is a `(coarse codes, fine codes)` pair, e.g. `((3, 2), (14, 164, 83, 49, 185, 29, 196, 250))`.
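
For concreteness, the snippet below sketches how a model is trained and a vector encoded with the Python API in this package. It assumes the `LOPQModel` class exposes `fit` and `predict` as used elsewhere in the repository and that `predict` returns the `(coarse codes, fine codes)` pair described above; the parameter values are arbitrary.

```python
import numpy as np
from lopq import LOPQModel

# Toy data standing in for real feature vectors: 10,000 points in 128 dims.
data = np.random.rand(10000, 128)

# V coarse clusters per coarse subquantizer, M fine subquantizers.
model = LOPQModel(V=16, M=8)
model.fit(data)

# Encode one vector; the result is a (coarse codes, fine codes) pair,
# e.g. ((3, 2), (14, 164, 83, 49, 185, 29, 196, 250)).
code = model.predict(data[0])
```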

### Nearest Neighbor Search

A nearest neighbor index can be built from these LOPQ codes by indexing each document into its corresponding coarse code bucket. That is, each pair of coarse codes (which we refer to as a "cell") will index a bucket of the vectors quantizing to that cell.
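
A minimal sketch of building such an index with the searcher shipped in this package, assuming a fitted `model` and training `data` as in the snippet above and that `LOPQSearcher.add_data` buckets each vector by its cell:

```python
from lopq import LOPQSearcher

# Index the dataset: each vector is quantized and appended to the bucket
# keyed by its pair of coarse codes (its "cell").
searcher = LOPQSearcher(model)
searcher.add_data(data)
```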

At query time, an incoming query vector undergoes substantially the same process. First, the query is split into coarse subvectors and the distance to each coarse centroid is computed. These distances can be used to efficiently compute a priority-ordered sequence of cells [3] such that cells later in the sequence are less likely to have near neighbors of the query than earlier cells. The items in cell buckets are retrieved in this order until some desired quota has been met.

After this retrieval phase, the fine codes are used to rank by approximate Euclidean distance. The query is projected into each local space and the distance to each indexed item is estimated as the sum of the squared distances of the query subvectors to the corresponding subquantizer centroid indexed by the fine codes.
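
Putting the two query-time steps together, here is a hedged sketch of searching the index built above; the `quota` keyword and the `(results, cells visited)` return convention are assumptions about the searcher's interface rather than guarantees.

```python
# Query with a new vector: cells are visited in priority order until
# roughly `quota` candidates are gathered, then candidates are reranked
# by the approximate Euclidean distance computed from their fine codes.
query = np.random.rand(128)
results, visited = searcher.search(query, quota=100)

# Results come back ordered by approximate distance, nearest first.
top_10 = list(results)[:10]
```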

NN search with LOPQ is highly scalable and, when implemented well, offers low index storage requirements and low query-time latencies.

#### References

More information and performance benchmarks can be found at http://image.ntua.gr/iva/research/lopq/.

1. Y. Kalantidis, Y. Avrithis. [Locally Optimized Product Quantization for Approximate Nearest Neighbor Search.](http://image.ntua.gr/iva/files/lopq.pdf) CVPR 2014.
2. H. Jegou, M. Douze, and C. Schmid. [Product quantization for nearest neighbor search.](https://lear.inrialpes.fr/pubs/2011/JDS11/jegou_searching_with_quantization.pdf) PAMI, 33(1), 2011.
3. A. Babenko and V. Lempitsky. [The inverted multi-index.](http://www.computer.org/csdl/trans/tp/preprint/06915715.pdf) CVPR 2012.
3 changes: 2 additions & 1 deletion python/lopq/__init__.py

@@ -1,5 +1,6 @@
 # Copyright 2015, Yahoo Inc.
-# Licensed under the terms of the Apache License, Version 2.0. See the LICENSE file associated with the project for terms.
+# Licensed under the terms of the Apache License, Version 2.0.
+# See the LICENSE file associated with the project for terms.
 import model
 import search
 import utils
32 changes: 19 additions & 13 deletions python/lopq/eval.py

@@ -1,21 +1,22 @@
 # Copyright 2015, Yahoo Inc.
-# Licensed under the terms of the Apache License, Version 2.0. See the LICENSE file associated with the project for terms.
+# Licensed under the terms of the Apache License, Version 2.0.
+# See the LICENSE file associated with the project for terms.
 import time
 import numpy as np
 
 
 def compute_all_neighbors(data1, data2=None, just_nn=True):
-    """
-    For each point in data1, compute a ranked list of neighbor indices from data2.
-    If data2 is not provided, compute neighbors relative to data1
+    """ For each point in data1, compute a ranked list of neighbor indices
+    from data2. If data2 is not provided, compute neighbors relative to data1
 
     :param ndarray data1:
         an m1 x n dim matrix with observations on the rows
     :param ndarray data2:
         an m2 x n dim matrix with observations on the rows
 
     :returns ndarray:
-        an m1 x m2 dim matrix with the distance-sorted indices of neighbors on the rows
+        an m1 x m2 dim matrix with the distance-sorted indices of neighbors
+        on the rows
     """
     from scipy.spatial.distance import cdist
 
@@ -89,10 +90,11 @@ def get_proportion_of_reconstructions_with_same_codes(data, model):
     return float(count) / N
 
 
-def get_recall(searcher, queries, nns, thresholds=[1, 10, 100, 1000], normalize=True, verbose=False):
-    """
-    Given a LOPQSearcher object with indexed data and groundtruth nearest neighbors for a set of test
-    query vectors, collect and return recall statistics.
+def get_recall(searcher, queries, nns, thresholds=[1, 10, 100, 1000],
+               normalize=True, verbose=False):
+    """ Given a LOPQSearcher object with indexed data and groundtruth nearest
+    neighbors for a set of test query vectors, collect and return recall
+    statistics.
 
     :param LOPQSearcher searcher:
         a searcher that contains the indexed nearest neighbors
@@ -101,10 +103,11 @@ def get_recall(searcher, queries, nns, thresholds=[1, 10, 100, 1000], normalize=
     :param ndarray nns:
         a list of true nearest neighbor ids for each vector in queries
     :param list thresholds:
-        the recall thresholds to evaluate - the last entry defines the number of
-        results to retrieve before ranking
+        the recall thresholds to evaluate - the last entry defines the number
+        of results to retrieve before ranking
     :param bool normalize:
-        flag to indicate whether the result should be normalized by the number of queries
+        flag to indicate whether the result should be normalized by the number
+        of queries
     :param bool verbose:
         flag to print every 50th search to visualize progress
 
@@ -156,6 +159,9 @@ def get_subquantizer_distortion(data, model):
     pall = np.concatenate((p1, p2), axis=1)
     suball = model.subquantizers[0] + model.subquantizers[1]
 
-    dists = np.array([sum(np.linalg.norm(compute_residuals(d, c)[0], ord=2, axis=1) ** 2) for c, d in zip(suball, np.split(pall, 8, axis=1))])
+    dists = np.array([
+        sum(np.linalg.norm(compute_residuals(d, c)[0], ord=2, axis=1) ** 2)
+        for c, d in zip(suball, np.split(pall, 8, axis=1))
+    ])
 
     return dists / data.shape[0]