Similarity Search, Part 2: Product Quantization


Learn a powerful technique to effectively compress large data

Similarity search is a problem where, given a query, the goal is to find the most similar documents to it among all the database documents.


In data science, similarity search often appears in the NLP domain, search engines or recommender systems where the most relevant documents or items need to be retrieved for a query. There exists a large variety of different ways to improve search performance over massive volumes of data.

In the first part of this article series, we looked at kNN and the inverted file index structure for performing similarity search. As we learned, kNN is the most straightforward approach, while an inverted file index acts on top of it, offering a trade-off between speed acceleration and accuracy. However, neither method uses data compression techniques, which can lead to memory issues, especially in cases of large datasets and limited RAM. In this article, we will try to address this issue with another method called Product Quantization.

Similarity Search, Part 1: kNN & Inverted File Index


Product quantization is the process where each dataset vector is converted into a short memory-efficient representation (called a PQ code). Instead of fully keeping all the vectors, their short representations are stored. At the same time, product quantization is a lossy-compression method which results in lower prediction accuracy, but in practice this algorithm works very well.

In general, quantization is the process of mapping infinite values to discrete ones.


First, the algorithm divides each vector into several equal parts, called subvectors. The respective parts of all dataset vectors form independent subspaces and are processed separately. Then a clustering algorithm is run on each subspace of subvectors. This creates several centroids in each subspace. Each subvector is encoded with the ID of the centroid it belongs to. Additionally, the coordinates of all centroids are stored for later use.

Subspace centroids are also called quantized vectors.

In product quantization, a cluster ID is often referred to as a reproduction value.

Note. In the figures below, a rectangle represents a vector containing several values, while a square indicates a single number.

Encoding using quantization

As a result, if an original vector is divided into n parts, then it can be encoded by n numbers: the IDs of the respective centroids for each of its subvectors. Typically, the number of created centroids k is chosen as a power of two for more efficient memory usage. This way, the memory required to store an encoded vector is n * log(k) bits.

The collection of all centroids inside a subspace is called a codebook. Running n clustering algorithms for all subspaces produces n separate codebooks.
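The subspace clustering and encoding described above can be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions: the helper names and the simplified Lloyd-style k-means are inventions of this sketch, not part of any library.

```python
import numpy as np

def train_codebooks(data, n_sub, k, n_iter=20, seed=0):
    """Train one k-means codebook per subspace (minimal Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    sub_dim = data.shape[1] // n_sub
    codebooks = []
    for s in range(n_sub):
        sub = data[:, s * sub_dim:(s + 1) * sub_dim]
        # initialise centroids from random dataset subvectors
        centroids = sub[rng.choice(len(sub), k, replace=False)]
        for _ in range(n_iter):
            # assign each subvector to its nearest centroid
            dists = ((sub[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
            labels = dists.argmin(1)
            # move each centroid to the mean of its assigned subvectors
            for j in range(k):
                if (labels == j).any():
                    centroids[j] = sub[labels == j].mean(0)
        codebooks.append(centroids)
    return codebooks

def encode(data, codebooks):
    """Replace each subvector with the ID of its nearest centroid (the PQ code)."""
    n_sub = len(codebooks)
    sub_dim = data.shape[1] // n_sub
    codes = np.empty((len(data), n_sub), dtype=np.uint8)
    for s, cb in enumerate(codebooks):
        sub = data[:, s * sub_dim:(s + 1) * sub_dim]
        dists = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        codes[:, s] = dists.argmin(1)
    return codes
```

Each row of `codes` is a PQ code: n small integers instead of the full float vector.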

Compression example

Imagine an original vector of size 1024 which stores floats (32 bits) was divided into n = 8 subvectors, where each subvector is encoded by one of k = 256 clusters. Therefore, encoding the ID of a single cluster requires log(256) = 8 bits. Let us compare the memory sizes for the vector representation in both cases:

  • Original vector: 1024 * 32 bits = 4096 bytes.
  • Encoded vector: 8 * 8 bits = 8 bytes.

The final compression is 512 times! This is the real power of product quantization.
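The arithmetic above is easy to verify directly:

```python
original_bits = 1024 * 32   # 1024 float32 values
encoded_bits = 8 * 8        # 8 subvectors, 8 bits (256 clusters) each

print(original_bits // 8)              # original size in bytes: 4096
print(encoded_bits // 8)               # encoded size in bytes: 8
print(original_bits // encoded_bits)   # compression factor: 512
```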

Quantization example. The numbers in the vectors show how many values each one stores.

Here are some important notes:

  • The algorithm can be trained on one subset of vectors (e.g., to create clusters) and be used for another one: once the algorithm is trained, another dataset of vectors is passed in, and the new vectors are encoded using the centroids already constructed for each subspace.
  • Typically, k-means is chosen as the clustering algorithm. One of its advantages is that the number of clusters k is a hyperparameter that can be defined manually, according to memory usage requirements.


To get a better understanding, let us first look at several naive approaches and find out their downsides. This will also help us realize why they should not normally be used.

Naive approaches

The first naive approach consists of decompressing all vectors by concatenating the corresponding centroids of each vector. After that, the L2 distance (or another metric) can be calculated from a query vector to all the dataset vectors. Obviously, this method works, but it is very time-consuming because a brute-force search is performed and the distance calculation is carried out on high-dimensional decompressed vectors.
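This first naive approach can be sketched as follows. The function name and layout here are illustrative assumptions; the point is only that the search runs on full-dimensional reconstructed vectors.

```python
import numpy as np

def search_by_decompression(query, codes, codebooks):
    """Naive approach: rebuild approximate full vectors by concatenating
    the centroids referenced by each PQ code, then brute-force L2 search."""
    # cb[codes[:, i]] selects, for subspace i, the centroid of every database vector
    decompressed = np.hstack([cb[codes[:, i]] for i, cb in enumerate(codebooks)])
    dists = np.linalg.norm(decompressed - query, axis=1)
    return dists.argsort()   # database indices, nearest first
```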

Another possible way is to split a query vector into subvectors and compute a sum of distances from each query subvector to the respective quantized vectors of a database vector, based on its PQ code. As a consequence, the brute-force search technique is used again, and the distance calculation here still requires time linear in the original vectors' dimensionality, as in the previous case.

Calculating approximate distance using a naive approach. The example is shown for Euclidean distance as the metric.

Another possible method is to encode the query vector into a PQ code. This PQ code is then used directly to calculate distances to all the other PQ codes. The dataset vector whose PQ code has the shortest distance is then considered the nearest neighbour to the query. This approach is faster than the previous two because the distance is always computed between low-dimensional PQ codes. However, PQ codes are composed of cluster IDs which do not have much semantic meaning and can be viewed as categorical variables explicitly used as real variables. Clearly, this is bad practice, and this method can lead to poor prediction quality.

Optimized approach

A query vector is divided into subvectors. For each of its subvectors, distances to all the centroids of the corresponding subspace are computed. Ultimately, this information is stored in a table d.

Obtaining a table d storing partial query subvector-to-centroid distances

Calculated subvector-to-centroid distances are often referred to as partial distances.

By using this subvector-to-centroid distance table d, the approximate distance from the query to any database vector can be easily obtained from its PQ codes:

  1. For each subvector of a database vector, the closest centroid j is found (using the mapping values from the PQ codes) and the partial distance d[i][j] from that centroid to the query subvector i (using the calculated matrix d) is taken.
  2. All the partial distances are squared and summed up. By taking the square root of this value, the approximate Euclidean distance is obtained. If you want to know how to get approximate results for other metrics as well, navigate to the section below, "Approximation of other distance metrics".
Computing the distance from a query to a database vector by using the PQ code and the distance table

Using this method for calculating approximate distances assumes that the partial distances d are very close to the actual distances a between the query and database subvectors.

Nevertheless, this condition may not be satisfied, especially when the distance c between a database subvector and its centroid is large. In such cases, calculations result in lower accuracy.

The example on the left shows a good case of approximation where the actual distance is very close to the partial distance (c is small). On the right, we can observe a bad scenario because the partial distance is much longer than the actual distance (c is large).

After we have obtained approximate distances for all database rows, we search for the vectors with the smallest values. These vectors will be the nearest neighbours to the query.

Approximation of other distance metrics

So far we have looked at how to approximate Euclidean distance using partial distances. Let us generalize the rule for other metrics as well.

Imagine we would like to calculate a distance metric between a pair of vectors. If we know the metric's formula, we can directly apply it to get the result. But sometimes we can do it by parts in the following manner:

  • Both vectors are divided into n subvectors.
  • For each pair of respective subvectors, the distance metric is calculated.
  • The calculated n metrics are then combined to produce the actual distance between the original vectors.
The figure shows two ways of calculating a metric. On the left, the metric formula is directly applied to both vectors. On the right, partial distances are calculated for each pair of respective subvectors. They are then combined using the aggregation functions h, g and f.

Euclidean distance is an example of a metric which can be calculated by parts. Based on the figure above, we can choose the aggregation functions to be h(z) = z², g(z₀, z₁, …, zₙ) = sum(z₀, z₁, …, zₙ) and f(z) = √z.

Euclidean distance can be calculated by parts

Inner product is another example of such a metric, with aggregation functions h(z) = z, g(z₀, z₁, …, zₙ) = sum(z₀, z₁, …, zₙ) and f(z) = z.
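The by-parts property for both metrics can be checked numerically on random vectors (the subvector size of 2 is an arbitrary choice for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(8), rng.standard_normal(8)

# full Euclidean distance
full = np.sqrt(((x - y) ** 2).sum())

# by parts: h(z) = z^2 per subvector pair, g = sum, f = sqrt
parts = [((x[i:i + 2] - y[i:i + 2]) ** 2).sum() for i in range(0, 8, 2)]
by_parts = np.sqrt(sum(parts))
assert np.isclose(full, by_parts)

# inner product by parts: h(z) = z, g = sum, f = identity
ip_parts = sum(x[i:i + 2] @ y[i:i + 2] for i in range(0, 8, 2))
assert np.isclose(ip_parts, x @ y)
```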

In the context of product quantization, this is a very important property because during inference the algorithm calculates distances by parts. This means that it would be much more problematic to use metrics for product quantization that do not have this property. Cosine distance is an example of such a metric.

If there is still a need to use a metric without this property, then additional heuristics need to be applied to aggregate the partial distances with some error.


The main advantage of product quantization is the massive compression of database vectors, which are stored as short PQ codes. For some applications, the compression rate may be even higher than 95%! However, apart from the PQ codes, the matrix d of size k x n containing the quantized vectors of each subspace needs to be stored.

Product quantization is a lossy-compression method, so the higher the compression, the more likely it is that prediction accuracy will decrease.

Building a system for efficient representation requires training several clustering algorithms. Apart from that, during inference, k * n partial distances need to be calculated in a brute-force fashion and summed up for each of the database vectors, which may take a while.

Product quantization performance

Faiss implementation

Faiss (Facebook AI Similarity Search) is a Python library written in C++ used for optimised similarity search. This library presents different types of indexes, which are data structures used to efficiently store the data and perform queries.

Based on the information from the Faiss documentation, we will see how product quantization is applied.

Product quantization is implemented in the IndexPQ class. For initialisation, we need to provide it 3 parameters:

  • d: number of dimensions in the data.
  • M: number of splits for each vector (the same parameter as n used above).
  • nbits: number of bits it takes to encode a single cluster ID. This means that the total number of clusters in a single subspace will be equal to k = 2^nbits.

For equal splitting of subspace dimensions, the parameter d must be divisible by M.

The total number of bytes required to store a single vector is equal to:

M * nbits / 8

As we can see in the formula above, for more efficient memory usage the value of M * nbits should be divisible by 8.

Faiss implementation of IndexPQ


We have looked through a very popular algorithm in information retrieval systems that efficiently compresses large volumes of data. Its main downside is slow inference speed. Despite this fact, the algorithm is widely used in modern Big Data applications, especially in combination with other similarity search techniques.

In the first part of the article series, we described the workflow of the inverted file index. In fact, we can merge these two algorithms into a more efficient one which possesses the advantages of both! This is exactly what we are going to do in the next part of this series.

Similarity Search, Part 3: Merging Inverted File Index and Product Quantization


All images unless otherwise noted are by the author.


Similarity Search, Part 2: Product Quantization was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.


