Locality-sensitive hashing

In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability.[1] (The number of buckets is much smaller than the universe of possible input items.)[1] Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.

Hashing-based approximate nearest-neighbor search algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH).[2][3]

Locality-preserving hashing was initially devised as a way to facilitate data pipelining in implementations of massively parallel algorithms that use randomized routing and universal hashing to reduce memory contention and network congestion.[4][5]

Definitions

A finite family ℱ of functions h : M → S is defined to be an LSH family[1][6][7] for

  • a metric space ℳ = (M, d),
  • a threshold r > 0,
  • an approximation factor c > 1,
  • and probabilities p1 > p2

if it satisfies the following condition. For any two points a, b ∈ M and a hash function h chosen uniformly at random from ℱ:

  • If d(a, b) ≤ r, then h(a) = h(b) (i.e., a and b collide) with probability at least p1,
  • If d(a, b) ≥ cr, then h(a) = h(b) with probability at most p2.

Such a family ℱ is called (r, cr, p1, p2)-sensitive.

LSH with respect to a similarity measure

Alternatively[8] it is possible to define an LSH family on a universe of items U endowed with a similarity function ϕ : U × U → [0, 1]. In this setting, an LSH scheme is a family of hash functions H coupled with a probability distribution D over H such that a function h ∈ H chosen according to D satisfies Pr[h(a) = h(b)] = ϕ(a, b) for each a, b ∈ U.

Amplification

Given a (d1, d2, p1, p2)-sensitive family ℱ, we can construct new families 𝒢 by either the AND-construction or OR-construction of ℱ.[1]

To create an AND-construction, we define a new family 𝒢 of hash functions g, where each function g is constructed from k random functions h1, …, hk from ℱ. We then say that for a hash function g ∈ 𝒢, g(x) = g(y) if and only if hi(x) = hi(y) for all i = 1, 2, …, k. Since the members of ℱ are independently chosen for any g ∈ 𝒢, 𝒢 is a (d1, d2, p1^k, p2^k)-sensitive family.

To create an OR-construction, we define a new family 𝒢 of hash functions g, where each function g is constructed from k random functions h1, …, hk from ℱ. We then say that for a hash function g ∈ 𝒢, g(x) = g(y) if and only if hi(x) = hi(y) for one or more values of i. Since the members of ℱ are independently chosen for any g ∈ 𝒢, 𝒢 is a (d1, d2, 1 − (1 − p1)^k, 1 − (1 − p2)^k)-sensitive family.
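The following Python sketch illustrates both constructions. It assumes a list base_family of hash functions already drawn from an LSH family ℱ (for instance, the bit-sampling family defined below); the helper names are illustrative rather than taken from any particular library.

```python
import random

def and_construction(base_family, k, rng=random):
    """Build g(x) = (h1(x), ..., hk(x)) from k functions chosen from the base family.

    Two points collide under g only if they collide under every hi, so the
    collision probabilities p1, p2 become p1**k and p2**k."""
    hs = [rng.choice(base_family) for _ in range(k)]
    return lambda x: tuple(h(x) for h in hs)

def or_collision(base_family, k, x, y, rng=random):
    """OR-construction collision test: x and y collide if hi(x) == hi(y) for at
    least one of the k chosen functions, so p becomes 1 - (1 - p)**k."""
    hs = [rng.choice(base_family) for _ in range(k)]
    return any(h(x) == h(y) for h in hs)
```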

Applications

LSH has been applied to several problem domains, including data clustering and nearest neighbor search.

Methods

Bit sampling for Hamming distance

One of the easiest ways to construct an LSH family is by bit sampling.[7] This approach works for the Hamming distance over d-dimensional vectors {0,1}^d. Here, the family ℱ of hash functions is simply the family of all the projections of points onto one of the d coordinates, i.e., ℱ = {h : {0,1}^d → {0,1} | h(x) = xi for some i ∈ {1, …, d}}, where xi is the i-th coordinate of x. A random function h from ℱ simply selects a random bit from the input point. This family has the following parameters: P1 = 1 − R/d, P2 = 1 − cR/d. That is, any two vectors x, y with Hamming distance at most R collide under a random h with probability at least P1. Any x, y with Hamming distance at least cR collide with probability at most P2.
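A minimal Python sketch of this family, assuming input points are given as equal-length strings over {0, 1}; the function name is illustrative.

```python
import random

def sample_bit_hash(d, rng=random):
    """Draw h from the bit-sampling family: h(x) = x[i] for a random coordinate i."""
    i = rng.randrange(d)
    return lambda x: x[i]

# Two vectors at Hamming distance R agree on d - R coordinates, so a random h
# maps them to the same value with probability 1 - R/d (the P1 above).
x = "10110100"
y = "10110001"            # Hamming distance 3 from x, with d = 8
h = sample_bit_hash(len(x))
print(h(x) == h(y))       # True with probability 1 - 3/8
```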

Min-wise independent permutations

Suppose U is composed of subsets of some ground set of enumerable items S and the similarity function of interest is the Jaccard index J. If π is a permutation on the indices of S, for A ⊆ S let h(A) = min{π(a) : a ∈ A}. Each possible choice of π defines a single hash function h mapping input sets to elements of S.

Define the function family H to be the set of all such functions and let D be the uniform distribution. Given two sets A, B ⊆ S, the event that h(A) = h(B) corresponds exactly to the event that the minimizer of π over A ∪ B lies inside A ∩ B. As h was chosen uniformly at random, Pr[h(A) = h(B)] = J(A, B) and (H, D) define an LSH scheme for the Jaccard index.
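The Python sketch below illustrates this scheme. Instead of a truly random permutation π (impractical for large ground sets, as discussed next), each coordinate uses a seeded relabelling of the items via Python's built-in hash; the helper names are illustrative.

```python
import random

def minhash_signature(item_set, num_hashes=128, seed=0):
    """One coordinate per 'permutation': the minimum relabelled value over the set."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_hashes)]
    return [min(hash((salt, item)) for item in item_set) for salt in salts]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of agreeing coordinates estimates J(A, B)."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

A = {"the", "quick", "brown", "fox"}
B = {"the", "quick", "brown", "dog"}
print(estimated_jaccard(minhash_signature(A), minhash_signature(B)))  # close to 3/5
```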

Because the symmetric group on n elements has size n!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized n. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent", meaning a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen π. It has been established that a min-wise independent family of permutations must have size at least lcm{1, 2, …, n} ≥ e^(n − o(n)),[19] and that this bound is tight.[20]

Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most k.[21] Approximate min-wise independence differs from the property by at most a fixed ε.[22]

Open source methods

Nilsimsa Hash

Nilsimsa is a locality-sensitive hashing algorithm used in anti-spam efforts.[23] The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. The paper suggests that Nilsimsa satisfies three requirements:

  1. The digest identifying each message should not vary significantly for changes that can be produced automatically.
  2. The encoding must be robust against intentional attacks.
  3. The encoding should support an extremely low risk of false positives.

Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash.[24]

TLSH

TLSH is a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications.[17] The goal of TLSH is to generate hash digests for messages such that low distances between digests indicate that their corresponding messages are likely to be similar.

An implementation of TLSH is available as open-source software.[25]

Random projection

[Figure: θ(u,v)/π is proportional to 1 − cos(θ(u,v)) on the interval [0, π].]

The random projection method of LSH due to Moses Charikar,[8] called SimHash (also sometimes called arccos[26]), uses an approximation of the cosine distance between vectors. The technique was used to approximate the NP-complete max-cut problem.[8]

The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector r) at the outset and use the hyperplane to hash input vectors.

Given an input vector v and a hyperplane defined by r, we let h(v) = sgn(v · r). That is, h(v) = ±1 depending on which side of the hyperplane v lies. This way, each possible choice of a random hyperplane r can be interpreted as a hash function h(v).

For two vectors u, v with angle θ(u, v) between them, it can be shown that

Pr[h(u) = h(v)] = 1 − θ(u, v)/π.

Since the ratio between θ(u, v)/π and 1 − cos(θ(u, v)) is at least 0.439 when θ(u, v) ∈ [0, π],[8][27] the probability of two vectors being on different sides of the random hyperplane is approximately proportional to the cosine distance between them.
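A minimal NumPy sketch of this scheme: each hash function takes the sign of the dot product with a random Gaussian vector, and the empirical collision rate over many hyperplanes approximates 1 − θ(u, v)/π. The variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def simhash(v, planes):
    """One ±1 bit per hyperplane: the side of the hyperplane on which v lies."""
    return np.sign(planes @ v)

d, num_planes = 5, 10000
planes = rng.standard_normal((num_planes, d))   # rows are random normal vectors r

u = rng.standard_normal(d)
v = u + 0.3 * rng.standard_normal(d)            # a perturbed copy of u

agree = np.mean(simhash(u, planes) == simhash(v, planes))
theta = np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
print(agree, 1 - theta / np.pi)                 # the two values should be close
```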

Stable distributions

The hash function[28] h𝐚,b(υ) : ℝ^d → ℤ maps a d-dimensional vector υ onto the set of integers. Each hash function in the family is indexed by a choice of random 𝐚 and b, where 𝐚 is a d-dimensional vector with entries chosen independently from a stable distribution and b is a real number chosen uniformly from the range [0, r]. For a fixed 𝐚, b the hash function h𝐚,b is given by h𝐚,b(υ) = ⌊(𝐚 · υ + b)/r⌋.
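A minimal NumPy sketch of one such hash function, using the Gaussian distribution (which is 2-stable, so the resulting scheme targets Euclidean distance); the parameter names follow the formula above and the helper name is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stable_hash(d, r, rng=rng):
    """Sample a from a 2-stable (Gaussian) distribution and b uniformly from [0, r];
    return h(v) = floor((a . v + b) / r)."""
    a = rng.standard_normal(d)
    b = rng.uniform(0, r)
    return lambda v: int(np.floor((a @ v + b) / r))

h = make_stable_hash(d=10, r=4.0)
v = rng.standard_normal(10)
print(h(v), h(v + 0.01))   # nearby vectors usually land in the same integer bucket
```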

Other construction methods for hash functions have been proposed to better fit the data.[29] In particular, k-means hash functions are better in practice than projection-based hash functions, but without any theoretical guarantee.

Semantic hashing

Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have higher semantic similarity.[30] The hash codes are found via training of an artificial neural network or graphical model.

LSH algorithm for nearest neighbor search

One of the main applications of LSH is to provide a method for efficient approximate nearest neighbor search algorithms. Consider an LSH family ℱ. The algorithm has two main parameters: the width parameter k and the number of hash tables L.

In the first step, we define a new family 𝒢 of hash functions g, where each function g is obtained by concatenating k functions h1, …, hk from ℱ, i.e., g(p) = [h1(p), …, hk(p)]. In other words, a random hash function g is obtained by concatenating k randomly chosen hash functions from ℱ. The algorithm then constructs L hash tables, each corresponding to a different randomly chosen hash function g.

In the preprocessing step we hash all n d-dimensional points from the data set S into each of the L hash tables. Given that the resulting hash tables have only n non-zero entries, one can reduce the amount of memory used per hash table to O(n) using standard hash functions.

Given a query point q, the algorithm iterates over the L hash functions g. For each g considered, it retrieves the data points that are hashed into the same bucket as q. The process is stopped as soon as a point within distance cR from q is found.
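A minimal Python sketch of this indexing and query procedure for Hamming space, built on the bit-sampling family from above; the variable names (k, L, and so on) follow the text and the code is illustrative rather than optimized.

```python
import random
from collections import defaultdict

def build_index(data, d, k, L, rng=random):
    """Create L hash tables; each g concatenates k sampled coordinates."""
    gs = [[rng.randrange(d) for _ in range(k)] for _ in range(L)]
    tables = [defaultdict(list) for _ in range(L)]
    for p in data:
        for g, table in zip(gs, tables):
            table[tuple(p[i] for i in g)].append(p)
    return gs, tables

def query(q, gs, tables, max_dist):
    """Scan the L buckets that q falls into; stop at the first point close enough."""
    for g, table in zip(gs, tables):
        for p in table.get(tuple(q[i] for i in g), []):
            if sum(a != b for a, b in zip(p, q)) <= max_dist:
                return p
    return None

data = ["10110100", "01100011", "11110000"]
gs, tables = build_index(data, d=8, k=4, L=10)
print(query("10110101", gs, tables, max_dist=2))   # likely returns "10110100"
```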

Given the parameters k and L, the algorithm has the following performance guarantees:

  • preprocessing time: O(nLkt), where t is the time to evaluate a function h ∈ ℱ on an input point p;
  • space: O(nL), plus the space for storing data points;
  • query time: O(L(kt + dnP2^k));
  • the algorithm succeeds in finding a point within distance cR from q (if there exists a point within distance R) with probability at least 1 − (1 − P1^k)^L;

For a fixed approximation ratio c = 1 + ε and probabilities P1 and P2, one can set k = log n/log(1/P2) and L = P1^(−k) = O(n^ρ P1^(−1)), where ρ = log P1/log P2. Then one obtains the following performance guarantees (a numeric sketch of this parameter choice follows the list):

  • preprocessing time: O(n^(1+ρ) P1^(−1) kt);
  • space: O(n^(1+ρ) P1^(−1)), plus the space for storing data points;
  • query time: O(n^ρ P1^(−1) (kt + d));
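A small numeric sketch of this parameter choice, under assumed values n = 1,000,000, P1 = 0.8 and P2 = 0.4 (the numbers are illustrative only):

```python
import math

n, P1, P2 = 1_000_000, 0.8, 0.4

rho = math.log(P1) / math.log(P2)              # rho = log P1 / log P2, about 0.24
k = math.ceil(math.log(n) / math.log(1 / P2))  # k = log n / log(1/P2), rounded up
L = math.ceil(P1 ** -k)                        # L = P1^(-k) = O(n^rho / P1)

print(rho, k, L)                               # roughly 0.24, 16, 36
```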

Improvements

When t is large, it is possible to reduce the hashing time from O(n^ρ). This was shown in[31] and[32], which give

  • query time: O(t log^2(1/P2)/P1 + n^ρ(d + 1/P1));
  • space: O(n^(1+ρ)/P1 + log^2(1/P2)/P1);

It is also sometimes the case that the factor 1/P1 can be very large. This happens, for example, with Jaccard similarity data, where even the most similar neighbor often has quite low Jaccard similarity with the query. In[33] it was shown how to reduce the query time to O(n^ρ/P1^(1−ρ)) (not including hashing costs) and similarly the space usage.

See also

References

  1. Template:Cite web
  2. Template:Cite conference
  3. Template:Cite book
  4. Template:Cite thesis
  5. Template:Cite journal
  6. Template:Cite journal
  7. Template:Cite conference
  8. Template:Cite conference
  9. Template:Citation.
  10. Template:Citation.
  11. Template:Citation.
  12. Template:Citation
  13. Template:Citation
  14. Template:Citation
  15. Template:Cite arXiv
  16. Template:Citation
  17. Template:Cite conference
  18. Template:Citation
  19. Template:Cite conference
  20. Template:Cite journal
  21. Template:Cite journal
  22. Template:Cite journal
  23. Template:Cite web
  24. Template:Cite journal
  25. Template:Cite web
  26. Template:Cite journal
  27. Template:Cite journal
  28. Template:Cite journal
  29. Template:Cite journal
  30. Template:Cite journal
  31. Dahlgaard, Søren, Mathias Bæk Tejs Knudsen, and Mikkel Thorup. "Fast similarity sketching." 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2017.
  32. Christiani, Tobias. "Fast locality-sensitive hashing frameworks for approximate near neighbor search." International Conference on Similarity Search and Applications. Springer, Cham, 2019.
  33. Ahle, Thomas Dybdahl. "On the Problem of p1^(−1) in Locality-Sensitive Hashing." International Conference on Similarity Search and Applications. Springer, Cham, 2020.
  34. Gorman, James, and James R. Curran. "Scaling distributional similarity to large corpora." Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2006.