How to understand Locality Sensitive Hashing? [closed]


The best tutorial I have seen for LSH is in the book *Mining of Massive Datasets*. Check Chapter 3, "Finding Similar Items": http://infolab.stanford.edu/~ullman/mmds/ch3a.pdf

I also recommend these slides: http://www.cs.jhu.edu/%7Evandurme/papers/VanDurmeLallACL10-slides.pdf. The example in the slides helped me a lot in understanding hashing for cosine similarity.

I have borrowed two slides from Benjamin Van Durme & Ashwin Lall, ACL 2010, and will try to explain the intuition behind LSH families for cosine distance a bit.

(figure: two data points, a pool of random planes, and the resulting bit signatures)

  • In the figure, there are two circles, colored red and yellow, representing two two-dimensional data points. We are trying to find their cosine similarity using LSH.
  • The gray lines are planes picked uniformly at random.
  • Depending on whether a data point lies above or below a gray line, we record this relation as a 0 or a 1.
  • In the upper-left corner there are two rows of white/black squares, representing the signatures of the two data points. Each square corresponds to one bit: 0 (white) or 1 (black).
  • So once you have a pool of planes, you can encode each data point by its position relative to the planes; a small Python sketch of this encoding step follows the list. The more planes in the pool, the closer the angular difference encoded in the signature gets to the actual difference, because only planes that fall between the two points give the two points different bit values.
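Here is a minimal sketch of that encoding step, assuming NumPy; the point coordinates, the plane count, and the `signature` helper are made up for illustration (this is not the gist linked further down):

```python
import numpy as np

rng = np.random.default_rng(42)

def signature(point, planes):
    # One bit per plane: 1 if the point lies on the positive side of the
    # plane, i.e. the dot product with the plane's normal is non-negative.
    return (planes @ point >= 0).astype(np.uint8)

red = np.array([1.0, 0.5])     # the "red" point in the figure (made up)
yellow = np.array([0.8, 0.9])  # the "yellow" point (made up)

# Six planes through the origin, each represented by its normal vector;
# Gaussian normals make the plane directions uniformly random.
planes = rng.normal(size=(6, 2))

print(signature(red, planes))     # e.g. [1 0 1 1 0 1]
print(signature(yellow, planes))  # bits differ only for separating planes
```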

(figure: the two six-bit signatures compared bit by bit)

  • Now we look at the signatures of the two data points. As in the example, we use only 6 bits (squares) to represent each point; this is the LSH signature of the original data.
  • The Hamming distance between the two hashed values is 1, because their signatures differ by only one bit.
  • Considering the length of the signature, we can estimate their angular difference as θ ≈ π × (Hamming distance) / (signature length), and hence the cosine similarity cos(θ), as shown in the slide; a runnable sketch follows this list.
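Here is a hedged sketch of that estimate, under the standard random-hyperplane analysis (a random plane separates two vectors with probability θ/π, so the fraction of differing bits estimates θ/π); the points, helper names, and plane counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signatures(a, b, n_planes):
    # Encode both points against the same pool of random planes.
    planes = rng.normal(size=(n_planes, len(a)))
    return planes @ a >= 0, planes @ b >= 0

def estimated_cosine(sig_a, sig_b):
    # Fraction of differing bits estimates theta / pi.
    hamming = np.count_nonzero(sig_a != sig_b)
    theta = np.pi * hamming / len(sig_a)
    return np.cos(theta)

a = np.array([1.0, 0.5])
b = np.array([0.8, 0.9])

# 6 bits, as in the slides: a coarse estimate.
print(estimated_cosine(*make_signatures(a, b, 6)))

# 10,000 bits: the estimate tightens around the true value.
print(estimated_cosine(*make_signatures(a, b, 10_000)))

# Exact cosine similarity, for comparison.
print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

With only six bits the estimate is coarse; adding planes makes it converge on the exact cosine similarity, which is the intuition from the list above.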

I have some sample Python code (just 50 lines) here that uses cosine similarity: https://gist.github.com/94a3d425009be0f94751


Tweets in vector space are a great example of high-dimensional data.

Check out my blog post on applying Locality Sensitive Hashing to tweets to find similar ones.

http://micvog.com/2013/09/08/storm-first-story-detection/

And because one picture is worth a thousand words, check the picture below:

http://micvog.files.wordpress.com/2013/08/lsh1.png

Hope it helps. @mvogiatzis


Here's a presentation from Stanford that explains it. It made a big difference for me. Part two is more about LSH, but part one covers it as well.

A picture of the overview (there is much more in the slides):

(overview figure from the Stanford slides)

Near Neighbor Search in High Dimensional Data, Part 1: http://www.stanford.edu/class/cs345a/slides/04-highdim.pdf

Near Neighbor Search in High Dimensional Data, Part 2: http://www.stanford.edu/class/cs345a/slides/05-LSH.pdf