
Efficient string matching in Apache Spark


I wouldn't use Spark in the first place, but if you are really committed to this particular stack, you can combine a bunch of ML transformers to get the best matches. You'll need a Tokenizer (or split):

import org.apache.spark.ml.feature.RegexTokenizer

val tokenizer = new RegexTokenizer()
  .setPattern("")
  .setInputCol("text")
  .setMinTokenLength(1)
  .setOutputCol("tokens")
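With an empty pattern, the tokenizer splits the text into single characters (lowercased by default), so the n-grams built in the next step are character n-grams. A quick sanity check, assuming spark.implicits._ is in scope as in the snippets further down:

tokenizer.transform(Seq("Spark").toDF("text")).select("tokens").show(false)
// +---------------+
// |tokens         |
// +---------------+
// |[s, p, a, r, k]|
// +---------------+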

NGram (for example, 3-gram):

import org.apache.spark.ml.feature.NGram

val ngram = new NGram()
  .setN(3)
  .setInputCol("tokens")
  .setOutputCol("ngrams")
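Chained after the character tokenizer, this yields character trigrams (NGram joins consecutive tokens with a space). Continuing the one-row sketch from above:

ngram.transform(tokenizer.transform(Seq("Spark").toDF("text")))
  .select("ngrams").show(false)
// +---------------------+
// |ngrams               |
// +---------------------+
// |[s p a, p a r, a r k]|
// +---------------------+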

Vectorizer (for example CountVectorizer or HashingTF):

import org.apache.spark.ml.feature.HashingTF

val vectorizer = new HashingTF()
  .setInputCol("ngrams")
  .setOutputCol("vectors")
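HashingTF hashes each n-gram into a fixed number of buckets, so distinct n-grams can collide. If collisions hurt match quality, the feature space can be enlarged with setNumFeatures (the default is 2^18); an illustrative variant:

// Variant with a larger feature space to reduce hash collisions.
// 1 << 20 is an illustrative value, not a tuned recommendation.
val vectorizer = new HashingTF()
  .setInputCol("ngrams")
  .setOutputCol("vectors")
  .setNumFeatures(1 << 20)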

and LSH:

import org.apache.spark.ml.feature.{MinHashLSH, MinHashLSHModel}

// Increase numHashTables in practice.
val lsh = new MinHashLSH()
  .setInputCol("vectors")
  .setOutputCol("lsh")
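As the comment hints, a single hash table keeps the demo fast but can miss candidates; more tables lower the false-negative rate of the approximate join at the cost of extra computation. A sketch:

// Illustrative setting, not a tuned recommendation.
val lsh = new MinHashLSH()
  .setInputCol("vectors")
  .setOutputCol("lsh")
  .setNumHashTables(5)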

Combine with a Pipeline:

import org.apache.spark.ml.Pipeline

val pipeline = new Pipeline().setStages(Array(tokenizer, ngram, vectorizer, lsh))

Fit on example data:

val query = Seq("Hello there 7l | real|y like Spark!").toDF("text")

val db = Seq(
  "Hello there 😊! I really like Spark ❤️!",
  "Can anyone suggest an efficient algorithm"
).toDF("text")

val model = pipeline.fit(db)

Transform both:

val dbHashed = model.transform(db)
val queryHashed = model.transform(query)

and join:

model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxSimilarityJoin(dbHashed, queryHashed, 0.75).show

+--------------------+--------------------+------------------+
|            datasetA|            datasetB|           distCol|
+--------------------+--------------------+------------------+
|[Hello there 😊! ...|[Hello there 7l |...|0.5106382978723405|
+--------------------+--------------------+------------------+
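Here distCol is the Jaccard distance between the two n-gram sets (lower means more similar), and 0.75 is the distance threshold for the join. If you only need the top matches for a single query rather than a full join, the same model also exposes approxNearestNeighbors; a sketch:

import org.apache.spark.ml.linalg.Vector

// Feature vector of the (single) query row...
val key = queryHashed.select("vectors").head.getAs[Vector](0)

// ...and its 2 approximate nearest neighbours in the database.
model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxNearestNeighbors(dbHashed, key, 2)
  .show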

The same approach can be used in PySpark:

from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, NGram, HashingTF, MinHashLSH

query = spark.createDataFrame(
    ["Hello there 7l | real|y like Spark!"], "string"
).toDF("text")

db = spark.createDataFrame([
    "Hello there 😊! I really like Spark ❤️!",
    "Can anyone suggest an efficient algorithm"
], "string").toDF("text")

model = Pipeline(stages=[
    RegexTokenizer(
        pattern="", inputCol="text", outputCol="tokens", minTokenLength=1
    ),
    NGram(n=3, inputCol="tokens", outputCol="ngrams"),
    HashingTF(inputCol="ngrams", outputCol="vectors"),
    MinHashLSH(inputCol="vectors", outputCol="lsh")
]).fit(db)

db_hashed = model.transform(db)
query_hashed = model.transform(query)

model.stages[-1].approxSimilarityJoin(db_hashed, query_hashed, 0.75).show()
# +--------------------+--------------------+------------------+
# |            datasetA|            datasetB|           distCol|
# +--------------------+--------------------+------------------+
# |[Hello there 😊! ...|[Hello there 7l |...|0.5106382978723405|
# +--------------------+--------------------+------------------+
