
Find best substring match


This function finds the best-matching substring of variable length.

The implementation treats the corpus as one long string, which sidesteps the concerns about spaces and unseparated words.

Code summary:

1. Scan the corpus for match values in steps of size step, to find the approximate location of the highest match value, pos.
2. Find the substring in the vicinity of pos with the highest match value, by adjusting the left/right positions of the substring.

from difflib import SequenceMatcher

def get_best_match(query, corpus, step=4, flex=3, case_sensitive=False, verbose=False):
    """Return best matching substring of corpus.

    Parameters
    ----------
    query : str
    corpus : str
    step : int
        Step size of first match-value scan through corpus. Can be thought of
        as a sort of "scan resolution". Should not exceed length of query.
    flex : int
        Max. left/right substring position adjustment value. Should not
        exceed length of query / 2.

    Outputs
    -------
    output0 : str
        Best matching substring.
    output1 : float
        Match ratio of best matching substring. 1 is perfect match.
    """

    def _match(a, b):
        """Compact alias for SequenceMatcher."""
        return SequenceMatcher(None, a, b).ratio()

    def scan_corpus(step):
        """Return list of match values from corpus-wide scan."""
        match_values = []

        m = 0
        while m + qlen - step <= len(corpus):
            match_values.append(_match(query, corpus[m : m + qlen]))
            if verbose:
                print(query, "-", corpus[m: m + qlen], _match(query, corpus[m: m + qlen]))
            m += step

        return match_values

    def index_max(v):
        """Return index of max value."""
        return max(range(len(v)), key=v.__getitem__)

    def adjust_left_right_positions():
        """Return left/right positions for best string match."""
        # bp_* is synonym for 'Best Position Left/Right' and are adjusted
        # to optimize bmv_*
        p_l, bp_l = [pos] * 2
        p_r, bp_r = [pos + qlen] * 2

        # bmv_* are declared here in case they are untouched in optimization
        bmv_l = match_values[p_l // step]
        bmv_r = match_values[p_l // step]

        for f in range(flex):
            ll = _match(query, corpus[p_l - f: p_r])
            if ll > bmv_l:
                bmv_l = ll
                bp_l = p_l - f

            lr = _match(query, corpus[p_l + f: p_r])
            if lr > bmv_l:
                bmv_l = lr
                bp_l = p_l + f

            rl = _match(query, corpus[p_l: p_r - f])
            if rl > bmv_r:
                bmv_r = rl
                bp_r = p_r - f

            rr = _match(query, corpus[p_l: p_r + f])
            if rr > bmv_r:
                bmv_r = rr
                bp_r = p_r + f

            if verbose:
                print("\n" + str(f))
                print("ll: -- value: %f -- snippet: %s" % (ll, corpus[p_l - f: p_r]))
                print("lr: -- value: %f -- snippet: %s" % (lr, corpus[p_l + f: p_r]))
                print("rl: -- value: %f -- snippet: %s" % (rl, corpus[p_l: p_r - f]))
                print("rr: -- value: %f -- snippet: %s" % (rr, corpus[p_l: p_r + f]))

        return bp_l, bp_r, _match(query, corpus[bp_l : bp_r])

    if not case_sensitive:
        query = query.lower()
        corpus = corpus.lower()

    qlen = len(query)

    if flex >= qlen / 2:
        print("Warning: flex exceeds length of query / 2. Setting to default.")
        flex = 3

    match_values = scan_corpus(step)
    pos = index_max(match_values) * step

    pos_left, pos_right, match_value = adjust_left_right_positions()

    return corpus[pos_left: pos_right].strip(), match_value

Example:

query = "ipsum dolor"corpus = "lorem i psum d0l0r sit amet"match = get_best_match(query, corpus, step=2, flex=4)print(match)('i psum d0l0r', 0.782608695652174)

Some good heuristic advice is to always keep step < len(query) * 3/4 and flex < len(query) / 3. I also added case sensitivity, in case that's important. It works quite well once you start playing with the step and flex values. Smaller step values give better results but take longer to compute. flex governs how flexible the length of the resulting substring is allowed to be.
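For instance, one way to turn those heuristics into concrete knobs (reusing the example above; the bound variables are just for illustration):

query = "ipsum dolor"                  # len(query) == 11
corpus = "lorem i psum d0l0r sit amet"

max_step = len(query) * 3 // 4         # 8: keep step below this
max_flex = len(query) // 3             # 3: keep flex at or below this

# a smaller step gives a finer scan: better results, longer runtime
match = get_best_match(query, corpus, step=2, flex=max_flex)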

Important to note: this will only find the first best match, so if there are multiple equally good matches, only the first will be returned. To allow for multiple matches, change index_max() to return a list of indices for the n highest values of the input list, and loop over adjust_left_right_positions() for the positions in that list, as sketched below.
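A minimal sketch of that modification, assuming a hypothetical helper index_n_max() in place of index_max():

import heapq

def index_n_max(v, n):
    """Return the indices of the n highest values in v, best first."""
    return heapq.nlargest(n, range(len(v)), key=v.__getitem__)

# then, inside get_best_match, loop over the candidate scan positions:
# results = []
# for i in index_n_max(match_values, n):
#     pos = i * step
#     results.append(adjust_left_right_positions())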


The main path to a solution uses finite state automata (FSA) of some kind. If you want a detailed summary of the topic, check this dissertation out (PDF link). Error-based models (including Levenshtein automata and transducers, the former of which Sergei mentioned) are valid approaches to this. However, stochastic models, including various types of machine learning approaches integrated with FSAs, are very popular at the moment.

Since we are looking at edit distances (effectively misspelled words), the Levenshtein approach is good and relatively simple. This paper (as well as the dissertation; also a PDF) gives a decent outline of the basic idea, and it explicitly mentions the application to OCR tasks. However, I will review some of the key points below.

The basic idea is that you want to build an FSA that accepts both the valid string and all strings within some error distance (k). In the general case, this k could be infinite or the size of the text, but that is mostly irrelevant for OCR (if your OCR could even potentially return bl*h, where * is the rest of the entire text, I would advise finding a better OCR system). Hence, we can exclude patterns like bl*h from the set of valid answers for the search string blah. A general, simple and intuitive k for your context is probably the length of the string (w) minus 2. This allows b--h to be a valid match for blah. It also allows bla--h, but that's okay. Also, keep in mind that the errors can be any character you specify, including spaces (hence 'multiword' input is solvable).
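This isn't a Levenshtein automaton, just the plain dynamic-programming edit distance, but it makes the k = w - 2 bound concrete (within_k is my own name for the check):

def within_k(candidate, query, k=None):
    """Is candidate within k Levenshtein edits of query?
    k defaults to the len(query) - 2 bound suggested above."""
    if k is None:
        k = len(query) - 2
    prev = list(range(len(candidate) + 1))
    for i, qc in enumerate(query, 1):
        curr = [i]
        for j, cc in enumerate(candidate, 1):
            curr.append(min(prev[j] + 1,                # delete qc
                            curr[j - 1] + 1,            # insert cc
                            prev[j - 1] + (qc != cc)))  # substitute
        prev = curr
    return prev[-1] <= k

print(within_k("b--h", "blah"))    # True: 2 edits, k == 2
print(within_k("bla--h", "blah"))  # True: 2 edits
print(within_k("xxxx", "blah"))    # False: 4 edits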

The next basic task is to set up a simple weighted transducer. Any of the OpenFST Python ports can do this (here's one). The logic is simple: insertions and deletions increment the weight, while equality advances the index in the input string. You could also hand-code it, as the author of the link in Sergei's comment did.

Once you have the weights and the indexes associated with them, you just sort and return. The computational complexity should be O(n(w+k)), since in the worst case we look ahead w+k characters for each of the n characters in the text.
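Here is a hand-coded dynamic-programming stand-in for that transducer (fuzzy_search and its return format are my own choices; insertions and deletions cost 1, equality is free, and the match may start anywhere in the text):

def fuzzy_search(query, text):
    """Rank end positions in text by weighted edit distance to query."""
    n, w = len(text), len(query)
    # a row of zeros lets a match begin at any offset in text
    prev = [0] * (n + 1)
    for j in range(1, w + 1):
        curr = [j] + [0] * n
        for i in range(1, n + 1):
            cost = 0 if text[i - 1] == query[j - 1] else 1
            curr[i] = min(prev[i - 1] + cost,  # match / substitution
                          prev[i] + 1,         # deletion
                          curr[i - 1] + 1)     # insertion
        prev = curr
    # weights and their associated end offsets, sorted best-first
    return sorted((weight, i) for i, weight in enumerate(prev))

print(fuzzy_search("blah", "bl ah blob b--h")[0])  # (1, 5): 'bl ah', one insertion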

From here, you can do all sorts of things. You could convert the transducer to a DFA. You could parallelize the system by breaking the text into w+k-grams, which are sent to different processes. You could develop a language model or confusion matrix that defines what common mistakes exist for each letter in the input set (and thereby restrict the space of valid transitions and the complexity of the associated FSA). The literature is vast and still growing so there are probably as many modifications as there are solutions (if not more).
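For the parallelization idea specifically, a sketch of the w+k-gram split (the names here are mine; each gram would be scored by whatever matcher you settled on, e.g. in a multiprocessing.Pool):

def wk_grams(text, w, k):
    """Every window of length w + k: the longest span a match can occupy."""
    size = w + k
    return [text[i:i + size] for i in range(max(1, len(text) - size + 1))]

# e.g. score each gram in a separate process:
# from multiprocessing import Pool
# with Pool() as pool:
#     scores = pool.starmap(fuzzy_search, [(query, g) for g in wk_grams(text, w, k)])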

Hopefully that answers some of your questions without giving any code.


I would try to build a regular expression template from the query string. The template could then be used to search the corpus for substrings that are likely to match the query. Then use difflib or fuzzywuzzy to check if the substring does match the query.

For example, a possible template would be to match at least one of the first two letters of the query, at least one of the last two letters of the query, and have approximately the right number of letters in between:

import difflib
import re

query = "ipsum dolor"
corpus = ["lorem 1psum d0l0r sit amet",
          "lorem 1psum dlr sit amet",
          "lorem ixxxxxxxr sit amet"]

first_letter, second_letter = query[:2]
minimum_gap, maximum_gap = len(query) - 6, len(query) - 3
penultimate_letter, ultimate_letter = query[-2:]

fmt = '(?:{}.|.{}).{{{},{}}}(?:{}.|.{})'.format
pattern = fmt(first_letter, second_letter,
              minimum_gap, maximum_gap,
              penultimate_letter, ultimate_letter)
#print(pattern) # for debugging the pattern

m = difflib.SequenceMatcher(None, "", query, False)

for c in corpus:
    for match in re.finditer(pattern, c, re.IGNORECASE):
        substring = match.group()
        m.set_seq1(substring)
        ops = m.get_opcodes()
        # each non-'equal' opcode contributes the length of the longer
        # of the two spans it covers
        num_edits = sum(max(i2 - i1, j2 - j1)
                        for op, i1, i2, j1, j2 in ops if op != 'equal')
        print(num_edits, substring)

Output:

3 1psum d0l0r
3 1psum dlr
9 ixxxxxxxr

Another idea is to use the characteristics of the OCR when building the regex. For example, if the OCR always gets certain letters correct, then when any of those letters appear in the query, use a few of them in the regex. Or if the OCR mixes up '1', '!', 'l', and 'i', but never substitutes anything else, then when one of those letters is in the query, use [1!il] in the regex.
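For instance, a sketch along those lines (the confusion sets here are invented; in practice they would come from your OCR's observed error statistics):

import re

# hypothetical confusion sets for a given OCR engine
CONFUSIONS = {'i': '[1!il]', 'l': '[1!il]', 'o': '[o0]', '0': '[o0]'}

def ocr_pattern(query):
    """Build a regex where each confusable letter matches its whole set."""
    return ''.join(CONFUSIONS.get(ch.lower(), re.escape(ch)) for ch in query)

pattern = ocr_pattern("ipsum dolor")
print(pattern)  # [1!il]psum d[o0][1!il][o0]r
print(re.search(pattern, "lorem 1psum d0l0r sit amet", re.IGNORECASE))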