
Tokenize text for search in Python


I think you want to look into a full-text search solution that provides the features you describe instead of implementing your own in Python. The two big open-source players in this space are Elasticsearch and Solr.

With these products you can configure fields that define custom tokenization, removal of punctuation, synonyms to aid search, tokenization on more than just whitespace, and so on. You can also easily add plugins to alter this analysis chain.
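In Elasticsearch, for example, this kind of analysis chain is defined in the index settings. Below is a minimal sketch, not a drop-in configuration: the index name, analyzer name, field name, and synonym list are all placeholders, and it assumes Elasticsearch is running locally on port 9200. It creates an index with a custom analyzer (whitespace tokenization, lowercasing, ASCII folding, synonyms) using Python's requests library:

import requests

# Index settings defining a custom analyzer: whitespace tokenizer,
# lowercase, ASCII folding, and a small inline synonym list.
# Names and synonyms below are illustrative placeholders.
settings = {
    "settings": {
        "analysis": {
            "filter": {
                "my_synonyms": {
                    "type": "synonym",
                    "synonyms": ["tv, television", "usa, united states"],
                }
            },
            "analyzer": {
                "text_en": {
                    "type": "custom",
                    "tokenizer": "whitespace",
                    "filter": ["lowercase", "asciifolding", "my_synonyms"],
                }
            },
        }
    },
    "mappings": {
        "properties": {
            "text_body": {"type": "text", "analyzer": "text_en"}
        }
    },
}

# Create the index (assumes a local Elasticsearch on port 9200).
resp = requests.put("http://localhost:9200/articles", json=settings)
print(resp.json())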

Here's an example of Solr's schema that has some useful stuff:

Define Field Types

<fieldType class="solr.TextField" name="text_en" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="index_synonyms.txt" ignoreCase="true" expand="false"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
  </analyzer>
</fieldType>

Define a Field

<field indexed="true" name="text_body" stored="false" type="text_en"/>
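Once a field type like this is in place, Solr's field analysis endpoint is handy for checking exactly how a piece of text gets tokenized. A rough sketch from Python, assuming a local Solr core named "mycore" on the default port 8983 (both are placeholders for your setup):

import requests

# Ask Solr how the text_en field type analyzes a sample string.
params = {
    "analysis.fieldtype": "text_en",
    "analysis.fieldvalue": "The Quick-Brown Fox jumped over the lazy DOG!",
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/mycore/analysis/field", params=params)
print(resp.json())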

You can then work with the search server from Python via its REST API, or query Solr/Elasticsearch directly.
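As a sketch of what the Python side of a search might look like, here is a simple full-text match query over HTTP against the hypothetical "articles" index and "text_body" field assumed above:

import requests

# Run a match query against the text_body field and print hit ids/scores.
# Index and field names are the illustrative ones from the sketch above.
query = {"query": {"match": {"text_body": "quick brown fox"}}}
resp = requests.post("http://localhost:9200/articles/_search", json=query)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_id"], hit["_score"])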