Synonym filter with "&" not working in elasticsearch suggest with standard tokenizer


Your issue lies in the choice of tokenizer for suggest_analyzer. The standard tokenizer does not generate a token for &, so the token stream passed to your filters never contains an & token for them to replace. You can see how this works using the _analyze endpoint.
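
For example, you can inspect what the standard analyzer produces with something like this (a sketch; on Elasticsearch 5.x and later _analyze accepts a JSON body, while older versions take analyzer and text as query-string parameters):

# inspect the token stream produced by the standard analyzer
curl -XPOST 'localhost:9200/_analyze' -H 'Content-Type: application/json' -d '
{
  "analyzer": "standard",
  "text": "s & p"
}'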

In this case, the tokens generated by the standard tokenizer look like this for the text s & p:

"tokens": [      {         "token": "s",         "start_offset": 5,         "end_offset": 6,         "type": "<ALPHANUM>",         "position": 1      },      {         "token": "p",         "start_offset": 9,         "end_offset": 10,         "type": "<ALPHANUM>",         "position": 2      }   ]

The standard tokenizer eats the &. The simplest fix here is to change your analyzer to use the whitespace tokenizer, which does not strip out special characters or do much work at all; its job is simply to split on whitespace.
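
You can confirm that the whitespace tokenizer keeps the & as a token of its own (again a sketch, using the 5.x+ JSON form of _analyze):

# the whitespace tokenizer only splits on whitespace, so & survives
curl -XPOST 'localhost:9200/_analyze' -H 'Content-Type: application/json' -d '
{
  "tokenizer": "whitespace",
  "text": "s & p"
}'

This returns three tokens, s, & and p, so the synonym filter downstream now sees the & and can replace it with and.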

I modified your mapping to be this:

  "settings": {    "analysis": {      "analyzer": {        "suggest_analyzer": {          "type":      "custom",          "tokenizer": "whitespace",          "filter":    [ "lowercase", "my_synonym_filter" ]        }      },      "filter": {        "my_synonym_filter": {          "type": "synonym",           "synonyms": [              "&, and",              "foo, bar" ]        }      }    }  }

That will get you results like this:

{   "_shards": {      "total": 5,      "successful": 5,      "failed": 0   },   "name_suggest": [      {         "text": "s and",         "offset": 0,         "length": 5,         "options": [            {               "text": "s & p",               "score": 1            }         ]      }   ]}


Another option is to replace the ampersands before they reach the tokenizer, using a char filter, like so:

...
"char_filter" : {
    "replace_ampersands" : {
        "type" : "mapping",
        "mappings" : ["&=>and"]
    }
},
"analyzer": {
    "autocomplete": {
        "type": "custom",
        "tokenizer": "standard",
        "char_filter" : ["replace_ampersands"],
        "filter": [
            "lowercase",
            "addy_synonym_filter",
            "autocomplete_filter"
        ]
    }
}
...
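
You can check that the char filter rewrites the & before tokenization by running the index's analyzer through _analyze (a sketch; my_index is again a placeholder, and addy_synonym_filter and autocomplete_filter are the filters from your original mapping):

# "my_index" is a placeholder index that defines the autocomplete analyzer above
curl -XPOST 'localhost:9200/my_index/_analyze' -H 'Content-Type: application/json' -d '
{
  "analyzer": "autocomplete",
  "text": "s & p"
}'

The mapping char filter turns s & p into s and p before the standard tokenizer runs, so and appears in the token stream even though the standard tokenizer would have dropped the &.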