ElasticSearch: EdgeNgrams and Numbers


Question


Any ideas on how EdgeNgram treats numbers?

I'm running haystack with an ElasticSearch backend. I created an indexed field of type EdgeNgram. This field will contain a string that may contain words as well as numbers.

When I run a search against this field using a partial word, it works as expected. But if I enter a partial number, I don't get the result I want.

Example:

I index the field value "EdgeNgram 12323". If I search by typing "edgen", the document is returned. If I search for that same document by typing "123", I get nothing.

Thoughts?


Answer 1:


If you're using the edgeNGram tokenizer, it will treat "EdgeNGram 12323" as a single token and then apply the edge-ngramming process to it. For example, with min_gram=1 and max_gram=4, you'll get the following tokens indexed: ["E", "Ed", "Edg", "Edge"]. So I guess this is not what you're really looking for; consider using the edgeNGram token filter instead.

If you're using the edgeNGram token filter, make sure you're using a tokenizer that actually splits the text "EdgeNGram 12323" into two tokens: ["EdgeNGram", "12323"] (the standard or whitespace tokenizer will do the trick). Then apply the edgeNGram filter to its output.

In general, edgeNGram will take "12323" and produce tokens such as "1", "12", "123", etc...
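
As a rough sketch of that second approach (the index name "demo", the filter name "my_edge_ngram", and the analyzer name "edge_ngram_analyzer" are made-up names for illustration, and the elasticsearch-py calls assume a pre-8.x client that still accepts a body argument):

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Hypothetical index: the standard tokenizer splits "EdgeNGram 12323" into two
# tokens, then the edge_ngram token filter expands each token into its prefixes.
es.indices.create(
    index="demo",
    body={
        "settings": {
            "analysis": {
                "filter": {
                    "my_edge_ngram": {
                        "type": "edge_ngram",  # spelled "edgeNGram" in older Elasticsearch releases
                        "min_gram": 1,
                        "max_gram": 10,
                    }
                },
                "analyzer": {
                    "edge_ngram_analyzer": {
                        "tokenizer": "standard",
                        "filter": ["lowercase", "my_edge_ngram"],
                    }
                },
            }
        }
    },
)

# Inspect what actually gets indexed: "1", "12", "123", ... should now appear as tokens.
print(es.indices.analyze(
    index="demo",
    body={"analyzer": "edge_ngram_analyzer", "text": "EdgeNGram 12323"},
))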




Answer 2:


I found my way here trying to solve this same problem in Haystack + Elasticsearch. Following the hints from uboness and ComoWhat, I wrote an alternate Haystack engine that (I believe) makes EdgeNGram fields treat numeric strings like words. Others may benefit, so I thought I'd share it.

from haystack.backends.elasticsearch_backend import ElasticsearchSearchEngine, ElasticsearchSearchBackend

class CustomElasticsearchBackend(ElasticsearchSearchBackend):
    """
    The default ElasticsearchSearchBackend settings don't tokenize strings of digits the same way as words, so emplids
    get lost: the lowercase tokenizer is the culprit. Switching to the standard tokenizer and doing the case-
    insensitivity in the filter seems to do the job.
    """
    def __init__(self, connection_alias, **connection_options):
        # see http://stackoverflow.com/questions/13636419/elasticsearch-edgengrams-and-numbers
        self.DEFAULT_SETTINGS['settings']['analysis']['analyzer']['edgengram_analyzer']['tokenizer'] = 'standard'
        self.DEFAULT_SETTINGS['settings']['analysis']['analyzer']['edgengram_analyzer']['filter'].append('lowercase')
        super(CustomElasticsearchBackend, self).__init__(connection_alias, **connection_options)

class CustomElasticsearchSearchEngine(ElasticsearchSearchEngine):
    backend = CustomElasticsearchBackend
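
For reference, a minimal sketch of pointing Haystack at this engine in Django settings (the module path myapp.search_backends and the index name are assumptions; adjust them to wherever the classes above live):

HAYSTACK_CONNECTIONS = {
    'default': {
        # Use the custom engine defined above instead of the stock Elasticsearch engine.
        'ENGINE': 'myapp.search_backends.CustomElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'haystack',
    },
}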


Source: https://stackoverflow.com/questions/13636419/elasticsearch-edgengrams-and-numbers
