I'm trying to implement autocomplete using Elasticsearch, thinking that I understand how to do it...
Specifically, I'm trying to build multi-word (phrase) suggestions with ES.
A tokenizer will split the whole input into tokens, and a token filter will apply some transformation to each token.
For instance, let's say the input is The quick brown fox. If you use an edgeNGram tokenizer, you'll get the following tokens:
T
Th
The
The  (last character is a space)
The q
The qu
The qui
The quic
The quick
The quick  (last character is a space)
The quick b
The quick br
The quick bro
The quick brow
The quick brown
The quick brown  (last character is a space)
The quick brown f
The quick brown fo
The quick brown fox

However, if you use a standard tokenizer, which will split the input into words/tokens, and then an edgeNGram token filter, you'll get the following tokens:

T, Th, The
q, qu, qui, quic, quick
b, br, bro, brow, brown
f, fo, fox

As you can see, choosing between an edgeNGram tokenizer and an edgeNGram token filter depends on how you want to slice and dice your text and how you want to search it.
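The two token streams above can be modeled outside of Elasticsearch. This is not ES code, just a plain-Python sketch of what each configuration produces: the edgeNGram tokenizer takes prefixes of the whole input (spaces and all), while standard tokenizer + edgeNGram token filter first splits on whitespace and then takes prefixes of each word. The function names and the `min_gram`/`max_gram` defaults are illustrative, not ES APIs.

```python
def edge_ngrams(text, min_gram=1, max_gram=20):
    """Return the leading substrings (edge n-grams) of `text`."""
    return [text[:i] for i in range(min_gram, min(len(text), max_gram) + 1)]

# edgeNGram *tokenizer*: n-grams are taken from the whole input string,
# so every token keeps the phrase prefix, spaces included.
def tokenizer_style(text):
    return edge_ngrams(text)

# standard tokenizer + edgeNGram *token filter*: the input is first split
# into words, then each word is expanded into its own edge n-grams.
def filter_style(text):
    grams = []
    for word in text.split():
        grams.extend(edge_ngrams(word))
    return grams

print(tokenizer_style("The quick brown fox")[:4])  # ['T', 'Th', 'The', 'The ']
print(filter_style("The quick brown fox"))
```

Running both on "The quick brown fox" reproduces exactly the two token lists shown above, which makes the practical difference easy to see: the tokenizer variant only matches queries that start from the beginning of the phrase, while the filter variant matches a prefix of any word.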
I suggest having a look at the excellent elyzer tool, which provides a way to visualize the analysis process and see what is produced during each step (tokenizing and token filtering).
As of ES 2.2, the _analyze endpoint also supports an explain feature, which shows the details of each step of the analysis process.
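For reference, a minimal sketch of such a request, using the JSON-body form supported by more recent ES versions (5.x and later, where the filter type is spelled edge_ngram rather than edgeNGram; the min_gram/max_gram values here are illustrative):

```
POST _analyze
{
  "tokenizer": "standard",
  "filter": [
    { "type": "edge_ngram", "min_gram": 1, "max_gram": 5 }
  ],
  "text": "The quick brown fox",
  "explain": true
}
```

With "explain": true, the response breaks the output down per tokenizer and per token filter, so you can see exactly which step produced which tokens.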