Question
Let's say I have a thousand keys and want to store the associated values. The intuitive approach seems to be something like
{
"key1":"someval",
"key2":"someotherval",
...
}
Is it a bad design pattern for an Elasticsearch index to have thousands of keys? Would each key introduced this way create overhead for every document in the index?
Answer 1:
If you know there is an upper limit to the number of keys you'll have, a few thousand fields is not a problem.
The problem is when you have an unbounded set of keys, e.g. when the key is derived from a value: you end up with a continuously growing mapping, and with it a continuously growing cluster state. It can also lead to quirky searches.
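For example (a hypothetical sketch of the pattern this answer warns about), consider documents where user IDs serve as field names:

{
  "user_12345": { "last_login": "2014-02-20" },
  "user_67890": { "last_login": "2014-02-21" }
}

Every new user adds a new field to the mapping, so the mapping, and the cluster state that carries it, keeps growing as you index documents.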
This is a common enough question/issue that I dedicated a section to it in my article on Troubleshooting Elasticsearch searches, for Beginners.
In short, thousands of fields is no problem - not having control of the mapping is.
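One way to keep that control (a minimal sketch, assuming a recent Elasticsearch version and a hypothetical index named myindex) is to disable dynamic mapping so that unexpected fields are rejected instead of silently added:

PUT /myindex
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "key1": { "type": "keyword" },
      "key2": { "type": "keyword" }
    }
  }
}

With "dynamic": "strict", indexing a document that contains an unmapped field fails outright; "dynamic": false would instead accept the document but leave unknown fields unmapped and unindexed.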
Answer 2:
Elasticsearch is not ideal for storing thousands of key-value pairs in a document. If you also need to update them in real time, consider Redis or Riak instead.
If you have thousands of keys in a document/record, each key essentially becomes a field, and each value becomes indexed text.
From an information-retrieval perspective on large data sets, fewer big fields are preferable to numerous small fields for faster search performance; one option is sketched below.
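If you do end up with many small fields, one option (a sketch, assuming a recent Elasticsearch version; all_values is a made-up field name) is the copy_to mapping parameter, which funnels the values of several fields into one combined field that can be searched as a unit:

PUT /myindex
{
  "mappings": {
    "properties": {
      "key1": { "type": "text", "copy_to": "all_values" },
      "key2": { "type": "text", "copy_to": "all_values" },
      "all_values": { "type": "text" }
    }
  }
}

Queries can then target the single all_values field rather than fanning out across every small field.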
Source: https://stackoverflow.com/questions/21911162/too-many-fields-bad-for-elasticsearch-index