elasticsearch-percolate

Getting ElasticSearch Percolator Queries

杀马特。学长 韩版系。学妹 submitted on 2019-12-23 09:56:39
Question: I'm trying to query Elasticsearch for all the percolator queries that are currently stored on the system. My first thought was a match_all query with a type filter, but in my testing the percolator queries are not returned by match_all. I haven't for the life of me been able to find the proper way to query them, or any documentation on it, so any help is greatly appreciated. Any other information on how stored percolator queries are treated differently from other document types is also appreciated.
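A likely explanation, assuming a pre-5.x cluster: registered percolator queries live under the reserved `.percolator` type, which a plain match_all on the index does not return, so the search has to target that type explicitly. A minimal sketch of the request (the index name `my-index` is a placeholder, not from the thread):

```python
# Sketch, assuming Elasticsearch 1.x/2.x, where percolator queries are
# stored under the reserved ".percolator" type: a plain match_all on the
# index skips them, so the search path must name the type explicitly.
# The index name "my-index" is a placeholder.

def percolator_search_request(index):
    """Build the path and body for listing all stored percolator queries."""
    path = "/{}/.percolator/_search".format(index)
    body = {"query": {"match_all": {}}, "size": 100}
    return path, body

path, body = percolator_search_request("my-index")
```

In Elasticsearch 5.x and later, by contrast, percolator queries are ordinary documents with a `percolator`-mapped field, and a normal match_all search on their index returns them.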

Elasticsearch percolate performance

谁都会走 submitted on 2019-12-11 04:14:52
Question: I use the percolator (Elasticsearch 2.3.3) and I have ~100 term queries. When I percolate 1 document in 1 thread, it takes ~500 ms: {u'total': 0, u'took': 452, u'_shards': {u'successful': 12, u'failed': 0, u'total': 12}} TIME 0.467885982513 There are 4 CPUs, so I want to percolate in 4 processes. But when I launch them, each one takes ~2000 ms: {u'total': 0, u'took': 1837, u'_shards': {u'successful': 12, u'failed': 0, u'total': 12}} TIME 1.890885982513 Why? I use the Python module elasticsearch 2.3.0. I …
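Note what the response already shows: each percolate request fans out across all 12 shards in parallel, so a single request can saturate the node's 4 CPUs by itself. Four concurrent client processes then contend for the same CPUs rather than adding throughput, which is consistent with per-request latency roughly quadrupling. A sketch of the Elasticsearch 2.x percolate request these calls send (index/type names are placeholders):

```python
# Sketch of the Elasticsearch 2.x percolate API request: the document to
# test against registered queries is wrapped under a "doc" key, and the
# request is executed on every primary shard of the index in parallel.
# Index and type names are placeholders.

def percolate_request(index, doc_type, document):
    path = "/{}/{}/_percolate".format(index, doc_type)
    body = {"doc": document}
    return path, body

path, body = percolate_request("my-index", "message", {"text": "hello"})
```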

How to improve percolator performance in ElasticSearch?

与世无争的帅哥 submitted on 2019-12-06 09:24:14
Question: Summary: We need to increase percolator performance (throughput). The most likely approach is scaling out to multiple servers. Questions: How do we scale out correctly?
1) Would increasing the number of shards in the underlying index allow running more percolate requests in parallel?
2) How much memory does an Elasticsearch server need if it only does percolation? Is it better to have 2 servers with 4 GB RAM each, or one server with 16 GB RAM?
3) Would having an SSD meaningfully help percolator performance, or is it …
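On question 1), the usual lever is the primary shard count of the percolator index itself: each percolate request is executed per shard, so spreading shards across nodes spreads the query-evaluation work. A minimal sketch of the index-creation settings (the shard count here is illustrative, not a recommendation from the thread):

```python
# Sketch of index settings for scaling out a percolator index: more primary
# shards let the per-shard percolate work spread across nodes. The numbers
# are illustrative placeholders.

def percolator_index_settings(shards, replicas=1):
    return {
        "settings": {
            "number_of_shards": shards,
            "number_of_replicas": replicas,
        }
    }

settings = percolator_index_settings(shards=8)
```

Note that shard count is fixed at index creation in these versions, so scaling out typically means reindexing the registered queries into a new index with more shards.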

How to Optimize elasticsearch percolator index Memory Performance

倖福魔咒の submitted on 2019-12-04 12:13:46
Question: Is there a way to improve memory performance when using an Elasticsearch percolator index? I have created a separate index for my percolator. I have roughly 1,000,000 user-created saved searches (for email alerts). After creating this percolator index, my heap usage spiked to 100% and the server became unresponsive to any queries. I have somewhat limited resources and am not able to simply throw more RAM at the problem. The only solution was to delete the index containing my saved searches.
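One mitigation in the 1.x/2.x percolator, where all registered queries are parsed and held on the heap: store metadata fields alongside each saved search and pass a filter in the percolate request, so only the relevant subset of the ~1M queries is evaluated per document. A sketch under that assumption (the `topic` field and document shapes are placeholders):

```python
# Sketch (Elasticsearch 1.x/2.x percolate API): registered percolator
# queries can carry extra metadata fields, and the percolate request can
# filter on them so only a subset of the saved searches is evaluated for
# each incoming document. Field names and values are placeholders.

def register_query_body(query, alert_topic):
    # Stored under the ".percolator" type; "topic" is a metadata field.
    return {"query": query, "topic": alert_topic}

def percolate_body(document, alert_topic):
    return {
        "doc": document,
        "filter": {"term": {"topic": alert_topic}},
    }

body = percolate_body({"subject": "sale"}, "retail")
```

Filtering reduces per-request CPU cost; the heap pressure from holding a million parsed queries would still need to be addressed by splitting the queries across indices/nodes.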

Using NEST to percolate

懵懂的女人 submitted on 2019-12-04 07:33:41
I'm indexing my query as follows:

    client.Index(new PercolatedQuery
    {
        Id = "std_query",
        Query = new QueryContainer(new MatchQuery
        {
            Field = Infer.Field<LogEntryModel>(entry => entry.Message),
            Query = "just a text"
        })
    }, d => d.Index(EsIndex));
    client.Refresh(EsIndex);

Now, how do I use the percolator capabilities of ES to match an incoming document with this query? To say the NEST documentation is lacking in this area would be a huge understatement. I tried using the client.Percolate call, but it's deprecated now and they advise using the search API, but don't tell you how to use it with the percolator...
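For orientation, the search-API replacement the deprecation notice points at is the Elasticsearch 5.x `percolate` query: percolation becomes an ordinary search whose query names the percolator-mapped field and embeds the incoming document. A sketch of the request body NEST would need to produce (shown as a plain dict; the field name `query` and the document mirror the snippet above):

```python
# Sketch of the Elasticsearch 5.x search-API replacement for the deprecated
# percolate call: a "percolate" query inside a normal search, where "field"
# names the percolator-mapped field and "document" is the incoming document
# to match against the stored queries.

def percolate_search_body(field, document):
    return {"query": {"percolate": {"field": field, "document": document}}}

body = percolate_search_body("query", {"message": "just a text"})
```

The search response then lists the stored queries (here, `std_query`) as ordinary hits.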

How to set up percolator to return when an aggregation value hits a certain threshold?

十年热恋 submitted on 2019-11-28 01:22:18
Take the following aggregation query as an example:

    {
      "query": { "match_all": {} },
      "aggs": {
        "groupBy": {
          "terms": { "field": "CustomerName" },
          "aggs": {
            "points_sum": {
              "stats": { "field": "TransactionAmount" }
            }
          }
        }
      },
      "size": 0
    }

I am interested in knowing when any CustomerName has an average TransactionAmount (stats.avg) above some threshold across all of that customer's purchases, as soon as I index a document that would push the average above that threshold. It seems like the percolator is designed for matching documents to rules, more or less, but I can't find any good examples of using …
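The mismatch here is that the percolator evaluates stored queries against one document at a time and cannot run aggregations over existing data, so a running-average threshold has to be checked outside percolation. One hedged alternative: after indexing a transaction, re-run the stats aggregation filtered to that customer and compare `avg` in the client. A sketch (field names follow the query above; the response dict stands in for an actual Elasticsearch call):

```python
# Sketch of a client-side threshold check (not a percolator feature):
# after indexing a transaction, run the stats aggregation scoped to that
# customer and compare the average against the threshold. The response
# dict below mimics the shape Elasticsearch returns for a stats agg.

def stats_query_for_customer(customer):
    return {
        "size": 0,
        "query": {"term": {"CustomerName": customer}},
        "aggs": {"points_sum": {"stats": {"field": "TransactionAmount"}}},
    }

def average_exceeds(response, threshold):
    avg = response["aggregations"]["points_sum"]["avg"]
    return avg is not None and avg > threshold

resp = {"aggregations": {"points_sum": {"avg": 120.0}}}
```

For a push-style setup, the same check could be scheduled (e.g. via Watcher) rather than run on every index operation.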
