What is the ideal bulk size formula in ElasticSearch?

冷暖自知 submitted on 2019-12-10 12:47:25

Question


I believe there should be a formula to calculate bulk indexing size in ElasticSearch. The following are probably the variables of such a formula:

  • Number of nodes
  • Number of shards/index
  • Document size
  • RAM
  • Disk write speed
  • LAN speed

I wonder if anyone knows or uses a mathematical formula. If not, how do people decide their bulk size? By trial and error?


Answer 1:


There is no golden rule for this. Extracted from the doc:

There is no “correct” number of actions to perform in a single bulk call. You should experiment with different settings to find the optimum size for your particular workload.




Answer 2:


I derived this information from the Java API's BulkProcessor class. It defaults to 1000 actions or 5MB, and it also lets you set a flush interval, which is not set by default. I'm just using the default settings.

I'd suggest using BulkProcessor if you are using the Java API.
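As a minimal sketch of what that looks like, assuming the transport-client-era Java API (roughly ES 5.x/6.x); the index name, type and document body below are placeholders:

    import org.elasticsearch.action.bulk.BulkProcessor;
    import org.elasticsearch.action.bulk.BulkRequest;
    import org.elasticsearch.action.bulk.BulkResponse;
    import org.elasticsearch.action.index.IndexRequest;
    import org.elasticsearch.client.Client;
    import org.elasticsearch.common.unit.ByteSizeUnit;
    import org.elasticsearch.common.unit.ByteSizeValue;
    import org.elasticsearch.common.unit.TimeValue;
    import org.elasticsearch.common.xcontent.XContentType;

    public class BulkIndexer {
        // Build a BulkProcessor with the defaults mentioned above made explicit.
        static BulkProcessor create(Client client) {
            return BulkProcessor.builder(client, new BulkProcessor.Listener() {
                @Override public void beforeBulk(long id, BulkRequest request) { }
                @Override public void afterBulk(long id, BulkRequest request, BulkResponse response) {
                    if (response.hasFailures()) {
                        System.err.println(response.buildFailureMessage()); // some actions failed
                    }
                }
                @Override public void afterBulk(long id, BulkRequest request, Throwable failure) {
                    failure.printStackTrace(); // the whole bulk request failed
                }
            })
            .setBulkActions(1000)                               // flush every 1000 actions (the default)
            .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB)) // or every 5 MB of payload (the default)
            .setFlushInterval(TimeValue.timeValueSeconds(5))    // time-based flush; not set by default
            .build();
        }

        static void indexOne(BulkProcessor processor) {
            // "myindex", "doc" and the JSON body are placeholders.
            processor.add(new IndexRequest("myindex", "doc")
                    .source("{\"field\":\"value\"}", XContentType.JSON));
        }
    }

When indexing finishes, processor.awaitClose(30, TimeUnit.SECONDS) flushes anything still buffered before shutting down.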




Answer 3:


Read the ES bulk API docs carefully: https://www.elastic.co/guide/en/elasticsearch/guide/current/indexing-performance.html#_using_and_sizing_bulk_requests

  • Try 1 KiB, then 20 KiB, then 10 KiB, and so on: binary-search your way to the optimum
  • Size your bulks in KiB (or an equivalent byte unit), not in document count!
  • Send data in one bulk request (no streaming), and move info that repeats on every action (e.g. the index name) into the API URL if you can
  • Remove superfluous whitespace from your data if possible
  • Disable search index updates (refreshes) while loading and re-enable them afterwards; see the sketch after this list
  • Round-robin your bulk requests across all your data nodes
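
A minimal sketch of the "disable search index updates" step via the update-settings API, again assuming the transport-client-era Java API and a placeholder index name:

    import org.elasticsearch.client.Client;
    import org.elasticsearch.common.settings.Settings;

    class RefreshToggle {
        // Set index.refresh_interval: "-1" disables refreshes entirely,
        // "1s" restores the default. "myindex" is a placeholder.
        static void setRefreshInterval(Client client, String interval) {
            client.admin().indices().prepareUpdateSettings("myindex")
                  .setSettings(Settings.builder()
                          .put("index.refresh_interval", interval)
                          .build())
                  .get();
        }
    }

Call it with "-1" before the bulk load and "1s" (or your usual value) afterwards.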



Answer 4:


I was searching about this and found your question :) I found the following in the Elastic documentation, so I will investigate the size of my documents.

It is often useful to keep an eye on the physical size of your bulk requests. One thousand 1KB documents is very different from one thousand 1MB documents. A good bulk size to start playing with is around 5-15MB in size
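To make that concrete: one thousand 1KB documents make a roughly 1MB request, while one thousand 1MB documents make a roughly 1GB request. A purely illustrative helper (not part of any Elasticsearch API) for turning a target physical size into an action count:

    class BulkSizing {
        // Pick an action count so each bulk request lands near a target
        // physical size; targetBulkBytes and avgDocBytes are up to you.
        static int actionsPerBulk(long targetBulkBytes, long avgDocBytes) {
            return (int) Math.max(1, targetBulkBytes / avgDocBytes);
        }
    }

    // actionsPerBulk(10L * 1024 * 1024, 1024)        -> 10240 actions for ~1 KiB docs
    // actionsPerBulk(10L * 1024 * 1024, 1024 * 1024) -> 10 actions for ~1 MiB docs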




Answer 5:


In my case, I could not get more than 100,000 records to insert at a time. I started with 13 million, went down to 500,000, and after no success started from the other end: 1,000, then 10,000, then 100,000, which was my max.




Answer 6:


I haven't found a better way than trial and error (i.e. the traditional engineering process), as there are many factors beyond hardware that influence indexing speed: the structure/complexity of your index (complex mappings, filters or analyzers), data types, whether your workload is I/O- or CPU-bound, and so on.

In any case, to demonstrate how variable it can be, I can share my experience, as it seems different from most posted here:

Elastic 5.6 with a 10GB heap, running on a single vServer with 16GB RAM, 4 vCPUs and an SSD that averages 150 MB/s while searching.

I can successfully index documents of wildly varying sizes via the HTTP bulk API (curl) with a batch size of 10k documents (20k lines, file sizes between 25MB and 79MB), each batch taking ~90 seconds. index.refresh_interval is set to -1 during indexing, but that's about the only "tuning" I did; all other configurations are the defaults. I guess this is mostly because the index itself is not too complex.

The vServer sits at about 50% CPU, with the SSD averaging 40 MB/s and 4GB of RAM free, so I could probably make it faster by sending two files in parallel (I tried simply increasing the batch size by 50%, but started getting errors). Beyond that point it probably makes more sense to consider a different API or simply to spread the load over a cluster.
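
For reference, the HTTP bulk API body mentioned here is newline-delimited JSON in which every index action takes two lines, an action/metadata line followed by the document source, which is why 10k documents come out to 20k lines. A sketch of such a payload (index, type and field names are placeholders):

    { "index" : { "_index" : "myindex", "_type" : "doc" } }
    { "title" : "first document" }
    { "index" : { "_index" : "myindex", "_type" : "doc" } }
    { "title" : "second document" }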



Source: https://stackoverflow.com/questions/18488747/what-is-the-ideal-bulk-size-formula-in-elasticsearch
