Druid vs Elasticsearch


Question


I'm new to Druid. I've already read "Druid vs Elasticsearch", but I still don't know what Druid is good at.

Here is my situation:

  1. I have a Solr cluster with 70 nodes.

  2. I have a very big table in Solr with 1 billion rows, and each row has 100 fields.

  3. Users run range queries over different combinations of fields (at least 20 combinations in a single query) to count the distinct number of customer IDs. Solr's distinct-count algorithm is slow and memory-hungry, so when a query result exceeds 200 thousand, the Solr query node crashes.

Does Druid perform better than Solr at distinct counts?


Answer 1:


Druid is vastly different from search-specific databases like ES/Solr. It is a database designed for analytics, where you can do rollups, column filtering, probabilistic computations, etc.

Druid does count-distinct through HyperLogLog, a probabilistic data structure. If you don't need 100% accuracy, you can definitely try Druid; I have seen drastic improvements in response times in one of my projects. But if you do care about exact accuracy, Druid might not be the best solution (exact counts are possible in Druid as well, at the cost of performance and extra space) - see more here: https://groups.google.com/forum/#!topic/druid-development/AMSOVGx5PhQ
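To make this concrete, here is a rough sketch (not part of the original answer) of a Druid native timeseries query that computes an approximate distinct count with the hyperUnique aggregator. The datasource name, column names, time interval, and broker address are all assumptions for illustration; it presumes the customer id column was ingested as a hyperUnique metric.

    import json
    import requests  # any HTTP client works; requests is used here for brevity

    # Sketch of a Druid native "timeseries" query using the hyperUnique
    # aggregator for an approximate distinct count. Assumes the column
    # "customer_id_hll" was ingested as a hyperUnique metric and that a
    # Druid broker is listening on localhost:8082.
    query = {
        "queryType": "timeseries",
        "dataSource": "customer_events",          # hypothetical datasource
        "granularity": "all",
        "intervals": ["2019-01-01/2020-01-01"],   # hypothetical time range
        "filter": {                               # example range filter on one field
            "type": "bound",
            "dimension": "price",
            "lower": "100",
            "upper": "500",
            "ordering": "numeric"
        },
        "aggregations": [
            {"type": "hyperUnique", "name": "distinct_customers",
             "fieldName": "customer_id_hll"}
        ]
    }

    resp = requests.post("http://localhost:8082/druid/v2/",
                         headers={"Content-Type": "application/json"},
                         data=json.dumps(query))
    print(resp.json())  # approximate number of distinct customer ids

The returned count is approximate (typically within a few percent), which is exactly the accuracy trade-off described above.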




Answer 2:


ES typically needs to keep the raw data because it's designed for search. That means the index is huge, and nested aggregations are expensive. (I know I'm skipping a lot of details here.)

Druid is designed for metric calculation over time-series data. It makes a clear distinction between dimensions and metrics: metric fields are pre-aggregated (rolled up) at ingestion time, grouped by the dimension fields. Depending on the cardinality of the dimensions, this step can reduce the data volume enormously. In other words, Druid works best when the dimensions are categorical values.
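As an illustration of that pre-aggregation step, here is a minimal, hypothetical fragment of a Druid ingestion spec (field names are made up, and only the rollup-related parts are shown), written as a Python dict for readability:

    # Hypothetical fragment of a Druid ingestion spec. Rows that share the
    # same dimension values within the query granularity are collapsed into
    # a single pre-aggregated row.
    rollup_fragment = {
        "dimensionsSpec": {
            "dimensions": ["country", "device"]   # categorical dimensions
        },
        "metricsSpec": [
            {"type": "count", "name": "rows"},    # how many raw rows were rolled up
            {"type": "doubleSum", "name": "revenue_sum", "fieldName": "revenue"},
            {"type": "hyperUnique", "name": "customer_id_hll",
             "fieldName": "customer_id"}          # HLL sketch built at ingestion time
        ],
        "granularitySpec": {
            "rollup": True,
            "queryGranularity": "hour"
        }
    }

The hyperUnique entry in metricsSpec is what makes the fast approximate distinct count in the previous answer's query possible.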

You mentioned range queries. Range filters on metrics work great, but if you mean filtering by numerical dimensions, that is something Druid is still working on.

As for distinct counts, both ES and Druid support HyperLogLog. In Druid, you have to specify the fields at ingestion time in order to apply HyperLogLog at query time. It's pretty fast and efficient.
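For comparison, the ES counterpart is the cardinality aggregation, which is also HyperLogLog-based. A rough sketch, with the index name and field names assumed for illustration:

    import json
    import requests

    # Sketch of an Elasticsearch approximate distinct count using the
    # "cardinality" aggregation. Index "customers" and the field names
    # are assumptions.
    es_query = {
        "size": 0,  # only the aggregation result is needed, not the hits
        "query": {"range": {"price": {"gte": 100, "lte": 500}}},
        "aggs": {
            "distinct_customers": {
                "cardinality": {
                    "field": "customer_id",
                    "precision_threshold": 40000  # maximum value; trades memory for accuracy
                }
            }
        }
    }

    resp = requests.post("http://localhost:9200/customers/_search",
                         headers={"Content-Type": "application/json"},
                         data=json.dumps(es_query))
    print(resp.json()["aggregations"]["distinct_customers"]["value"])

The difference is that ES computes the sketch over the raw documents at query time, whereas Druid can build it once at ingestion.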




Answer 3:


Recent versions of Elasticsearch (6.x, AFAIK) support your use case, and you can get the result from all three (Druid, ES, Solr). But to answer your question about performance, I feel Druid will be the most performant with the smallest resource requirements for your use case.

Though ES supports analytics and aggregations, its primary design is built around free-text search. Since ES does more than your requirement calls for, it will consume more resources and may not be the right fit unless you want to do more than just the distinct count.

Quoting from Druid's website, https://druid.apache.org/docs/latest/comparisons/druid-vs-elasticsearch.html:

Druid focuses on OLAP workflows. Druid is optimized for high performance (fast aggregation and ingestion) at low cost and supports a wide range of analytic operations.



Source: https://stackoverflow.com/questions/39119568/druid-vs-elasticsearch
