Cassandra timeout cqlsh query large(ish) amount of data

Asked 2021-01-03 21:36 by 一整个雨季 · 4 answers · 1951 views

I'm doing a student project involving building and querying a Cassandra data cluster.

When my cluster load was light (around 30 GB), my queries ran without a problem.

4 answers
  •  悲&欢浪女
    2021-01-03 21:58

    I'm going to guess that you are also using secondary indexes. You are finding out firsthand why secondary index queries and ALLOW FILTERING queries are not recommended... because those types of design patterns do not scale for large datasets. Rebuild your model with query tables that support primary key lookups, as that is how Cassandra is designed to work.
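    As a sketch of what "query tables" means here (table and column names are hypothetical, not from the question): instead of putting a secondary index on a column, you duplicate the data into a second table whose partition key is the column you want to query by. The read then hits exactly one partition.

    ```sql
    -- Hypothetical example: users need to be looked up by email,
    -- so we maintain a second table partitioned by email rather
    -- than creating a secondary index on the original table.
    CREATE TABLE users_by_email (
        email   text,
        user_id uuid,
        name    text,
        PRIMARY KEY (email, user_id)
    );

    -- Single-partition read: the partition key is fully constrained,
    -- so only the replicas owning this partition are touched.
    SELECT user_id, name
      FROM users_by_email
     WHERE email = 'alice@example.com';
    ```

    The cost is writing the data twice (once per query table), which Cassandra is built to handle cheaply; the benefit is that every read stays bounded to one partition.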

    Edit

    "The variables that are constrained are cluster keys."

    Right... which means they are not partition keys. Without constraining your partition key(s), you are essentially scanning your entire table, because clustering keys only order data within a single partition.
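    To illustrate the difference (again with a hypothetical table, not the OP's schema): constraining the clustering key alone still touches every partition in the cluster, which is why Cassandra refuses such queries unless you add ALLOW FILTERING.

    ```sql
    -- Hypothetical time-series table: partitioned by sensor,
    -- clustered (ordered) by reading time within each sensor.
    CREATE TABLE readings (
        sensor_id    uuid,
        reading_time timestamp,
        value        double,
        PRIMARY KEY (sensor_id, reading_time)
    );

    -- Efficient: partition key constrained, clustering key narrows
    -- the slice *within* that one partition.
    SELECT * FROM readings
     WHERE sensor_id = 123e4567-e89b-12d3-a456-426614174000
       AND reading_time > '2021-01-01';

    -- Full-cluster scan: only the clustering key is constrained, so
    -- every partition must be checked. Cassandra rejects this outright
    -- unless you append ALLOW FILTERING -- and with ALLOW FILTERING it
    -- runs, but is exactly the pattern that times out at scale.
    SELECT * FROM readings
     WHERE reading_time > '2021-01-01'
     ALLOW FILTERING;
    ```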

    Edit 20190731

    So while I may have the "accepted" answer, I can see that there are three additional answers here. They all focus on changing the query timeout, and two of them outscore my answer (one by quite a bit).

    As this question continues to rack up page views, I feel compelled to address the idea of increasing the timeout. Now, I'm not about to downvote anyone's answers, as that would look like "sour grapes" from a vote perspective. But I can articulate why I don't feel that solves anything.

    First, the fact that the query times out at all is a symptom; it is not the main problem. Increasing the query timeout is therefore just a band-aid that obscures the main problem.

    The main problem, of course, is that the OP is trying to force the cluster to support a query that does not match the underlying data model. As long as that problem is ignored and worked around instead of dealt with directly, it will continue to manifest itself.

    Secondly, look at what the OP is actually trying to do:

    My goal for data generation is 2TB. How do I query that large of space without running into timeouts?

    Those query timeout limits are there to protect your cluster. If you were to run a full-table scan (which to Cassandra means a full-cluster scan) through 2TB of data, the timeout threshold required would be quite large. In fact, if you did manage to find the right number to allow that, your coordinator node would tip over LONG before most of the data was assembled in the result set.
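    For reference, these are the knobs the other answers are turning (a config sketch, not a recommendation -- the point above is that raising them just moves the failure):

    ```shell
    # Client-side: cqlsh request timeout, in seconds (default is 10).
    cqlsh --request-timeout=3600

    # Server-side, in cassandra.yaml (milliseconds):
    # read_request_timeout_in_ms: 5000     # single-partition reads
    # range_request_timeout_in_ms: 10000   # range scans, the case here
    ```

    Note that the server-side limits apply per coordinator, so they must be changed on every node; raising them cluster-wide loosens the safety net for every query, not just this one.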

    In summary, increasing query timeouts:

    1. Gives the appearance of "helping" by forcing Cassandra to work against how it was designed.

    2. Can potentially crash a node, putting the stability of the underlying cluster at risk.

    Therefore, increasing the query timeouts is a terrible, TERRIBLE IDEA.
