cassandra-3.0

How to downgrade Cassandra 3.0.0 -> 2.x?

丶灬走出姿态 submitted on 2019-12-07 09:57:06
Question: I recently found out that Cassandra 3.0.0 and PrestoDB don't play well together. I have a lot of data loaded into Cassandra 3.0 and I would rather not rebuild the whole thing. Is there a safe way to downgrade to 2.x temporarily until Presto is updated, so that I can then come back to 3.0? I know downgrading is not officially supported, but I'm wondering whether more experienced S.O. Cassandra users could point me in the right direction here. I assume the answer will be "don't try it", but who…

Many-to-many in Cassandra 3

ぐ巨炮叔叔 submitted on 2019-12-07 07:44:25
What's the right way to model many-to-many relationships in Cassandra (using 3.10 at the moment)? From the answers I was able to find, denormalizing into two relationship tables is the suggested approach (as here, for example: Modeling many-to-many relations in Cassandra 2 with CQL3). But that approach has problems on deletes, and those answers are so sparse that they don't mention any details about them. Suppose we have the following tables:

    CREATE TABLE foo (
        key UUID PRIMARY KEY,
        content TEXT
    )

    CREATE TABLE bar (
        key UUID PRIMARY KEY,
        content TEXT
    )

    CREATE TABLE foo_bar (
        foo UUID,
        bar UUID,
        PRIMARY…
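For what it's worth, the usual way to keep two denormalized relationship tables in step on deletes is a logged batch. A minimal sketch with the DataStax Java driver 3.x, assuming a mirrored lookup table bar_foo(bar, foo) alongside foo_bar, a PRIMARY KEY of (foo, bar) on foo_bar (the schema above is truncated there), and a keyspace named ks; all of these names are assumptions, not from the question:

    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import java.util.UUID;

    public class ManyToMany {
        // Remove one foo<->bar link from both lookup tables together.
        // A LOGGED batch guarantees that either both deletes eventually
        // apply or neither does, so the two tables cannot drift apart.
        public static void unlink(Session session, UUID foo, UUID bar) {
            BatchStatement batch = new BatchStatement(BatchStatement.Type.LOGGED);
            batch.add(new SimpleStatement(
                "DELETE FROM ks.foo_bar WHERE foo = ? AND bar = ?", foo, bar));
            batch.add(new SimpleStatement(
                "DELETE FROM ks.bar_foo WHERE bar = ? AND foo = ?", bar, foo));
            session.execute(batch);
        }
    }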

Cassandra query failure (Tombstones)

帅比萌擦擦* submitted on 2019-12-06 15:34:49
So this is driving me crazy. I tried querying one of my tables in Cassandra and the query failed. I dug into the reason behind it and found that it was because of tombstones. I changed GC_GRACE_SECONDS to zero and triggered a compaction using nodetool, and when I queried again it worked fine. However, on subsequent calls the query failed again for the same reason. I am using the cassandra-nodejs driver. This is my data model:

    CREATE TABLE my_table (
        firstname text,
        lastname text,
        email text,
        mobile text,
        date timeuuid,
        value float,
        PRIMARY KEY (firstname, lastname, email, mobile)
    )
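One common source of read-failing tombstones is binding null for columns that are simply absent: every null bound in an INSERT writes a cell tombstone. A minimal sketch of sidestepping that by leaving values unset, shown with the Java driver 3.x for consistency with the other snippets here (the question itself uses the Node.js driver); the keyspace ks is an assumption:

    import com.datastax.driver.core.BoundStatement;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;

    public class AvoidNullTombstones {
        public static void insert(Session session, String first, String last,
                                  String email, String mobile, Float value) {
            PreparedStatement ps = session.prepare(
                "INSERT INTO ks.my_table (firstname, lastname, email, mobile, date, value) "
              + "VALUES (?, ?, ?, ?, now(), ?)");
            // Bind only the first four markers; with protocol v4 (Cassandra 2.2+)
            // the unbound 'value' marker stays UNSET and writes no tombstone,
            // whereas binding null here would write one.
            BoundStatement bs = ps.bind(first, last, email, mobile);
            if (value != null) {
                bs.setFloat(4, value);
            }
            session.execute(bs);
        }
    }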

Cassandra Cluster - Specific Node - specific table high Dropped Mutations

杀马特。学长 韩版系。学妹 submitted on 2019-12-06 14:02:44
My compression strategy in production was LZ4 compression, but I modified it to Deflate. For the compression change we had to use nodetool upgradesstables to forcefully rewrite all SSTables with the new compression strategy. But once the upgradesstables command completed on all 5 nodes in the cluster, my requests started to fail, both reads and writes. The issue is traced to a specific node out of the 5-node cluster, and to a specific table on that node. My whole cluster has roughly the same amount of data and configuration, but this 1 node in particular is misbehaving. Output of nodetool status: |/ State…
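For reference, a compression change like this is applied with an ALTER TABLE, after which only newly written SSTables pick up the new compressor until the old ones are rewritten. A minimal sketch issued through the Java driver 3.x; the ks.tbl name and chunk length are illustrative:

    import com.datastax.driver.core.Session;

    public class ChangeCompression {
        public static void toDeflate(Session session) {
            // New SSTables use Deflate immediately; existing ones are only
            // rewritten by normal compaction or by `nodetool upgradesstables -a`.
            session.execute(
                "ALTER TABLE ks.tbl WITH compression = "
              + "{'class': 'DeflateCompressor', 'chunk_length_in_kb': 64}");
        }
    }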

ttl in cassandra creating tombstones

点点圈 submitted on 2019-12-06 09:50:51
I am only doing inserts into Cassandra. While inserting, only non-null values are inserted, to avoid tombstones. But a few records are inserted with a TTL. Doing select count(*) from the table then gives the following warning:

    Read 76 live rows and 1324 tombstone cells for query SELECT * FROM xx.yy WHERE token(y) >= token(fc872571-1253-45a1-ada3-d6f5a96668e8) LIMIT 100 (see tombstone_warn_threshold)

Do TTL inserts lead to tombstones in Cassandra 3.7? How can the warning be mitigated? There are no updates done, only inserts; some records without a TTL, others with a TTL. From the DataStax documentation: https://docs…
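They do: a cell written with a TTL becomes a tombstone the moment it expires, and that tombstone is only purged by compaction once gc_grace_seconds has passed, so TTL-heavy tables can trip this warning even with an insert-only workload. A minimal sketch of the pattern in question via the Java driver 3.x; the value column and the 86400-second TTL are illustrative:

    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import java.util.UUID;

    public class TtlInsert {
        public static void insert(Session session, UUID y, float value) {
            // After 86400 s this cell expires and is counted as a tombstone
            // by reads until compaction drops it (post gc_grace_seconds).
            session.execute(new SimpleStatement(
                "INSERT INTO xx.yy (y, value) VALUES (?, ?) USING TTL 86400",
                y, value));
        }
    }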

Performance of token range based queries on partition keys?

安稳与你 submitted on 2019-12-06 03:47:55
I am selecting all records from my Cassandra nodes based on the token ranges of my partition key. Below is the code:

    public static synchronized List<Object[]> getTokenRanges(final Session session) {
        if (cluster == null) {
            cluster = session.getCluster();
        }
        Metadata metadata = cluster.getMetadata();
        return unwrapTokenRanges(metadata.getTokenRanges());
    }

    private static List<Object[]> unwrapTokenRanges(Set<TokenRange> wrappedRanges) {
        final int tokensSize = 2;
        List<Object[]> tokenRanges = new ArrayList<>();
        for (TokenRange tokenRange : wrappedRanges) {
            List<TokenRange> unwrappedTokenRangeList =…
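For context, the usual way these ranges are consumed is one token-bounded SELECT per unwrapped range, which splits a full-table scan into many small queries that each hit the replicas owning that slice of the ring. A minimal sketch with the Java driver 3.x; ks.tbl and the partition key column pk are illustrative names:

    import com.datastax.driver.core.*;

    public class TokenRangeScan {
        public static void scan(Session session) {
            Metadata metadata = session.getCluster().getMetadata();
            PreparedStatement ps = session.prepare(
                "SELECT * FROM ks.tbl WHERE token(pk) > ? AND token(pk) <= ?");
            for (TokenRange range : metadata.getTokenRanges()) {
                // Ranges that wrap around the ring's minimum token must be
                // unwrapped into contiguous sub-ranges before querying.
                for (TokenRange subrange : range.unwrap()) {
                    BoundStatement bs = ps.bind()
                        .setToken(0, subrange.getStart())
                        .setToken(1, subrange.getEnd());
                    for (Row row : session.execute(bs)) {
                        // process row
                    }
                }
            }
        }
    }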

Cassandra failure during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded, 1 failed)

时光毁灭记忆、已成空白 submitted on 2019-12-05 10:19:46
Below is my script:

    CREATE TABLE alrashed.tbl_alerts_details (
        alert_id int,
        action_required int,
        alert_agent_id int,
        alert_agent_type_id int,
        alert_agent_type_name text,
        alert_definer_desc text,
        alert_definer_name text,
        alert_source text,
        alert_state text,
        col_1 text,
        col_2 text,
        col_3 text,
        col_4 text,
        col_5 text,
        current_escalation_level text,
        date_part date,
        device_id text,
        driver map<text, text>,
        is_processed int,
        is_real_time int,
        location map<text, text>,
        seq_no int,
        severity text,
        time_stamp timestamp,
        transporter map<text, text>,
        transporter_name text,
        trip_id int,
        updated_on timestamp…
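For reference, the LOCAL_ONE in the error is the consistency level carried by the failing statement (or the driver's default), and the "1 failed" part means the single replica that was contacted errored while serving the read (commonly a tombstone overwhelm or a crashed replica), not that the level was too strict. A minimal sketch of setting the level explicitly with the Java driver 3.x, assuming alert_id is the partition key (the schema above is truncated before its PRIMARY KEY clause):

    import com.datastax.driver.core.*;

    public class ReadAlerts {
        public static ResultSet read(Session session, int alertId) {
            // Pin the consistency level on the statement rather than
            // relying on the driver-wide default.
            Statement stmt = new SimpleStatement(
                "SELECT * FROM alrashed.tbl_alerts_details WHERE alert_id = ?",
                alertId)
                .setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
            return session.execute(stmt);
        }
    }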

Apache Cassandra upgrade 3.X from 2.1

与世无争的帅哥 submitted on 2019-12-03 16:20:19
Is it possible to upgrade Apache Cassandra 2.1.9+ to Apache Cassandra 3.1+ directly? The release notes for 3.0 mention that direct upgrades require a minimum of Apache Cassandra 2.1.9, but the notes for later releases don't say whether an intermediate version is needed. Yes, you can upgrade from Cassandra 2.1.9 (or higher) to Cassandra 3.1 (or higher). As stated in the DataStax dev blog in June of 2015, Cassandra moved to a "tick-tock" release cycle with version 3. You can get the details from the link, but the main point is that the release structure of 3.x is not the same…