cassandra-3.0

Retry policies in Cassandra using nodejs

孤者浪人 submitted on 2019-12-02 05:13:57
Question: I have finally written a retry policy for Cassandra in Node.js. I have a use case where, whenever only one replica node is available, I need to still allow reads and writes by downgrading my consistency to the minimum level. I have attached my updated retry code (DowngradeRetryPolicy in retry.js); could you please check the link and give your comments? https://gist.github.com/harigist/f74b29976702a84f8f37e1bf7b509e0e 1) What problems should I expect in using this retry policy? 2) Anything
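The question's code is Node.js, but the core decision such a policy makes can be sketched in a few lines of Java: pick the strongest consistency level that the currently alive replicas can still satisfy, bottoming out at ONE when a single replica is up. All names here (the enum, `downgrade`, the RF=3 assumption) are illustrative, not the driver's actual retry-policy API.

```java
// Sketch of the downgrade decision a "downgrading" retry policy makes when
// fewer replicas are alive than the requested consistency level needs.
// Assumes replication factor 3; names are illustrative, not the driver API.
public class DowngradeSketch {
    enum ConsistencyLevel { ALL, QUORUM, TWO, ONE } // strongest to weakest

    // Minimum replicas each level needs when RF = 3.
    static int required(ConsistencyLevel cl) {
        switch (cl) {
            case ALL:    return 3;
            case QUORUM: return 2;
            case TWO:    return 2;
            default:     return 1;
        }
    }

    // Weaken the requested level only as far as the alive replicas demand;
    // with a single replica up this bottoms out at ONE.
    static ConsistencyLevel downgrade(ConsistencyLevel requested, int aliveReplicas) {
        for (ConsistencyLevel cl : ConsistencyLevel.values()) {
            if (cl.ordinal() >= requested.ordinal() && required(cl) <= aliveReplicas) {
                return cl;
            }
        }
        return ConsistencyLevel.ONE; // last resort, as in the question's use case
    }

    public static void main(String[] args) {
        // QUORUM requested but only one replica alive -> downgraded to ONE.
        System.out.println(downgrade(ConsistencyLevel.QUORUM, 1));
    }
}
```

One known problem with this approach (and a fair answer to question 1): a downgraded write acknowledged at ONE gives weaker durability than the caller asked for, silently.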

Tombstone vs nodetool and repair

自作多情 submitted on 2019-12-02 04:24:22
I inserted 10K entries, each with a TTL of 1 minute, into a single partition of a Cassandra table. After the inserts succeeded, I tried to read all the data back from that partition, but it failed with errors like the following:

WARN [ReadStage-2] 2018-04-04 11:39:44,833 ReadCommand.java:533 - Read 0 live rows and 100001 tombstone cells for query SELECT * FROM qcs.job LIMIT 100 (see tombstone_warn_threshold)
DEBUG [Native-Transport-Requests-1] 2018-04-04 11:39:44,834 ReadCallback.java:132 - Failed; received 0 of 1 responses
ERROR [ReadStage-2] 2018-04-04 11:39:44,836 StorageProxy.java:1906 - Scanned
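What the log lines describe is Cassandra's tombstone guard: every TTL'd cell becomes a tombstone after it expires, and a scan that touches more tombstones than the configured thresholds is first warned about, then aborted. A minimal sketch of that classification, using the default threshold values from cassandra.yaml (the method name is illustrative):

```java
// Sketch of the guard Cassandra applies while scanning a partition: once the
// tombstone count passes the warn/failure thresholds (cassandra.yaml:
// tombstone_warn_threshold, tombstone_failure_threshold), the read is warned
// about or aborted. Thresholds below are the defaults.
public class TombstoneScanSketch {
    static final int WARN_THRESHOLD = 1_000;      // default tombstone_warn_threshold
    static final int FAILURE_THRESHOLD = 100_000; // default tombstone_failure_threshold

    // Classify a scan that read the given live-row and tombstone counts.
    static String classify(int liveRows, int tombstones) {
        if (tombstones > FAILURE_THRESHOLD) return "abort"; // read fails entirely
        if (tombstones > WARN_THRESHOLD)    return "warn";  // the WARN line above
        return "ok";
    }

    public static void main(String[] args) {
        // The question's scan: 0 live rows, 100001 tombstones -> aborted read.
        System.out.println(classify(0, 100_001));
    }
}
```

The 100001 tombstones in the question sit just past the failure threshold, which matches the "received 0 of 1 responses" failure that follows the warning.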

Cassandra 3.0 latency statistic incorrect

落爺英雄遲暮 submitted on 2019-12-02 04:12:11
I have set up a new Cassandra 3.3 cluster and use jvisualvm to monitor Cassandra read/write latency through MBeans (JMX metrics). The reported read/write latency has stayed stable on all nodes for many weeks, even though the read/write request volume in that cluster varies normally (heavier on some days, lighter on others). When I use jvisualvm to monitor a Cassandra 2.0 cluster, the read/write latency behaves normally: it moves with the read/write request load. Why are the read/write latency statistics of Cassandra 3.0+ always stable? I think this result is incorrect. (I have load tested in
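One possible explanation (an assumption here, not a confirmed diagnosis of the 3.3 metrics) is that the attribute being charted is a since-startup aggregate rather than a recency-weighted one: after weeks of samples, even a heavy day barely moves a cumulative average. A generic illustration of that effect, unrelated to Cassandra's actual metrics code:

```java
// Why a since-startup mean looks flat: after many samples, a new burst
// barely moves the cumulative average. Generic illustration only; which
// Cassandra 3.x MBean attributes are cumulative vs recency-weighted depends
// on the attribute you monitor (e.g. Mean vs the percentile attributes).
public class CumulativeMeanSketch {
    static double cumulativeMean(double[] samples) {
        double sum = 0;
        for (double s : samples) sum += s;
        return sum / samples.length;
    }

    public static void main(String[] args) {
        // A long stretch of 1ms reads followed by a heavy burst of 10ms reads:
        double[] samples = new double[800_000];
        for (int i = 0; i < 700_000; i++) samples[i] = 1.0;
        for (int i = 700_000; i < 800_000; i++) samples[i] = 10.0;
        // The burst shifts the cumulative mean only from 1.0 to about 2.1 ms,
        // so a chart of this attribute in jvisualvm looks almost unchanged.
        System.out.println(cumulativeMean(samples));
    }
}
```

If this is the cause, charting a percentile attribute (or deltas between successive reads) would show the movement the 2.0 cluster showed.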

Prevent tombstones creation

谁都会走 submitted on 2019-12-02 04:02:14
I need to perform an insert into a Cassandra table without creating tombstones for any column. I am using a query similar to this: insert into my_table(col1,col2,col3) values(val1,val2,null), where col1, col2 and col3 are all the attributes of my_table. Is there any solution or workaround to prevent tombstone creation for, say, col3, apart from passing only non-null attributes in the query and letting Cassandra leave the remaining attributes unset?

Answer: Don't include col3 in your insert and it just won't set anything: insert into my_table(col1,col2) values(val1,val2). If curious about structure on
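The answer's workaround can be automated on the client side: build the INSERT from only the columns whose values are non-null, so a null is never written (and thus never becomes a tombstone). The table and column names below come from the question; the builder itself is an illustrative sketch, not a driver API.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Build an INSERT statement that skips null-valued columns entirely,
// so Cassandra never writes a tombstone for the omitted ones.
public class NonNullInsertSketch {
    static String buildInsert(String table, Map<String, Object> columns) {
        // Keep only non-null columns, preserving insertion order.
        Map<String, Object> nonNull = columns.entrySet().stream()
                .filter(e -> e.getValue() != null)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                        (a, b) -> a, LinkedHashMap::new));
        String cols = String.join(",", nonNull.keySet());
        String marks = nonNull.keySet().stream().map(k -> "?")
                .collect(Collectors.joining(","));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + marks + ")";
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("col1", "val1");
        row.put("col2", "val2");
        row.put("col3", null); // would become a tombstone if bound as null
        System.out.println(buildInsert("my_table", row));
        // -> INSERT INTO my_table (col1,col2) VALUES (?,?)
    }
}
```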

Is cassandra unable to store relationships that cross partition size limit?

有些话、适合烂在心里 submitted on 2019-12-02 03:54:47
I've noticed that relationships cannot be properly stored in C* because of its ~100MB recommended partition size limit, and denormalization doesn't help in this case. The fact that C* can have 2B cells per partition doesn't help either: 2B cells of just Longs already come to 16GB. Doesn't that exceed the 100MB partition size limit? This is what I don't understand in general: C* proclaims it can hold 2B cells per partition, yet a partition should not exceed ~100MB? What is the idiomatic way to do this? People say this is an ideal use case for TitanDB or JanusGraph, which scale well to billions of vertices and edges. How do these databases
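The 2B-cells figure is a hard format limit, not a sizing target; the idiomatic answer to the question is bucketing: add a synthetic bucket component to the partition key so one logical entity's relationships spread over many bounded partitions. A minimal sketch (the bucket count and key names are illustrative assumptions):

```java
// Sketch of the standard bucketing workaround: a huge logical partition is
// split into fixed buckets by adding a bucket component to the partition
// key, keeping each physical partition under the ~100MB guidance.
// BUCKETS and the key names are illustrative.
public class PartitionBucketSketch {
    static final int BUCKETS = 64;

    // Partition key becomes (vertexId, bucket); edges of one vertex spread
    // across BUCKETS partitions instead of one unbounded partition.
    static int bucketFor(String vertexId, String edgeId) {
        return Math.floorMod((vertexId + "|" + edgeId).hashCode(), BUCKETS);
    }

    public static void main(String[] args) {
        // Reading all edges of user42 then means querying buckets 0..63.
        System.out.println(bucketFor("user42", "follows:user7"));
    }
}
```

The trade-off is that reading the whole relationship set now fans out over all buckets, which is exactly the kind of bookkeeping graph databases layered on such stores handle for you.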

Cassandra upgrade from 2.0.x to 2.1.x or 3.0.x

风流意气都作罢 submitted on 2019-12-01 12:18:15
I've searched for previous versions of this question, but none seem to fit my case. I have an existing Cassandra cluster running 2.0.x. I've been allocated new VMs, so I do NOT want to upgrade my existing Cassandra nodes in place; rather, I want to migrate to a) new VMs and b) a more current version of Cassandra. I know that for in-place upgrades I would upgrade to the latest 2.0.x, then to the latest 2.1.x; AFAIK there is no SSTable inconsistency there. If I go this route via the addition of new nodes, I assume I would follow the DataStax instructions for adding new nodes/decommissioning old nodes? Given the

How to store java.sql.Date in cassandra date field using mapping manager?

大憨熊 submitted on 2019-12-01 01:09:28
Can someone help me store the current system date in a Cassandra date column, in the format yyyy-mm-dd, using Java? I get an exception while saving a java.sql.Date using MappingManager. My sample program is:

Test.java

import com.datastax.driver.mapping.annotations.Table;
import java.sql.Date;

@Table(keyspace = "testing", name = "test")
public class Test {
    private String uid;
    private Date rece;

    public String getUid() { return uid; }
    public void setUid(String uid) { this.uid = uid; }
    public Date getRece() { return rece; }
    public void setRece(Date rece) { this.rece = rece; }
}

TestDao.java Mapper
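In the DataStax Java driver 3.x, the CQL date type maps to com.datastax.driver.core.LocalDate, not java.sql.Date, which is the usual reason MappingManager rejects an entity like this. A driver-free sketch of the conversion step; the driver object itself would then be built with LocalDate.fromYearMonthDay (assumed here from the 3.x API):

```java
import java.sql.Date;
import java.time.LocalDate;

// Convert java.sql.Date into the year/month/day pieces the DataStax driver's
// own LocalDate type wants, via java.time. Driver-free sketch: swap the
// int[] for com.datastax.driver.core.LocalDate.fromYearMonthDay(y, m, d)
// when the driver is on the classpath.
public class DateConversionSketch {
    // java.sql.Date -> {year, month, day}, i.e. the yyyy-mm-dd components.
    static int[] toYearMonthDay(Date sqlDate) {
        LocalDate d = sqlDate.toLocalDate();
        return new int[]{d.getYear(), d.getMonthValue(), d.getDayOfMonth()};
    }

    public static void main(String[] args) {
        Date today = new Date(System.currentTimeMillis());
        int[] ymd = toYearMonthDay(today);
        System.out.printf("%04d-%02d-%02d%n", ymd[0], ymd[1], ymd[2]);
    }
}
```

With this mapping, the entity field would be declared as the driver's LocalDate type instead of java.sql.Date, and MappingManager can save it without a custom codec.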

Getting sometimes NullPointerException while saving into cassandra

老子叫甜甜 submitted on 2019-11-30 09:58:33
Question: I have the following method to write into Cassandra. Sometimes it saves the data fine; when I run it again, it sometimes throws a NullPointerException. I am not sure what is going wrong here... can you please help me?

@throws(classOf[IOException])
def writeDfToCassandra(o_model_family: DataFrame, keyspace: String, columnFamilyName: String) = {
  logger.info(s"writeDfToCassandra")
  o_model_family.write.format("org.apache.spark.sql.cassandra")
    .options(Map(
      "table" -> columnFamilyName,
      "keyspace" -> keyspace
    ))
    .mode(SaveMode.Append)
    .save()
}

18/10/29 05:23:56 ERROR BMValsProcessor: java.lang
