cassandra-2.0

Doesn't Cassandra perform “late” replication when a node goes down and comes back up?

孤街浪徒 submitted on 2019-12-24 16:22:47
Question: I have a 2-node Cassandra cluster with a replication factor of 2. The client sends data to node 1 only. If both nodes are running, the data is replicated from node 1 to node 2. However, if I start only node 1, have the client send data to node 1 and then stop, and only afterwards start node 2, I expect the data to be replicated "late" (asynchronously) from node 1 to node 2, but it is not. How can I configure this to work? My Cassandra version is 2.1.6. Answer 1: Whenever a node is down while
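The behavior the asker expects is normally covered by hinted handoff: while a replica is down, the coordinator stores hints and replays them when the node returns, but only if the outage is shorter than `max_hint_window_in_ms` (3 hours by default); beyond that, `nodetool repair` is needed. A minimal sketch of that decision, assuming the default window (this is illustrative logic, not a Cassandra API):

```python
# Which mechanism brings a recovered replica up to date, given how long
# it was down. Assumes Cassandra's default max_hint_window_in_ms.

DEFAULT_HINT_WINDOW_MS = 3 * 60 * 60 * 1000  # 3 hours, the shipped default

def recovery_mechanism(downtime_ms, hint_window_ms=DEFAULT_HINT_WINDOW_MS):
    """Return the mechanism that catches a recovered replica up."""
    if downtime_ms <= hint_window_ms:
        # Coordinators kept hints for the down node; they replay
        # automatically once it rejoins the ring.
        return "hinted handoff"
    # Hints older than the window are discarded; anti-entropy repair
    # (`nodetool repair`) must be run to re-sync the data.
    return "nodetool repair"

print(recovery_mechanism(10 * 60 * 1000))      # 10-minute outage
print(recovery_mechanism(5 * 60 * 60 * 1000))  # 5-hour outage
```

In the asker's scenario, if node 2 is brought up within the hint window and hinted handoff is enabled, the "late" replication happens on its own; otherwise a repair is required.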

sstableloader does not exit after successful data loading

夙愿已清 submitted on 2019-12-24 13:14:06
Question: I'm trying to bulk-load my data into DSE, but sstableloader doesn't exit after a successful run. According to the output, the progress for each node is already 100%, and the total progress also shows 100%. Environment: CentOS 6.x x86_64; DSE 4.0.1. Topology: 1 Cassandra node, 5 Solr nodes (DC auto-assigned by DSE); RF 2. System ulimit (hard, soft) on each DSE node: 65536. sstableloader heap size (-Xmx): 10240M (10 GB). SSTables size: 158 GB (from an 80 GB CSV, 241M rows). I tried to take down all nodes -

Is a 4-node setup in Cassandra the same as a 3-node setup?

自作多情 submitted on 2019-12-24 08:58:47
Question: I have a 4-node setup in Cassandra and decided to go with the following configuration, but people are saying this will be the same as a 3-node setup, so could somebody please shed some light on why? Nodes = 3, Replication Factor = 2, Write Consistency = 2, Read Consistency = 1. Nodes = 4, Replication Factor = 3, Write Consistency = 3, Read Consistency = 1. As I understand it, Nodes = 4 tolerates two node failures, so it is beneficial to have an RF of 3, but people are saying RF = 2 will be same as
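The arithmetic behind the "same as" claim: a write needs acks from W replicas out of RF, so it survives only RF − W replica failures. Both quoted configurations set W = RF, so neither tolerates even one replica being down at write time, which is why they behave alike in practice. A small sketch of that model (simplified; ignores coordinators, datacenters, and hinted handoff):

```python
# Availability arithmetic for Cassandra-style tunable consistency.
# Simplified model: a write needs W replica acks, a read needs R.

def write_failures_tolerated(rf, w):
    """Replica failures a write can survive: RF - W."""
    return rf - w

def read_failures_tolerated(rf, r):
    """Replica failures a read can survive: RF - R."""
    return rf - r

def strongly_consistent(rf, w, r):
    """R + W > RF guarantees read and write quorums overlap."""
    return r + w > rf

# Config A: RF=2, W=2, R=1  and  Config B: RF=3, W=3, R=1
for rf, w, r in [(2, 2, 1), (3, 3, 1)]:
    print(rf, w, r,
          write_failures_tolerated(rf, w),   # 0 in both cases
          strongly_consistent(rf, w, r))     # True in both cases
```

A config such as RF=3, W=2, R=2 would keep the consistency guarantee while tolerating one replica failure on both reads and writes, which is usually what a 4-node cluster is after.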

insert into column family with case-sensitive column names

不问归期 submitted on 2019-12-24 07:41:32
Question: I am using the following Cassandra/CQL versions: [cqlsh 4.0.1 | Cassandra 2.0.1 | CQL spec 3.1.1 | Thrift protocol 19.37.0]. I am trying to insert data into a pre-existing column family with case-sensitive column names, and I hit "unknown identifier" errors when trying to insert data. This is how the column family is described: CREATE TABLE "Sample_List_CS" ( key text, column1 text, "fName" text, "ipSubnet" text, "ipSubnetMask" text, value text, PRIMARY KEY (key, column1) ) WITH COMPACT STORAGE AND
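The usual cause of "unknown identifier" here: CQL lowercases unquoted identifiers, so a column created as `"fName"` must also be double-quoted in the INSERT. A sketch of a hypothetical helper that builds such a statement (the table and column names are taken from the question; the helper itself is illustrative, not part of any driver):

```python
# CQL folds unquoted identifiers to lowercase; case-sensitive names
# must be wrapped in double quotes everywhere they appear.

def quote_ident(name):
    """Double-quote an identifier if it contains uppercase characters."""
    if name != name.lower():
        return '"%s"' % name
    return name

def build_insert(table, columns):
    """Build a parameterized CQL INSERT with properly quoted identifiers."""
    cols = ", ".join(quote_ident(c) for c in columns)
    params = ", ".join("?" for _ in columns)
    return "INSERT INTO %s (%s) VALUES (%s)" % (quote_ident(table), cols, params)

print(build_insert("Sample_List_CS", ["key", "column1", "fName", "ipSubnet"]))
# INSERT INTO "Sample_List_CS" (key, column1, "fName", "ipSubnet") VALUES (?, ?, ?, ?)
```

Writing `INSERT INTO Sample_List_CS (key, column1, fName, ...)` without the quotes makes Cassandra look for a column literally named `fname`, which does not exist.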

writetime of a Cassandra row in Spark

纵然是瞬间 submitted on 2019-12-24 01:47:26
Question: I'm using Spark with Cassandra, and I want to select the writetime of my rows from my Cassandra table. This is my query: val lines = sc.cassandraTable[(String, String, String, Long)](CASSANDRA_SCHEMA, table).select("a", "b", "c", "writeTime(d)").count() but it displays this error: java.io.IOException: Column channal not found in table test.mytable I've also tried this query: val lines = sc.cassandraTable[(String, String, String, Long)](CASSANDRA_SCHEMA, table).select("a", "b", "c",

Is Cassandra for OLAP or OLTP or both?

☆樱花仙子☆ submitted on 2019-12-23 17:55:44
Question: Cassandra does not comply with ACID like an RDBMS; it is characterized by CAP. Cassandra picks AP out of CAP and leaves consistency tuning to the user. I definitely cannot use Cassandra for core banking transactions because C* is eventually consistent, but Cassandra writes are extremely fast, which is good for OLTP. I can use C* for OLAP because reads are extremely fast, which is good for reporting too. So I understand that C* is good only when your application does not need your data to be consistent for some
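"Leaves consistency tuning to the user" means consistency is chosen per operation, not per database: each read or write names a consistency level that maps to a number of replica acks for the keyspace's RF. A sketch of that mapping for the common levels (simplified model covering single-datacenter levels only):

```python
# How Cassandra's per-query consistency levels translate into replica
# acknowledgements for a given replication factor (common levels only).

def required_acks(level, rf):
    levels = {
        "ONE": 1,
        "TWO": 2,
        "QUORUM": rf // 2 + 1,  # a majority of replicas
        "ALL": rf,
    }
    return levels[level]

# A banking-style workload with RF=3 could use QUORUM reads and QUORUM
# writes: the two quorums always overlap, so reads see the latest write.
rf = 3
r = required_acks("QUORUM", rf)
w = required_acks("QUORUM", rf)
print(r, w, r + w > rf)
```

So the OLTP-vs-OLAP question is less about Cassandra being "inconsistent" than about which queries can afford ONE (fast, eventually consistent) and which must pay for QUORUM or ALL.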

Unable to start cqlsh in Mac OS X?

情到浓时终转凉″ submitted on 2019-12-23 09:28:11
Question: I have installed Cassandra 2.0 successfully. When I try to start cqlsh with CQL 3, I get "no such option": cassandra -v 2.0.9 ./cqlsh -3 Usage: cqlsh [options] [host [port]] cqlsh: error: no such option: -3 Answer 1: Once Cassandra is started (I assume on localhost with default settings) you can connect using ./cqlsh localhost. If you want to start it with a specific (older) CQL version you can do ./cqlsh --cqlversion=X.Y.Z localhost, where X.Y.Z is the version (e.g. 3.1.0). Source: https://stackoverflow.com/questions

How to increment Cassandra Counter Column with phantom-dsl?

丶灬走出姿态 submitted on 2019-12-23 03:03:43
Question: Are there any examples of implementing the counter operation within phantom-dsl? I have checked: http://outworkers.com/blog/post/a-series-on-cassandra-part-3-advanced-features https://github.com/outworkers/phantom/wiki/Counter-columns https://github.com/outworkers/phantom/blob/develop/phantom-dsl/src/test/scala/com/websudos/phantom/tables/CounterTableTest.scala I'm essentially looking for a phantom-dsl version of this info: https://github.com/Netflix/astyanax/wiki/Working-with-counter-columns The
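Whatever DSL sits on top, a counter increment always compiles down to a CQL `UPDATE ... SET c = c + n` against a counter column (counters cannot be INSERTed or set to an absolute value). A sketch of the statement shape that any such DSL generates, with hypothetical helper, table, and column names:

```python
# Counter columns are only ever mutated relatively: UPDATE with c = c +/- n.
# The helper and names below are illustrative, not any library's API.

def counter_increment_cql(table, counter_col, key_col, delta=1):
    """Build the CQL UPDATE a counter increment/decrement compiles to."""
    op = "+" if delta >= 0 else "-"
    return ("UPDATE %s SET %s = %s %s %d WHERE %s = ?"
            % (table, counter_col, counter_col, op, abs(delta), key_col))

print(counter_increment_cql("page_views", "views", "page_id"))
# UPDATE page_views SET views = views + 1 WHERE page_id = ?
```

Knowing the target CQL makes it easier to spot the equivalent modifier in the phantom wiki's counter-column examples linked above.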

Unable to run Spark master in DSE 4.5, and the slaves file is missing

烂漫一生 submitted on 2019-12-22 14:51:14
Question: I have a 5-node DSE 4.5 cluster up and running. One of the 5 nodes is hadoop_enabled and spark_enabled, but the Spark master is not running. ERROR [Thread-709] 2014-07-02 11:35:48,519 ExternalLogger.java (line 73) SparkMaster: Exception in thread "main" org.jboss.netty.channel.ChannelException: Failed to bind to: /54.xxx.xxx.xxx:7077 Does anyone have any idea about this? I have also tried to export SPARK_LOCAL_IP, but that is not working either. The DSE documentation wrongly mentions that spark-env.sh

Apache Spark 1.5 with Cassandra: class cast exception

匆匆过客 submitted on 2019-12-22 12:07:42
Question: I use the following software: Cassandra 2.1.9; Spark 1.5; Java, using the Cassandra driver provided by DataStax; Ubuntu 12.04. When I run Spark locally using local[8], the program runs fine and data is saved into Cassandra. However, when I submit the job to the Spark cluster, the following exception is thrown: 16 Sep 2015 03:08:58,808 WARN [task-result-getter-0] (Logging.scala:71) TaskSetManager - Lost task 3.0 in stage 0.0 (TID 3, 192.168.50.131): java.lang.ClassCastException: cannot assign