cassandra-2.0

ResponseError: Expected 4 or 0 byte int

Submitted by 只谈情不闲聊 on 2019-12-12 08:01:56

Question: I am trying the Cassandra Node.js driver and am stuck on a problem while inserting a record; it looks like the Cassandra driver is not able to insert float values. Problem: when passing an int value for insertion into the db, the API gives the following error:

Debug: hapi, internal, implementation, error
ResponseError: Expected 4 or 0 byte int (8)
    at FrameReader.readError (/home/gaurav/Gaurav-Drive/code/nodejsWorkspace/cassandraTest/node_modules/cassandra-driver/lib/readers.js:291:13)
    at Parser.parseError (/home/gaurav
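The "(8)" in the error is the byte length the server actually received: Cassandra's `int` column type is a 4-byte big-endian signed integer on the wire, but the driver serialized the value as 8 bytes. A minimal sketch of the mismatch, using Python's `struct` for illustration (this is not the Node.js driver's actual code):

```python
import struct

# Cassandra's `int` type is a 4-byte big-endian signed integer on the
# wire. The server rejects any other length with
# "Expected 4 or 0 byte int (<actual length>)".

def encode_cql_int(value):
    """Encode an integer in Cassandra's 4-byte `int` wire format."""
    return struct.pack(">i", value)

four_bytes = encode_cql_int(42)
assert len(four_bytes) == 4          # what the server expects

# A plain JavaScript number can end up serialized as an 8-byte value,
# which the server then refuses for an `int` column:
eight_bytes = struct.pack(">q", 42)
assert len(eight_bytes) == 8         # hence "Expected 4 or 0 byte int (8)"
```

A common fix is to make each bound parameter serialize to the column's actual type, e.g. via the driver's parameter type hints; the exact mechanism depends on the driver version.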

How to set up Cassandra client-to-node encryption with the DataStax Java driver?

Submitted by 回眸只為那壹抹淺笑 on 2019-12-12 07:57:46

Question: I've set up node-to-node encryption on my Cassandra cluster. Now I want to set up client-to-node. According to this documentation, it should be as easy as taking the SSL certificate of my client and importing it into every node's truststore. I don't have such a certificate yet, but that is not my question. Since my client uses the DataStax Java driver, it seems that in order to enable SSL from the client side, when I am building the Cluster I should use the withSSL() method to enable SSL.
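The Java driver enables client-side TLS via the withSSL() method mentioned above. As a language-neutral sketch of what the client side needs (a TLS context that trusts the certificate the nodes present), here is a minimal stdlib example in Python; the file name `ca-cert.pem` is a hypothetical placeholder for the certificate exported from the nodes' keystores:

```python
import ssl

# Minimal sketch of the client half of client-to-node encryption.
# (The Java-driver equivalent is Cluster.builder().withSSL(...).)

def make_client_ssl_context(ca_cert_path=None):
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False           # nodes are often addressed by IP
    context.verify_mode = ssl.CERT_REQUIRED  # still validate the node's cert
    if ca_cert_path:
        # Trust the cluster's CA / node certificate, e.g. "ca-cert.pem"
        # exported from the node keystores (hypothetical file name).
        context.load_verify_locations(ca_cert_path)
    return context
```

Note that a *client* certificate is only needed if the nodes are configured to require client authentication (two-way SSL); for one-way SSL the client only has to trust the nodes' certificates.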

OpsCenter agent UnsupportedOperationException with PersistentHashMap

Submitted by 不问归期 on 2019-12-12 05:53:54

Question: I'm running Cassandra along with the OpsCenter agent, and I got the following error in the log when OpsCenter tries to get general and CF metrics:

INFO [jmx-metrics-1] 2015-08-02 21:55:20,555 New JMX connection (127.0.0.1:7199)
INFO [jmx-metrics-1] 2015-08-02 21:55:20,558 New JMX connection (127.0.0.1:7199)
ERROR [jmx-metrics-2] 2015-08-02 21:55:25,448 Error getting CF metrics
java.lang.UnsupportedOperationException: nth not supported on this type: PersistentArrayMap
    at clojure.lang.RT.nthFrom(RT
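The exception means the agent's Clojure code applied `nth` (positional access) to a map, which only works on indexed collections like vectors; this shape of error often appears when the agent and OpsCenter versions disagree about the metric data format, though that is an assumption here, not something the log confirms. A rough Python analogy (not the agent's actual code):

```python
# `nth` is positional access: fine on indexed collections (Clojure
# vectors, Python lists), unsupported on maps/dicts, mirroring
# "nth not supported on this type: PersistentArrayMap".

def nth(coll, i):
    """Positional lookup, like Clojure's nth on indexed collections."""
    if isinstance(coll, dict):
        raise TypeError("nth not supported on this type: " + type(coll).__name__)
    return coll[i]

assert nth(["a", "b", "c"], 1) == "b"   # vectors/lists: fine

failed = False
try:
    nth({"metric": 42}, 0)              # maps: the agent's error
except TypeError:
    failed = True
assert failed
```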

Why does spark-submit fail with “Failed to load class for data source: org.apache.spark.sql.cassandra” with Cassandra connector in --jars?

Submitted by 点点圈 on 2019-12-12 03:07:42

Question:
Spark version: 1.4.1
Cassandra version: 2.1.8
DataStax Cassandra Connector: 1.4.2-SNAPSHOT.jar

Command I ran:

./spark-submit --jars /usr/local/src/spark-cassandra-connector/spark-cassandra-connector-java/target/scala-2.10/spark-cassandra-connector-java-assembly-1.4.2-SNAPSHOT.jar --driver-class-path /usr/local/src/spark-cassandra-connector/spark-cassandra-connector-java/target/scala-2.10/spark-cassandra-connector-java-assembly-1.4.2-SNAPSHOT.jar --jars /usr/local/lib/spark-1.4.1/external/kafka
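One thing that stands out in the command is that `--jars` appears twice. spark-submit expects a single comma-separated list for `--jars`, and with many CLI parsers the last occurrence of a repeated option silently replaces the first, so the connector assembly may never reach the executors. An illustration with Python's argparse (not spark-submit itself; `connector.jar` and `kafka.jar` are placeholder names):

```python
import argparse

# Repeated single-value options keep only the last occurrence:
parser = argparse.ArgumentParser()
parser.add_argument("--jars")
args = parser.parse_args(["--jars", "connector.jar", "--jars", "kafka.jar"])
assert args.jars == "kafka.jar"   # connector.jar was silently discarded

# spark-submit instead wants one comma-separated list:
jars = ",".join(["connector.jar", "kafka.jar"])
assert jars == "connector.jar,kafka.jar"
```

So a likely fix is a single `--jars connector.jar,kafka.jar`-style argument listing both jars.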

When does the fetch from Cassandra happen?

Submitted by ε祈祈猫儿з on 2019-12-12 00:07:54

Question: I have an application that submits a job to the Spark master. But when I check the IP address executing the job, it displays my application's IP and not the Spark worker's IP. So, from what I understand, a call on an RDD causes a Spark worker to do the work. But my question is this:

CassandraSQLContext c = new CassandraSQLContext(sc);
QueryExecution q = c.executeSql(cqlCommand); //-----1
q.toRDD().count(); //----2

I saw the worker doing something for 2 but nothing for 1. So does this mean fetch
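That observation matches Spark's evaluation model: building a query plan (line 1) is lazy and does no cluster work; an action such as `count()` (line 2) is what triggers the actual fetch on the workers. A Python generator gives the same shape of behavior, as an analogy only:

```python
# Line 1 (executeSql) is analogous to creating a generator: no work yet.
# Line 2 (count) is analogous to consuming it: the fetch happens now.

fetched = []

def scan_table(rows):
    """Stand-in for 'fetch from Cassandra': records each row it reads."""
    for row in rows:
        fetched.append(row)   # side effect marks the real work
        yield row

query = scan_table(["r1", "r2", "r3"])  # lazy: plan built, nothing fetched
assert fetched == []

count = sum(1 for _ in query)           # the "action" drives the fetch
assert count == 3
assert fetched == ["r1", "r2", "r3"]
```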

Strange behavior of timeuuid comparison

Submitted by ☆樱花仙子☆ on 2019-12-11 21:28:29

Question: I have a Cassandra 2.x cluster with 3 nodes and a db schema like this:

cqlsh> CREATE KEYSPACE test_ks WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 3} AND durable_writes = true;
cqlsh> CREATE TABLE IF NOT EXISTS test_ks.test_cf (
   ...     time timeuuid,
   ...     user_id varchar,
   ...     info varchar,
   ...     PRIMARY KEY (time, user_id)
   ... ) WITH compression = {'sstable_compression': 'LZ4Compressor'} AND compaction = {'class': 'LeveledCompactionStrategy'};

Let's add some data (wait some
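A frequent source of "strange" timeuuid comparisons is that a timeuuid is a version-1 UUID, ordered by its embedded 60-bit timestamp rather than by its string form (the high bits of the timestamp are stored after the low bits in the canonical text layout). Python's stdlib `uuid` module can illustrate the timestamp half:

```python
import uuid

# Cassandra's timeuuid is a version-1 UUID; ordering follows the
# embedded timestamp, which Python exposes as `.time` (100-ns intervals
# since the Gregorian epoch).

first = uuid.uuid1()
second = uuid.uuid1()

assert first.version == 1
assert second.time >= first.time  # generation order matches timestamp order

# Lexicographic comparison of str(first) and str(second) does NOT
# follow time order, because the timestamp's high bits come later in
# the text form - a common cause of "strange" comparison results.
```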

Regular expression search or LIKE-type feature in Cassandra

Submitted by 淺唱寂寞╮ on 2019-12-11 19:47:04

Question: I am using DataStax Cassandra ver 2.0. How do we search a Cassandra column for a value using a regular expression? Is there a way to achieve 'LIKE' (as in SQL) functionality? I have created a table with the schema below:

CREATE TABLE Mapping (
    id timeuuid,
    userid text,
    createdDate timestamp,
    createdBy text,
    lastUpdateDate timestamp,
    lastUpdateBy text,
    PRIMARY KEY (id, userid)
);

I inserted a few test records as below:

 id                                  | userid   | createdby
-------------------------------------+----------+-----------
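Cassandra 2.0 CQL has no LIKE or regex predicate (pattern search generally requires an external search integration, or later Cassandra versions). A common workaround is to fetch candidate rows and filter on the client; a sketch with made-up rows, expensive at scale since it pulls data to the client:

```python
import re

# Client-side filtering as a stand-in for SQL's `userid LIKE 'a%'`.
# The rows below are illustrative sample data, not from the question.

rows = [
    {"userid": "alice", "createdby": "admin"},
    {"userid": "bob",   "createdby": "alice"},
    {"userid": "carol", "createdby": "admin"},
]

pattern = re.compile(r"^a")  # LIKE 'a%'
matches = [r["userid"] for r in rows if pattern.search(r["userid"])]
assert matches == ["alice"]
```

When a pattern query is a primary access path, the usual Cassandra answer is to model it as its own lookup table rather than filter after the fact.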

Connecting to Cassandra nodes on a DataStax cluster on EC2 (Ruby on Rails)

Submitted by 天大地大妈咪最大 on 2019-12-11 17:46:25

Question: I created a DataStax Cassandra Enterprise cluster with 2 Cassandra nodes, 2 Search nodes and 2 Analytics nodes. Everything seems to work correctly EXCEPT that I can't connect to it from outside. If I'm on the node0 server I can run cassandra-cli and connect to the Cassandra nodes on port 9160, but when I tried to connect using the datastax-rails gem, I got "No live servers". I also tried DataStax DevCenter, which tries to connect on the native port 9042, but that also didn't work. I'm really puzzled; any help
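Working locally but failing from outside usually points at the EC2 security group or the node's rpc_address binding rather than Cassandra itself. A quick way to narrow it down is to probe whether the Thrift port (9160, used by cassandra-cli and the rails gem) and the native port (9042, used by DevCenter) are reachable from the client machine; a small illustrative probe:

```python
import socket

# Returns True if a plain TCP connection to host:port succeeds.
# Run this from the OUTSIDE machine; if it returns False there but the
# service answers locally, suspect the security group or rpc_address.

def port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. (hypothetical host name):
#   port_open("node0.example.com", 9160)  # Thrift: cassandra-cli, datastax-rails
#   port_open("node0.example.com", 9042)  # native protocol: DevCenter
```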

Large writes cause instability in Cassandra ring

Submitted by Deadly on 2019-12-11 16:23:34

Question: I'm attempting to load a large amount of data into a 10-node Cassandra ring. The script doing the inserts gets ~4000 inserts/s, presumably blocked on network I/O. I launch 8 of these on a single machine, and the throughput scales almost linearly. (The individual throughput goes down slightly, but is more than compensated for by the additional processes.) This works decently; however, I'm still not getting enough throughput, so I launched the same setup on 3 more VMs. (Thus, 8 processes * 4
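The pattern described, many concurrent writers to hide per-insert network latency, can be sketched as below; `do_insert` is a stand-in for the real statement execution, and the point is that aggregate throughput rises with in-flight requests only until the cluster (flush/compaction), not the client, becomes the bottleneck, which is typically where the instability starts:

```python
from concurrent.futures import ThreadPoolExecutor

# Each insert is latency-bound (a network round-trip), so running many
# in flight multiplies throughput - up to whatever the ring can absorb.

def do_insert(row):
    # The real network round-trip to Cassandra would happen here.
    return row

def load(rows, concurrency=8):
    """Run inserts with `concurrency` in-flight workers; return count."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sum(1 for _ in pool.map(do_insert, rows))

assert load(range(1000)) == 1000
```

Past that point, throttling the clients (or spreading load over time) tends to keep the ring stable at a higher sustained rate than pushing until nodes fall behind.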

Refresh metadata of cassandra cluster

Submitted by 走远了吗. on 2019-12-11 15:00:15

Question: I added nodes to a cluster which initially used the wrong network interface as listen_address. I fixed it by changing the listen_address to the correct IP. The cluster is running well with that configuration, but clients trying to connect to that cluster still receive the wrong IPs as metadata from the cluster. Is there any way to refresh the metadata of a cluster without decommissioning the nodes and setting up new ones again?

Answer 1: First of all, you may try to follow this advice: http://www.datastax