datastax-enterprise

Inserting special characters

丶灬走出姿态 submitted on 2019-12-11 01:35:11
Question: I'm trying to insert special characters into my Cassandra table, but the insert fails. As mentioned in the linked question "Inserting data in table with umlaut is not possible", I tried the suggestions there, but even though my character set is UTF-8 as described, I'm still not able to insert. I've also tried using quotes, which didn't work either.

    CREATE TABLE test.calendar (
        race_id int,
        race_start_date timestamp,
        race_end_date timestamp,
        race_name text,
        PRIMARY KEY (race_id, race_start_date, race_end_date)
    ) WITH CLUSTERING ORDER BY …
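For reference, parameterized inserts through a driver usually sidestep quoting and encoding problems entirely, since the text value never passes through the CQL string itself. A minimal sketch with the DataStax Python driver, assuming the test.calendar schema above (the contact point and the umlaut value are illustrative):

    # -*- coding: utf-8 -*-
    from datetime import datetime
    from cassandra.cluster import Cluster

    cluster = Cluster(['127.0.0.1'])   # hypothetical contact point
    session = cluster.connect('test')
    insert = session.prepare(
        "INSERT INTO calendar (race_id, race_start_date, race_end_date, race_name) "
        "VALUES (?, ?, ?, ?)")
    # race_name is bound as a parameter, so no CQL escaping or client-side
    # encoding tricks are needed; the driver sends it as UTF-8 text.
    session.execute(insert, (1, datetime(2019, 6, 1), datetime(2019, 6, 2),
                             u'Großer Preis von Österreich'))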

How to FILTER Cassandra TimeUUID/UUID in Pig

三世轮回 submitted on 2019-12-10 17:46:11
Question: Here is my Cassandra schema, using DataStax Enterprise:

    CREATE KEYSPACE applications WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
    USE applications;
    CREATE TABLE events (
        bucket text,
        id timeuuid,
        app_id uuid,
        event text,
        PRIMARY KEY (bucket, id)
    );

I want to FILTER in Pig by app_id (UUID) and id (TimeUUID). Here is my Pig script:

    events = LOAD 'cql://applications/events' USING CqlStorage()
             AS (bucket: chararray, id: chararray, app_id: chararray, event: chararray); …
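Background that may help frame the question (an assumption, not the poster's code): a timeuuid embeds its timestamp, so time-range filters are normally expressed by building boundary TimeUUIDs for the desired window. The DataStax Python driver exposes helpers for this:

    from datetime import datetime
    from cassandra.util import min_uuid_from_time, max_uuid_from_time

    # Smallest and largest possible TimeUUIDs for the given instants; any
    # timeuuid generated inside the window sorts between these two bounds.
    lower = min_uuid_from_time(datetime(2016, 1, 1).timestamp())
    upper = max_uuid_from_time(datetime(2016, 2, 1).timestamp())
    print(lower, upper)

Note that the Pig script above loads the columns as chararray, so any comparison there happens on the string form of the UUID rather than on time-ordered values, which is typically where TimeUUID filtering goes wrong.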

Strange exception in /var/log/cassandra/system.log

爷,独闯天下 submitted on 2019-12-10 15:39:34
Question: Unexpected errors are appearing in the Cassandra logs, and I haven't been able to trace down the underlying cause yet. Which component uses Netty, or is this a well-known problem? (I couldn't find any info.)

    INFO [SharedPool-Worker-1] 2016-05-18 13:47:41,004 Message.java:532 - Unexpected exception during request; channel = [id: 0xe93fe01e, /40.68.XX.XXX:50818 :> /10.1.XX.X:9042]
    io.netty.channel.unix.Errors$NativeIoException: readAddress() failed: Connection timed out
        at io.netty.channel.unix.Errors.newIOException …

LOCAL_ONE and unexpected data replication with Cassandra

爱⌒轻易说出口 submitted on 2019-12-10 12:27:25
Question: FYI, we are running this test with Cassandra 2.1.12.1047 | DSE 4.8.4. We have a simple table in Cassandra that has 5,000 rows of data in it. Some time back, as a precaution, we added monitoring on each Cassandra instance to ensure that it holds 5,000 rows of data, because our replication factor enforces this: we have 2 replicas in every region and 6 servers in total in our dev cluster.

    CREATE KEYSPACE example WITH replication = {'class': 'NetworkTopologyStrategy', 'ap-southeast-1-A' …
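One distinction worth keeping in mind here: a consistency level such as LOCAL_ONE only controls how many replicas must acknowledge a request before it succeeds; where the data ends up is governed entirely by the keyspace's replication settings. A brief illustration with the DataStax Python driver (contact point and table name are hypothetical):

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(['10.0.0.1'])    # hypothetical contact point
    session = cluster.connect('example')
    # LOCAL_ONE waits for a single replica in the local data center to
    # acknowledge; the write is still propagated to every replica
    # asynchronously, per the keyspace's replication factor.
    stmt = SimpleStatement(
        "INSERT INTO races (id, name) VALUES (1, 'test')",  # hypothetical table
        consistency_level=ConsistencyLevel.LOCAL_ONE)
    session.execute(stmt)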

Why did a DataStax Community Edition installation turn out to be Enterprise?

独自空忆成欢 submitted on 2019-12-10 12:12:10
Question: Update: OK, some more factors discovered. This image was taken after I clicked "manage existing cluster" and added 127.0.0.1 as the host node, so I guess there should be a configuration where I can set the package to Community Edition rather than Enterprise. But if I do "create new cluster", where I am able to pick the Community Edition package, the problem is that it tries to install cassandra and datastax-agent on these nodes and finishes with errors (dismiss and retry). While trying to fix a cassandra …

Issues with datastax spark-cassandra connector

南笙酒味 submitted on 2019-12-10 11:53:16
Question: Before I go ahead and explain the question, can anyone please tell me the difference between Spark SQL and CassandraSQLContext? I am trying to run Scala code on a Spark-Cassandra cluster (I don't want to create a jar just for testing). I have the following code, which runs some basic queries against Cassandra, but every time I run it I get the following error:

    java.lang.ClassNotFoundException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition

Even though I have mentioned for the …
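This particular ClassNotFoundException usually means the connector classes never reached the executors' classpath. One common remedy (a sketch, not a confirmed fix for this cluster; the connector coordinates are an assumption and must match your Spark and Scala versions) is to let Spark pull the connector at launch:

    spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.5 \
                --conf spark.cassandra.connection.host=10.0.0.1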

Unable to start the Solr aspect of DSE Search

自古美人都是妖i submitted on 2019-12-10 11:37:00
Question: I am unable to start the Solr aspect of DSE Search, and I get the following exception message when I execute bin/dse cassandra -s. When I run bin/dse cassandra, the Cassandra service starts, but not Solr. Does anyone have any guidance to offer? I know I have missed something.

    bin/dse cassandra -s message: Cannot start node if snitch's data center (Solr) differs from previous data center (Cassandra). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag …

What's the limit to spark streaming in terms of data amount?

99封情书 submitted on 2019-12-08 16:37:43
Question: I have tens of millions of rows of data. Is it possible to analyze all of them within a week or a day using Spark Streaming? What is the limit of Spark Streaming in terms of data volume? I am not sure what the upper limit is, or at what point I should write results out to my database, since the stream probably can't hold everything. I also have different time windows (1, 3, 6 hours, etc.) where I use window operations to separate the data. Please find my code below:

    conf = SparkConf().setAppName(appname)
    sc = …
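As a point of reference for the windowing part (a minimal sketch, not the poster's full job; the socket source, durations, and checkpoint path are illustrative), a windowed DStream in PySpark looks like this:

    from pyspark import SparkConf, SparkContext
    from pyspark.streaming import StreamingContext

    conf = SparkConf().setAppName("window-demo")
    sc = SparkContext(conf=conf)
    ssc = StreamingContext(sc, 10)              # 10-second micro-batches
    ssc.checkpoint("/tmp/window-demo")          # required by windowed ops
    lines = ssc.socketTextStream("localhost", 9999)
    # Count records over a sliding 1-hour window, recomputed every 10 minutes;
    # both durations must be multiples of the batch interval.
    counts = lines.countByWindow(3600, 600)
    counts.pprint()
    ssc.start()
    ssc.awaitTermination()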

One Cassandra vnode keeps moving back and forth all the time

久未见 submitted on 2019-12-08 12:41:19
Question: We are using vnodes in an 8-node datacenter. One of the nodes keeps moving its token range, and in doing so provokes timeout errors on the connected clients. Here is what we see in the OpsCenter events:

    4/13/2016, 10:51am Info Host 172.31.34.155 moved from '-1108852503760494577' to '8185241953623605265' ip-172-31-34-155
    4/13/2016, 10:46am Info Host 172.31.34.155 moved from '8185241953623605265' to '-1108852503760494577' ip-172-31-34-155
    4/13/2016, 10:43am Info Host 172.31.34.155 moved from ' …

Why is my Cassandra Prepared Statement Ingest of Data so slow?

限于喜欢 submitted on 2019-12-08 11:35:28
Question: I have a Java list of 100,000 names that I'd like to ingest into a 3-node Cassandra cluster running DataStax Enterprise 5.1 with Cassandra 3.10.0. My code ingests, but it takes a very long time. I ran a stress test on the cluster and was able to do over 25,000 writes per second; with my ingest code I am getting terrible performance of around 200 writes per second. My Java list has 100,000 names in it and is called myList. I use the following prepared statement and session execution to ingest the …
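The usual culprit for a gap like this is issuing one blocking execute() per row, which serializes the whole ingest on network round-trip latency. The question uses the Java driver, but the same idea is easy to see in the DataStax Python driver's concurrent-execution helper (a sketch; contact point, keyspace, and table are hypothetical):

    from cassandra.cluster import Cluster
    from cassandra.concurrent import execute_concurrent_with_args

    cluster = Cluster(['10.0.0.1'])    # hypothetical contact point
    session = cluster.connect('demo')  # hypothetical keyspace
    insert = session.prepare("INSERT INTO names (name) VALUES (?)")
    my_list = ["Alice", "Bob", "Carol"]  # stands in for the 100,000 names
    # Pipelines many in-flight requests instead of blocking on each write,
    # which is typically what closes the gap to stress-test throughput.
    results = execute_concurrent_with_args(
        session, insert, [(n,) for n in my_list], concurrency=100)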