datastax

Write timeout thrown by cassandra datastax driver

余生颓废 submitted on 2019-11-29 21:03:15
While doing a bulk load of data, incrementing counters based on log data, I am encountering a timeout exception. I'm using the DataStax 2.0-rc2 Java driver. Is this an issue with the server not being able to keep up (i.e. a server-side config issue), or is it the client getting bored waiting for the server to respond? Either way, is there an easy config change I can make that would fix this? Exception in thread "main" com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during write query at consistency ONE (1 replica were required but only 0 acknowledged the
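One client-side knob worth knowing about is the driver's per-request read timeout, which in the 2.0.x Java driver defaults to 12000 ms and should be at least as large as the server-side write_request_timeout_in_ms in cassandra.yaml. A minimal sketch (the contact point and timeout value here are placeholders, not values from the question):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.SocketOptions;

// Raise the client-side read timeout so slow counter batches are not
// abandoned by the driver before the server answers. If the server is
// genuinely overloaded, the matching server-side setting
// (write_request_timeout_in_ms in cassandra.yaml) needs raising too.
Cluster cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")          // placeholder contact point
    .withSocketOptions(new SocketOptions()
        .setReadTimeoutMillis(60000))      // driver default is 12000 ms
    .build();
```

Note that raising timeouts only hides back-pressure; throttling the bulk load's in-flight requests is usually the more durable fix.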

Sometimes getting a NullPointerException while saving into Cassandra

杀马特。学长 韩版系。学妹 submitted on 2019-11-29 18:07:52
I have the following method to write a DataFrame into Cassandra. Sometimes it saves the data fine, but when I run it again it sometimes throws a NullPointerException. I'm not sure what is going wrong here; can you please help me?

    @throws(classOf[IOException])
    def writeDfToCassandra(o_model_family: DataFrame, keyspace: String, columnFamilyName: String) = {
      logger.info(s"writeDfToCassandra")
      o_model_family.write.format("org.apache.spark.sql.cassandra")
        .options(Map("table" -> columnFamilyName, "keyspace" -> keyspace))
        .mode(SaveMode.Append)
        .save()
    }

18/10/29 05:23:56 ERROR BMValsProcessor: java.lang

Enable one-time Cassandra authentication and authorization check and cache it forever

混江龙づ霸主 submitted on 2019-11-29 12:34:36
I use authentication and authorization in my single-node Cassandra setup, but I frequently get the following error in the Cassandra server logs: ERROR [SharedPool-Worker-71] 2018-06-01 10:40:36,661 ErrorMessage.java:338 - Unexpected exception during request java.lang.RuntimeException: org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 1 responses. at org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:489) ~[apache-cassandra-3.0.8.jar:3.0.8] at org.apache.cassandra.auth.CassandraRoleManager.getRoles(CassandraRoleManager.java
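There is no supported "cache forever" switch, but a long validity window on the auth caches approximates it, so role and permission lookups are not re-read from system_auth on every request. A cassandra.yaml sketch (option names as in Cassandra 3.x; credentials_validity_in_ms requires 3.4+, and the exact set varies by version):

```yaml
# cassandra.yaml -- cache auth lookups for an hour (values in ms)
roles_validity_in_ms: 3600000
permissions_validity_in_ms: 3600000
credentials_validity_in_ms: 3600000
```

Also worth knowing: logging in as the default cassandra superuser queries system_auth at QUORUM, which is a common source of auth read timeouts; creating a separate superuser and disabling the default one is the usual recommendation.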

Does Cassandra support Java 10?

|▌冷眼眸甩不掉的悲伤 submitted on 2019-11-29 06:33:48
We're planning on migrating our environment from Java 8 to OpenJDK 10. Doing this on my local machine, I've found that Cassandra will no longer start for me, giving the following error: I can't find any solid information online that says it is definitely not supported. This post from 4 months ago suggests that they do not support Java 10, but it doesn't say so outright; it is more inference than confirmation. There is also a comment on it from another user saying they have managed to get it running on Java 11. The final comment on this ticket on DataStax says "We've updated our CI matrix to include Java 10
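For what it's worth, Cassandra 3.x officially supports only Java 8 (Java 11 support arrived with Cassandra 4.0), so the usual workaround is to keep a Java 8 runtime installed alongside the newer one and pin Cassandra to it. A sketch, assuming a Debian/Ubuntu-style OpenJDK 8 path (adjust for your distro):

```shell
# Pin Cassandra to the Java 8 runtime without changing the system default.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # path is an assumption
export PATH="$JAVA_HOME/bin:$PATH"
java -version    # should report 1.8.x
cassandra -f
```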

Mutation of 17076203 bytes is too large for the maxiumum size of 16777216

自闭症网瘾萝莉.ら submitted on 2019-11-29 02:30:43
I have "commitlog_segment_size_in_mb: 32" in the Cassandra settings, but the error below indicates the maximum size is 16777216, which is 16 MB. Am I looking at the correct setting for fixing the error below? I am referring to this setting based on the suggestion provided at http://mail-archives.apache.org/mod_mbox/cassandra-user/201406.mbox/%3C53A40144.2020808@gmail.com%3E I am using Cassandra 2.1.0-2. I am using KairosDB, and the write buffer max size is 0.5 MB. WARN [SharedPool-Worker-1] 2014-10-22 17:31:03,163 AbstractTracingAwareExecutorService.java:167 - Uncaught exception on thread
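It is the correct setting: Cassandra caps a single mutation at half the commitlog segment size, which is exactly where the 16777216 in the error comes from:

```python
# Max mutation size is commitlog_segment_size_in_mb / 2.
commitlog_segment_size_in_mb = 32
max_mutation_bytes = commitlog_segment_size_in_mb * 1024 * 1024 // 2
print(max_mutation_bytes)  # 16777216 -- matches the error message
```

So a 17076203-byte mutation needs commitlog_segment_size_in_mb of at least 64, though splitting the oversized write is generally preferable to growing the segment size.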

Spark Datastax Java API Select statements

喜你入骨 submitted on 2019-11-28 13:52:53
I'm following a tutorial from this GitHub repo to run Spark on Cassandra in a Java Maven project: https://github.com/datastax/spark-cassandra-connector . I've figured out how to use direct CQL statements, as I have previously asked a question about that here: Querying Data in Cassandra via Spark in a Java Maven Project. However, now I'm trying to use the DataStax Java API, out of concern that my original code from my original question will not work with the DataStax version of Spark and Cassandra. For some weird reason, it won't let me use .where even though it is outlined in the documentation that I can use that
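For reference, a minimal sketch of .where through the connector's Java API (keyspace, table, and column names here are invented; the predicate column must be part of the key or indexed, since .where is pushed down to Cassandra):

```java
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

import com.datastax.spark.connector.japi.CassandraRow;
import com.datastax.spark.connector.japi.rdd.CassandraTableScanJavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class WhereExample {
    public static long countMatching(JavaSparkContext sc) {
        // .where() appends a CQL predicate to the token-range scans,
        // so the filtering happens inside Cassandra, not in Spark.
        CassandraTableScanJavaRDD<CassandraRow> rows =
            javaFunctions(sc)
                .cassandraTable("my_keyspace", "my_table")  // invented names
                .where("user_id = ?", 42);
        return rows.count();
    }
}
```

A common cause of ".where won't compile" is calling it after a transformation such as map, which returns a plain Spark RDD; it is only available on the CassandraJavaRDD returned by cassandraTable.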

Use of the ORDER BY clause in Cassandra

浪子不回头ぞ submitted on 2019-11-28 13:41:42
When creating a table in Cassandra, we can give the clustering keys an ordering, like below:

    CREATE TABLE user (
      partitionkey int,
      id int,
      name varchar,
      age int,
      address text,
      insrt_ts timestamp,
      PRIMARY KEY (partitionkey, name, insrt_ts, id)
    ) WITH CLUSTERING ORDER BY (name ASC, insrt_ts DESC, id ASC);

When we insert data into that table, records are sorted by the clustering keys, as per the Cassandra documentation. When I retrieve records with CQL1 and CQL2, I get the same sorted order.

    CQL1: SELECT * FROM user WHERE partitionkey = 101;
    CQL2: SELECT * FROM user WHERE partitionkey = 101 ORDER BY

OperationTimedOut error in the cqlsh console of Cassandra

爱⌒轻易说出口 submitted on 2019-11-28 10:54:21
I have a three-node Cassandra cluster and a table with more than 2,000,000 rows. When I execute this query ( select count(*) from userdetails ) in cqlsh, I get this error: OperationTimedOut: errors={}, last_host=192.168.1.2 When I run count for fewer rows or with limit 50,000, it works fine. count(*) actually pages through all the data, so a select count(*) from userdetails without a limit would be expected to time out with that many rows. Some details here: http://planetcassandra.org/blog/counting-key-in-cassandra/ You may want to consider maintaining the count
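One way to maintain the count yourself (a sketch; the table and column names here are made up) is a counter table that the application bumps on every insert, turning the expensive full scan into a single-partition read:

```sql
CREATE TABLE userdetails_count (
  shard int PRIMARY KEY,
  total counter
);

-- bump on every write to userdetails
UPDATE userdetails_count SET total = total + 1 WHERE shard = 0;

-- cheap read instead of a full-scan count(*)
SELECT total FROM userdetails_count WHERE shard = 0;
```

For one-off counts, raising cqlsh's client-side timeout (the --request-timeout option, in seconds, on recent cqlsh versions) can also get a slow count(*) through.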

Can't start Cassandra after OS patch up

天大地大妈咪最大 submitted on 2019-11-28 08:26:21
When I try to start Cassandra after patching my OS, I get this error:

Exception (java.lang.AbstractMethodError) encountered during startup: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
java.lang.AbstractMethodError: org.apache.cassandra.utils.JMXServerUtils$Exporter.exportObject(Ljava/rmi/Remote;ILjava/rmi/server/RMIClientSocketFactory;Ljava/rmi/server/RMIServerSocketFactory;Lsun/misc/ObjectInputFilter;)Ljava/rmi/Remote;
at javax