
Enable one time Cassandra Authentication and Authorization check and cache it forever

怎甘沉沦 submitted on 2019-11-28 06:14:55

Question: I use authentication and authorization in my single-node Cassandra setup, but I frequently get the following error in the Cassandra server logs:

ERROR [SharedPool-Worker-71] 2018-06-01 10:40:36,661 ErrorMessage.java:338 - Unexpected exception during request
java.lang.RuntimeException: org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 1 responses.
    at org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:489) ~[apache-cassandra
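Caching cannot literally be "forever", but the role/permission lookups behind that timeout can be cached for long windows. A sketch of the relevant cassandra.yaml knobs (values below are illustrative assumptions, not recommendations):

```yaml
# cassandra.yaml sketch: cache authentication/authorization lookups so
# not every request reads the system_auth tables.
roles_validity_in_ms: 3600000        # cache role data for 1 hour (assumed value)
permissions_validity_in_ms: 3600000  # cache permission checks for 1 hour (assumed value)
# If the matching *_update_interval_in_ms options are also set, entries are
# refreshed asynchronously in the background instead of blocking requests.
```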

Cassandra CQL Select count with LIMIT

时间秒杀一切 submitted on 2019-11-28 00:42:30

I created a simple table:

CREATE TABLE test (
    "type" varchar,
    "value" varchar,
    PRIMARY KEY (type, value)
);

I inserted 5 rows into it:

INSERT INTO test (type, value) VALUES ('test', 'tag1');
INSERT INTO test (type, value) VALUES ('test', 'tag2');
INSERT INTO test (type, value) VALUES ('test', 'tag3');
INSERT INTO test (type, value) VALUES ('test', 'tag4');
INSERT INTO test (type, value) VALUES ('test', 'tag5');

I ran SELECT * FROM test LIMIT 3 and it works as expected:

 type | value
------+-------
 test | tag1
 test | tag2
 test | tag3

When I ran SELECT COUNT(*) FROM test LIMIT 3, it produces:

 count
-------
     5

Shouldn't it
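For context on why 5 appears: COUNT(*) collapses all matching rows into a single aggregate row, and LIMIT caps the rows returned to the client, not the rows counted, so a LIMIT of 3 never bites on a one-row result. A hedged CQL sketch (behavior has varied across Cassandra versions; very old releases reportedly applied LIMIT to the count itself):

```
-- COUNT(*) aggregates the matching rows into ONE result row;
-- LIMIT then caps that one-row result set, so it has no effect here.
SELECT COUNT(*) FROM test;           -- count = 5
SELECT COUNT(*) FROM test LIMIT 3;   -- still 5 on recent versions
```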

com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces

独自空忆成欢 submitted on 2019-11-27 21:08:25

I am trying to configure Spring Data with Cassandra, but I am getting the error below when my app is deployed in Tomcat. When I check the connection, the given port (127.0.0.1:9042) is reachable. I have included the stack trace and Spring configuration below. Does anyone have an idea about this error? Full stack trace:

2015-12-06 17:46:25 ERROR web.context.ContextLoader:331 - Context initialization failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cassandraSession': Invocation of init method failed; nested exception is com.datastax.driver.core
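"unconfigured table schema_keyspaces" usually means a driver built for Cassandra 2.x (which reads system.schema_keyspaces) is pointed at Cassandra 3.x+, where schema metadata moved to the system_schema keyspace. A sketch of the usual fix, assuming Maven is in use (the version number is an illustrative assumption):

```xml
<!-- pom.xml sketch: use a driver release that understands the
     Cassandra 3.x system_schema tables (version is an assumption) -->
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.1.0</version>
</dependency>
```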

Can't connect to Cassandra node from a different host

你。 submitted on 2019-11-27 20:41:05

I have a Cassandra node on a machine. When I access cqlsh from the same machine it works properly, but when I try to connect to its cqlsh using "192.x.x.x" from another machine, I get an error saying:

Connection error: ('Unable to connect to any servers', {'192.x.x.x': error(111, "Tried connecting to [('192.x.x.x', 9042)]. Last error: Connection refused")})

What is the reason for this? How can I fix it?

Answer (Nicola Ferraro): Probably the remote Cassandra node is not bound to the external network interface but to the loopback one (this is the default configuration). You can ensure this by using
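A sketch of the cassandra.yaml changes that bind the native transport to an external interface instead of loopback (addresses kept as the question's placeholder; exact option availability depends on the Cassandra version):

```yaml
# cassandra.yaml sketch: make the node reachable from other hosts.
rpc_address: 192.x.x.x        # address clients (cqlsh, drivers) connect to
# ...or rpc_address: 0.0.0.0 together with:
# broadcast_rpc_address: 192.x.x.x
listen_address: 192.x.x.x     # address used for inter-node traffic
# Restart the node after editing, then check the port is listening
# externally (e.g. with netstat or by retrying cqlsh 192.x.x.x 9042).
```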

Mutation of 17076203 bytes is too large for the maximum size of 16777216

穿精又带淫゛_ submitted on 2019-11-27 16:51:56

Question: I have "commitlog_segment_size_in_mb: 32" in the Cassandra settings, but the error below indicates the maximum size is 16777216 bytes, which is about 16 MB. Am I looking at the correct setting for fixing the error below? I am referring to this setting based on the suggestion provided at http://mail-archives.apache.org/mod_mbox/cassandra-user/201406.mbox/%3C53A40144.2020808@gmail.com%3E. I am using Cassandra 2.1.0-2 and KairosDB, whose write buffer max size is 0.5 MB.

WARN [SharedPool-Worker-1
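It is the correct setting: the limit in the message is derived from the commit log segment size, since the maximum single mutation is half a segment, and 32 * 1024 * 1024 / 2 = 16777216 bytes. A sketch of raising it (the value is illustrative; larger segments mean more commit log to replay on restart):

```yaml
# cassandra.yaml sketch: the max mutation size defaults to
# commitlog_segment_size_in_mb / 2, i.e. 32 MB segments -> 16777216 bytes.
# A ~17 MB mutation therefore needs segments larger than ~34 MB:
commitlog_segment_size_in_mb: 64   # illustrative value, not a recommendation
```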

TaskSchedulerImpl: Initial job has not accepted any resources;

丶灬走出姿态 submitted on 2019-11-27 13:45:42

Here is what I am trying to do. I have created a two-node DataStax Enterprise cluster, on top of which I have written a Java program to get the count of one table (a Cassandra database table). The program was built in Eclipse on a Windows box. When running it from Windows, it fails at runtime with the following error:

Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

The same code has been compiled and run successfully on those clusters without any issue. What could be the
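This message usually means the driver (here, the Windows/Eclipse box) requested more memory or cores per executor than any registered worker can offer, or that the workers cannot open connections back to the driver host. A sketch of conservative settings to try (all values and the host placeholder are assumptions):

```
# spark-defaults.conf sketch -- keep the job's demands below what the
# DSE workers advertise in the cluster UI (values are illustrative)
spark.executor.memory   1g
spark.cores.max         2
# workers must be able to reach the driver; with a firewalled Windows
# box this is a frequent culprit:
spark.driver.host       <windows-box-ip>
```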

Spark Datastax Java API Select statements

半腔热情 submitted on 2019-11-27 08:01:37

Question: I'm using a tutorial from this GitHub repository to run Spark on Cassandra in a Java Maven project: https://github.com/datastax/spark-cassandra-connector. I've figured out how to use direct CQL statements, as I have previously asked a question about that here: Querying Data in Cassandra via Spark in a Java Maven Project. However, now I'm trying to use the DataStax Java API out of fear that the code from my original question will not work with the DataStax versions of Spark and Cassandra. For some weird
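For reference, a minimal sketch of a select through the connector's Java API, assuming an existing JavaSparkContext sc, a keyspace test, and a table words with a word column (all assumptions, and the exact helper names vary across connector versions):

```java
// Sketch only -- needs a running cluster plus the
// spark-cassandra-connector japi classes on the classpath.
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapColumnTo;

JavaRDD<String> words = javaFunctions(sc)
        .cassandraTable("test", "words", mapColumnTo(String.class))
        .select("word");   // push the column projection down to Cassandra
```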

Use of Order by clause in cassandra

本秂侑毒 submitted on 2019-11-27 07:49:02

Question: When creating a table in Cassandra, we can give the clustering keys an ordering like below:

CREATE TABLE user (
    partitionkey int,
    id int,
    name varchar,
    age int,
    address text,
    insrt_ts timestamp,
    PRIMARY KEY (partitionkey, name, insrt_ts, id)
) WITH CLUSTERING ORDER BY (name ASC, insrt_ts DESC, id ASC);

When we insert data into that table, records are sorted based on the clustering keys, as per the Cassandra documentation. When I retrieve records with CQL1 and CQL2, I get them in the same sorted order. CQL1:
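To illustrate the rule at play (standard CQL semantics: a query-time ORDER BY may only keep the defined clustering order or reverse all of it):

```
-- Defined order, no ORDER BY needed: name ASC, insrt_ts DESC, id ASC
SELECT * FROM user WHERE partitionkey = 1;

-- The full reversal is the only other permitted order:
SELECT * FROM user WHERE partitionkey = 1
  ORDER BY name DESC, insrt_ts ASC, id DESC;

-- A partial mix, e.g. ORDER BY name ASC, insrt_ts ASC, is rejected.
```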

Is there a reason not to use SparkContext.getOrCreate when writing a spark job?

末鹿安然 submitted on 2019-11-27 05:40:14

I'm writing Spark jobs that talk to Cassandra in DataStax. Sometimes, when working through a sequence of steps in a Spark job, it is easier to just get a new RDD rather than join to the old one. You can do this by calling the SparkContext getOrCreate method. Now, sometimes there are concerns inside a Spark job that referring to the SparkContext will capture a large, non-serializable object (the SparkContext itself) and try to distribute it over the network. In this case, you're registering a singleton for that JVM, so it gets around the problem of serialization. One day my tech lead
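A minimal sketch of the pattern under discussion (a context is assumed to be already running in the JVM; SparkContext.getOrCreate and JavaSparkContext.fromSparkContext are standard Spark API, the surrounding helper shape is an assumption):

```java
// Sketch: a helper fetches the per-JVM singleton instead of taking a
// SparkContext parameter that a closure could accidentally capture.
SparkContext sc = SparkContext.getOrCreate();   // existing context for this JVM
JavaSparkContext jsc = JavaSparkContext.fromSparkContext(sc);
// build the fresh RDD here rather than passing the context around
```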

Cassandra - Is there a way to limit number of async queries?

隐身守侯 submitted on 2019-11-27 05:12:41

I would like to know if there is a way to limit the number of queries executed simultaneously by the Cassandra Java driver. Currently, I execute a lot of queries as follows:

...
PreparedStatement stmt = session.prepare("SELECT * FROM users WHERE id = ?");
BoundStatement boundStatement = new BoundStatement(stmt);

List<ResultSetFuture> futures = Lists.newArrayListWithExpectedSize(list.length);
for (String id : list) {
    futures.add(session.executeAsync(boundStatement.bind(id)));
}
for (ListenableFuture<ResultSet> future : futures) {
    ResultSet rs = future.get();
    ... // do some stuff
}

Unfortunately
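One common answer is a Semaphore sized to the desired in-flight cap. The sketch below demonstrates the bounding itself without a cluster: a CompletableFuture stands in for the ResultSetFuture returned by session.executeAsync(...); with the real driver you would acquire() before executeAsync and release() in a callback registered on the returned future. The class name and the cap of 4 are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottledQueries {
    static final int MAX_IN_FLIGHT = 4;                       // illustrative cap
    static final AtomicInteger inFlight = new AtomicInteger();
    static final AtomicInteger maxObserved = new AtomicInteger();

    public static void main(String[] args) throws Exception {
        Semaphore permits = new Semaphore(MAX_IN_FLIGHT);
        ExecutorService pool = Executors.newFixedThreadPool(16);
        List<CompletableFuture<Void>> futures = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            permits.acquire();          // blocks once MAX_IN_FLIGHT "queries" are pending
            futures.add(CompletableFuture.runAsync(() -> {
                int now = inFlight.incrementAndGet();
                maxObserved.accumulateAndGet(now, Math::max);   // record peak concurrency
                try { Thread.sleep(5); } catch (InterruptedException ignored) { }
                inFlight.decrementAndGet();
            }, pool).whenComplete((v, t) -> permits.release())); // free a slot on completion
        }
        for (CompletableFuture<Void> f : futures) f.join();
        pool.shutdown();
        System.out.println("max concurrent = " + maxObserved.get());
    }
}
```

The same acquire/release pairing works unchanged with ResultSetFuture by adding the release as a listener via Futures.addCallback.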