datastax

DSE OpsCenter best practice checks fail when Cassandra PasswordAuthenticator is used

自作多情 submitted on 2019-12-06 05:31:12
The following best practice checks fail when Cassandra's PasswordAuthenticator is enabled:

- Search nodes enabled with bad autocommit
- Search nodes enabled with query result cache
- Search nodes with bad filter cache

My values are in compliance with the recommended values, and I have confirmed that the checks do pass when I disable authentication in Cassandra. What's odd is that there are 6 checks under the "Solr Advisor" category of the Best Practice Service, and only these 3 fail when authentication is enabled. Is this a known bug in OpsCenter? I'm using v5.0.1 but I've seen this

cqlsh with client-to-node SSL encryption

南笙酒味 submitted on 2019-12-06 05:23:39
I am trying to enable client-to-node SSL encryption on my DSE server. My cqlshrc file looks like this:

    [connection]
    hostname = 127.0.0.1
    port = 9160
    factory = cqlshlib.ssl.ssl_transport_factory

    [ssl]
    certfile = /path/to/dse_node0.cer
    validate = true ;; Optional, true by default.

    [certfiles] ;; Optional section, overrides the default certfile in the [ssl] section.
    1.2.3.4 = /path/to/dse_node0.cer

When I try to log in to the cqlsh shell, I get the error below:

    Connection error: Could not connect to 127.0.0.1:9160

There are several possible causes; I hope one of these solutions is helpful. 1)
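(Not part of the original post: depending on the cqlsh version, the client may speak the native protocol on port 9042 rather than the legacy Thrift port 9160, and SSL is switched on with the --ssl flag. A minimal invocation to try, assuming the cqlshrc above is in place:)

    cqlsh 127.0.0.1 9042 --ssl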

Java Cassandra @Frozen annotation for address map<text, frozen<list<frozen<address>>>>

筅森魡賤 submitted on 2019-12-06 04:11:16
I am trying to insert data into Cassandra (2.1.9). My Java object has a map of a list of UDTs. On running the code I get an error regarding the @Frozen annotation. I am using the DataStax (2.1.9) library: http://docs.datastax.com/en/drivers/java/2.1/index.html?com/datastax/driver/mapping/annotations/FrozenValue.html

    CREATE TABLE user (
        name text,
        addresses map<text, frozen<list<frozen<address>>>>,
        PRIMARY KEY (name)
    );

My Java class:

    public class User {
        private String name;

        @FrozenValue
        private Map<String, List<AddressUDT>> addresses;
    }

But I am getting the following error: java.lang.IllegalArgumentException: Error while
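(A sketch, not from the original post: with the 2.1 mapper, the nested frozen structure can be spelled out explicitly with @Frozen, which takes the full CQL type, instead of @FrozenValue. The keyspace name "ks" and the AddressUDT class are assumptions:)

    import java.util.List;
    import java.util.Map;

    import com.datastax.driver.mapping.annotations.Frozen;
    import com.datastax.driver.mapping.annotations.PartitionKey;
    import com.datastax.driver.mapping.annotations.Table;

    @Table(keyspace = "ks", name = "user")  // "ks" is a placeholder keyspace
    public class User {
        @PartitionKey
        private String name;

        // Spell out exactly which levels of the collection are frozen,
        // matching the schema definition above.
        @Frozen("map<text, frozen<list<frozen<address>>>>")
        private Map<String, List<AddressUDT>> addresses;

        // getters and setters omitted for brevity
    }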

Cassandra: Adding new column to the table

邮差的信 submitted on 2019-12-06 02:20:30
Hi, I just added a new column business_sys to my table my_table:

    ALTER TABLE my_table ALTER business_sys TYPE set<text>;

But then I dropped this column because I wanted to change its type:

    ALTER TABLE my_table DROP business_sys;

When I tried to add the same column name back with a different type, I got the error message "Cannot add a collection with the name business_sys because a collection with the same name and a different type has already been used in the past". This is the command I executed to add the new column with a different type: ALTER TABLE my_table ADD business_sys

Cassandra Timeouts with No CPU Usage

回眸只為那壹抹淺笑 submitted on 2019-12-06 00:47:33
I am getting Cassandra timeouts using Phantom-DSL with the DataStax Cassandra driver. However, Cassandra does not seem to be overloaded. Below is the exception I get:

    com.datastax.driver.core.exceptions.OperationTimedOutException: [node-0.cassandra.dev/10.0.1.137:9042] Timed out waiting for server response
        at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onTimeout(RequestHandler.java:766)
        at com.datastax.driver.core.Connection$ResponseHandler$1.run(Connection.java:1267)
        at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:588)
        at io.netty.util
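(Not from the original post: OperationTimedOutException is raised by the driver-side read timeout, not by the server. A minimal sketch of raising that timeout when building the Cluster, assuming driver 2.x/3.x; the contact point is taken from the stack trace above:)

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.SocketOptions;

    public class ClusterFactory {
        static Cluster build() {
            // The per-request client read timeout defaults to 12 s; raising it
            // keeps slow but healthy requests from being reported as timeouts.
            return Cluster.builder()
                    .addContactPoint("node-0.cassandra.dev")
                    .withSocketOptions(new SocketOptions().setReadTimeoutMillis(30000))
                    .build();
        }
    }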

RDD not serializable with the Cassandra/Spark connector Java API

守給你的承諾、 submitted on 2019-12-05 21:54:55
So I previously asked some questions on how to query Cassandra using Spark in a Java Maven project here: Querying Data in Cassandra via Spark in a Java Maven Project. My question was answered and it worked; however, I've now run into an issue (possibly an issue). I'm trying to use the DataStax Java API. Here is my code:

    package com.angel.testspark.test2;

    import org.apache.commons.lang3.StringUtils;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function;
    import java.io
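(Not from the original post: with the Spark Java API, "not serializable" errors typically mean an anonymous Function captured its enclosing, non-serializable class. A minimal sketch of the usual fix, a static nested function class; the CassandraRow type and the "name" column are assumptions:)

    import org.apache.spark.api.java.function.Function;
    import com.datastax.spark.connector.japi.CassandraRow;

    public class RowFunctions {
        // A static nested class keeps no hidden reference to an outer
        // instance, so Spark can serialize it and ship it to executors.
        public static class ExtractName implements Function<CassandraRow, String> {
            @Override
            public String call(CassandraRow row) throws Exception {
                return row.getString("name");
            }
        }
    }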

Is there a good way to check whether a DataStax Session.executeAsync() has thrown an exception?

点点圈 submitted on 2019-12-05 21:15:30
I'm trying to speed up our code by calling session.executeAsync() instead of session.execute() for DB writes. We have use cases where the DB connection might be down; currently execute() throws an exception when the connection is lost (no hosts reachable in the cluster). We can catch these exceptions and retry, or save the data somewhere else, etc. With executeAsync(), it doesn't look like there's any way to fulfill this use case - the returned ResultSetFuture object needs to be
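(A sketch, not from the original post, assuming driver 2.x and a Guava version that provides MoreExecutors.directExecutor(): failures of an asynchronous write surface through the returned future rather than being thrown from executeAsync(), so a callback can catch and handle them:)

    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.ResultSetFuture;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.Statement;
    import com.google.common.util.concurrent.FutureCallback;
    import com.google.common.util.concurrent.Futures;
    import com.google.common.util.concurrent.MoreExecutors;

    public class AsyncWrite {
        static void write(Session session, Statement statement) {
            ResultSetFuture future = session.executeAsync(statement);
            Futures.addCallback(future, new FutureCallback<ResultSet>() {
                @Override
                public void onSuccess(ResultSet rs) {
                    // write acknowledged by the cluster
                }

                @Override
                public void onFailure(Throwable t) {
                    // NoHostAvailableException and friends arrive here instead
                    // of being thrown by executeAsync(): retry, queue, or spill
                    // the data elsewhere as the use case requires
                }
            }, MoreExecutors.directExecutor());
        }
    }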

Cassandra ALLOW FILTERING

删除回忆录丶 submitted on 2019-12-05 09:00:25
I have a table as below:

    CREATE TABLE test (
        day int,
        id varchar,
        start int,
        action varchar,
        PRIMARY KEY ((day), start, id)
    );

I want to run this query:

    SELECT * FROM test WHERE day = 1 AND start > 1475485412 AND start < 1485785654 AND action = 'accept' ALLOW FILTERING;

Is this ALLOW FILTERING efficient? I am expecting that Cassandra will filter in this order:

1. by the partitioning column (day);
2. by the range column (start) on 1's result;
3. by the action column on 2's result.

So ALLOW FILTERING would not be a bad choice for this query. In case of multiple filtering parameters on the WHERE clause and
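(Not from the original post: the same statement expressed with the DataStax Java driver's QueryBuilder, which is used elsewhere on this page; table and column names are taken from the schema above:)

    import com.datastax.driver.core.Statement;
    import com.datastax.driver.core.querybuilder.QueryBuilder;

    import static com.datastax.driver.core.querybuilder.QueryBuilder.eq;
    import static com.datastax.driver.core.querybuilder.QueryBuilder.gt;
    import static com.datastax.driver.core.querybuilder.QueryBuilder.lt;

    public class FilterQuery {
        static Statement build() {
            // allowFiltering() appends the ALLOW FILTERING directive; the
            // day restriction still confines the scan to one partition.
            return QueryBuilder.select().from("test").allowFiltering()
                    .where(eq("day", 1))
                    .and(gt("start", 1475485412))
                    .and(lt("start", 1485785654))
                    .and(eq("action", "accept"));
        }
    }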

“no viable alternative at input” error when querying Cassandra table

别说谁变了你拦得住时间么 submitted on 2019-12-05 07:58:21
I have a table in Cassandra like this:

    CREATE TABLE vroc.sensor_data (
        dpnode text,
        year int,
        month int,
        day int,
        data_timestamp bigint,
        data_sensor text,
        dsnode text,
        data_quality double,
        data_value blob,
        PRIMARY KEY ((dpnode, year, month, day), data_timestamp, data_sensor, dsnode)
    ) WITH read_repair_chance = 0.0
        AND dclocal_read_repair_chance = 0.1
        AND gc_grace_seconds = 864000
        AND bloom_filter_fp_chance = 0.01
        AND caching = { 'keys' : 'ALL', 'rows_per_partition' : 'NONE' }
        AND comment = ''
        AND compaction = { 'class' : 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max

How will I know whether the record was a duplicate or was inserted successfully?

…衆ロ難τιáo~ submitted on 2019-12-05 06:38:40
Here is my CQL table:

    CREATE TABLE user_login (
        userName varchar PRIMARY KEY,
        userId uuid,
        fullName varchar,
        password text,
        blocked boolean
    );

I have this DataStax Java driver code:

    PreparedStatement prepareStmt = instances.getCqlSession().prepare(
        "INSERT INTO " + AppConstants.KEYSPACE
            + ".user_info(userId, userName, fullName, bizzCateg, userType, blocked) VALUES(?, ?, ?, ?, ?, ?);");

    batch.add(prepareStmt.bind(userId, userData.getEmail(), userData.getName(),
        userData.getBizzCategory(), userData.getUserType(), false));

    PreparedStatement pstmtUserLogin = instances.getCqlSession().prepare("INSERT
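(A sketch, not from the original post, assuming driver 2.1+ and a placeholder keyspace "ks": a lightweight transaction (INSERT ... IF NOT EXISTS) reports through wasApplied() whether the row already existed, which is one way to distinguish a duplicate from a successful insert:)

    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;

    public class UserLoginDao {
        // Returns true if the row was inserted, false if userName already
        // existed, i.e. the insert would have been a duplicate.
        static boolean insertIfAbsent(Session session, String userName, String fullName) {
            ResultSet rs = session.execute(
                    "INSERT INTO ks.user_login (userName, fullName) VALUES (?, ?) IF NOT EXISTS",
                    userName, fullName);
            return rs.wasApplied();
        }
    }

Note that a conditional insert runs a Paxos round under the hood, so it is markedly slower than a plain INSERT and is best reserved for writes where duplicate detection actually matters.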