datastax

major compaction on LCS

柔情痞子 submitted on 2019-12-08 12:46:21
Question: I have a table with LCS in a Cassandra cluster. I am observing too many tombstones, so I decided to reduce gc_grace_seconds and perform a major compaction. I ran `nodetool compact keyspace table`, but the compaction job finished within a second; it seems the major compaction did not run. Can you please help me understand? Answer 1: If you're actually using the antique Cassandra 2.0, as the label on your question says, then indeed it didn't support major compaction on LCS, and "nodetool compact" only
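Whatever the compaction strategy, tombstones only become purgeable after gc_grace_seconds have elapsed since the deletion, which is why lowering that setting matters before compacting. A minimal sketch of that arithmetic (the dates and grace values are illustrative):

```python
from datetime import datetime, timedelta

def tombstone_droppable(deletion_time: datetime, gc_grace_seconds: int,
                        now: datetime) -> bool:
    """A tombstone may be purged by compaction only once
    gc_grace_seconds have elapsed since the deletion."""
    return now >= deletion_time + timedelta(seconds=gc_grace_seconds)

deleted_at = datetime(2019, 12, 1)
default_grace = 10 * 24 * 3600   # Cassandra's default gc_grace_seconds: 10 days
reduced_grace = 3600             # e.g. after ALTER TABLE ... WITH gc_grace_seconds = 3600

print(tombstone_droppable(deleted_at, default_grace, datetime(2019, 12, 5)))  # False
print(tombstone_droppable(deleted_at, reduced_grace, datetime(2019, 12, 5)))  # True
```

Note that lowering gc_grace_seconds trades tombstone retention against the window for repairing a down node, so it should be reduced with care.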

DSE 4.6 to DSE 4.7 Failed to find Spark assembly

自闭症网瘾萝莉.ら submitted on 2019-12-08 10:08:42
Question: I have a problem with job-server-0.5.0 after upgrading DSE 4.6 to 4.7. If I run server_start.sh I get the error "Failed to find Spark assembly in /usr/share/dse/spark/assembly/target/scala-2.10 You need to build Spark before running this program." I found that this code in /usr/share/dse/spark/bin/compute-classpath.sh raises the error: for f in ${assembly_folder}/spark-assembly*hadoop*.jar; do if [[ ! -e "$f" ]]; then echo "Failed to find Spark assembly in $assembly_folder" 1>&2 echo "You need to build
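The shell snippet above fails simply because the glob `spark-assembly*hadoop*.jar` matches no file in the assembly folder, so the literal (non-existent) pattern reaches the `[[ ! -e "$f" ]]` test. The same check can be sketched in Python to show both outcomes (the jar name below is made up for the demonstration):

```python
import glob
import os
import tempfile

def find_spark_assembly(assembly_folder: str) -> list:
    # Mirrors the check in compute-classpath.sh: if the glob matches
    # nothing, the script prints "Failed to find Spark assembly" and exits.
    jars = glob.glob(os.path.join(assembly_folder, "spark-assembly*hadoop*.jar"))
    if not jars:
        raise FileNotFoundError(f"Failed to find Spark assembly in {assembly_folder}")
    return jars

with tempfile.TemporaryDirectory() as d:
    try:
        find_spark_assembly(d)           # empty dir -> the asker's error
    except FileNotFoundError as e:
        print(e)
    # Once a matching jar exists, the check passes:
    open(os.path.join(d, "spark-assembly-1.2.1-hadoop2.4.0.jar"), "w").close()
    print(len(find_spark_assembly(d)))   # 1
```

In the DSE 4.7 layout the assembly jar moved, so scripts hard-coded to the old `assembly/target/scala-2.10` path hit exactly this branch.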

Cassandra timing out when queried for a key that has over 10,000 rows, even with a timeout of 10 sec

和自甴很熟 submitted on 2019-12-08 09:28:50
Question: I'm using DataStax Community v 2.1.2-1 (AMI v 2.5) with the preinstalled default settings, and I have this table: CREATE TABLE notificationstore.note ( user_id text, real_time timestamp, insert_time timeuuid, read boolean, PRIMARY KEY (user_id, real_time, insert_time)) WITH CLUSTERING ORDER BY (real_time DESC, insert_time ASC) AND bloom_filter_fp_chance = 0.01 AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}' AND default_time_to_live = 20160 The other configurations are: I have 2
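For context, the server-side read timeouts involved here live in cassandra.yaml; the values below are the stock defaults (shown for orientation, not as a recommendation), and raising them is usually only a band-aid for a partition that wide:

```
# cassandra.yaml (Cassandra 2.1 option names; values are the defaults)
read_request_timeout_in_ms: 5000     # single-partition reads
range_request_timeout_in_ms: 10000   # range scans
```

A driver-side timeout of 10 s does not override these server limits, and for partitions with 10,000+ rows the usual fix is to page through the partition with a modest fetch size rather than to raise timeouts.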

Normal Query on Cassandra using DataStax Enterprise works, but not solr_query

依然范特西╮ submitted on 2019-12-08 05:20:59
Question: I am running into a strange issue while using the solr_query handler to query Cassandra from my terminal. Normal queries on my table work fine, but when I use solr_query I get the following error: Unable to complete request: one or more nodes were unavailable. Other people who have hit this problem seem unable to run any queries on their data at all, solr_query or not; my problem occurs only with that handler. Can
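Unlike a normal CQL read, a solr_query predicate is routed to DSE Search, so it can only be served by a node where Search is enabled and the Solr core for the table has been created and indexed; "one or more nodes were unavailable" typically means no such search replica could take the query. A minimal hedged example, with `ks.tbl` standing in for the asker's keyspace and table:

```sql
-- Works only against DSE Search-enabled nodes with a Solr core built
-- for this table; a plain Cassandra node cannot serve this predicate.
SELECT * FROM ks.tbl WHERE solr_query = '{"q": "*:*"}' LIMIT 10;
```

Checking that the Solr core exists for the table (e.g. via the Solr admin UI) and that the node cqlsh is connected to actually runs DSE Search is usually the first diagnostic step.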

DSE - Cassandra : Commit Log Disk Impact on Performances

ぃ、小莉子 submitted on 2019-12-08 04:50:34
Question: I'm running a DSE 4.6.5 cluster (Cassandra 2.0.14.352). Following DataStax's guidelines, on every machine I separated the data directory from the commit log/saved caches directories: data is on blazing-fast drives; the commit log and saved caches are on the system drives (2 HDD in RAID1). Monitoring the disks with OpsCenter while performing intensive writes, I see no issue with the former, but the queue size on the latter (commit log) averages around 300 to 400, with spikes up to 700 requests. Of course the latency is also fairly high on these drives ... Is this affecting the performance of my
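For reference, commit-log behavior in this Cassandra version is governed by a handful of cassandra.yaml settings (names as in Cassandra 2.0; the values shown are the defaults, not a tuning recommendation):

```
# cassandra.yaml -- commit log settings relevant to write-path disk pressure
commitlog_sync: periodic            # fsync every sync period rather than per write
commitlog_sync_period_in_ms: 10000  # how often the periodic fsync runs
commitlog_segment_size_in_mb: 32    # size of each commit log segment file
```

Since every write is acknowledged only after it reaches the commit log, a deep queue on that device can indeed throttle write latency; giving the commit log its own dedicated spindle (separate from the OS and saved caches) is often a bigger win than RAID1 on shared system drives.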

Opscenter backup to S3 location fails

故事扮演 submitted on 2019-12-08 03:30:06
Question: Using OpsCenter 5.1.1, DataStax Enterprise 4.5.1, and a 3-node cluster in AWS, I set up a scheduled backup to the local server and also to a bucket in S3. The On Server backup finished successfully on all 3 nodes. The S3 backup runs slowly and fails on all 3 nodes. Some keyspaces are backed up and files are created in the S3 bucket, but it appears that not all tables are backed up. Looking at /var/log/opscenter/opscenterd.log, I see an OOM error. Why should there be an out-of-memory error when writing to S3
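The S3 backup path streams snapshot files through the DataStax agent's JVM, which runs with a small default heap, so large backup sets can exhaust it. One commonly suggested mitigation is raising the agent heap; the sketch below is hypothetical in its details (the file path and default heap size vary by install type and version):

```
# /var/lib/datastax-agent/conf/datastax-agent-env.sh  (path assumed; package installs)
# Raise the agent JVM heap before restarting the datastax-agent service:
JVM_OPTS="$JVM_OPTS -Xmx512M"
```

If the OOM appears in opscenterd.log itself rather than the agent log, the same kind of heap increase would apply to the opscenterd JVM instead; checking which process actually ran out of memory is the first step.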

Cqlsh with client to node SSL encryption

拥有回忆 submitted on 2019-12-08 03:18:55
Question: I'm trying to enable client-to-node SSL encryption on my DSE server. My cqlshrc file looks like this: [connection] hostname = 127.0.0.1 port = 9160 factory = cqlshlib.ssl.ssl_transport_factory [ssl] certfile = /path/to/dse_node0.cer validate = true ;; Optional, true by default. [certfiles] ;; Optional section, overrides the default certfile in the [ssl] section. 1.2.3.4 = /path/to/dse_node0.cer When I try to log in to the cqlsh shell, I get the error below: Connection error: Could not
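Laid out as a file, the asker's configuration reads as below (paths and host are the asker's own). Two points worth checking against it: the certfile must be a PEM-encoded certificate that cqlsh can validate (a `.cer` exported in binary DER form will fail), and the server side must have `client_encryption_options` enabled in cassandra.yaml or the TLS handshake is refused:

```
;; ~/.cassandra/cqlshrc -- as given in the question
[connection]
hostname = 127.0.0.1
port = 9160
factory = cqlshlib.ssl.ssl_transport_factory

[ssl]
certfile = /path/to/dse_node0.cer
validate = true
;; Optional, true by default.

[certfiles]
;; Optional section, overrides the default certfile in the [ssl] section.
1.2.3.4 = /path/to/dse_node0.cer
```

Setting `validate = false` temporarily can help distinguish a certificate-validation problem from a transport/handshake problem.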

Cassandra update fails

流过昼夜 submitted on 2019-12-08 02:22:16
Question (solved): I was testing updates on 3 nodes, and the clock on one of those nodes was 1 second behind, so when updating a row the write time was always behind the existing timestamp and Cassandra would not update the rows. I synced the time on all nodes and the issue was fixed. Edit: I double-checked the result; all insertions succeeded, while some updates failed with no error/exception messages. I have a Cassandra cluster (Cassandra 2.0.13) of 5 nodes, and I am using the Python (2.6.6) Cassandra driver (2.6.0c2) for inserting
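The silent failure mode described above follows from Cassandra's last-write-wins conflict resolution: the cell with the highest write timestamp survives, and a write stamped earlier than the existing cell is dropped without any error. A minimal sketch of that rule, with made-up timestamps standing in for the 1-second clock skew:

```python
# Last-write-wins sketch: Cassandra keeps the cell with the highest
# write timestamp, so an update stamped earlier than the existing cell
# is silently ignored -- exactly what a 1-second clock skew causes.

def apply_write(cell, value, write_ts):
    current_value, current_ts = cell
    if write_ts > current_ts:            # newer timestamp wins
        return (value, write_ts)
    return (current_value, current_ts)   # stale write ignored, no error raised

cell = ("inserted", 1000)                    # insert from the on-time node
cell = apply_write(cell, "updated", 999)     # update from the node 1 s behind
print(cell[0])  # inserted  -- the update "failed" silently
cell = apply_write(cell, "updated", 1001)    # after clocks are synced
print(cell[0])  # updated
```

This is why keeping node clocks in sync with NTP is a standing operational requirement for Cassandra clusters.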

How will I know whether the record was a duplicate or was inserted successfully?

爱⌒轻易说出口 submitted on 2019-12-07 02:33:56
Question: Here is my CQL table: CREATE TABLE user_login ( userName varchar PRIMARY KEY, userId uuid, fullName varchar, password text, blocked boolean ); I have this DataStax Java driver code: PreparedStatement prepareStmt = instances.getCqlSession().prepare("INSERT INTO " + AppConstants.KEYSPACE + ".user_info(userId, userName, fullName, bizzCateg, userType, blocked) VALUES(?, ?, ?, ?, ?, ?);"); batch.add(prepareStmt.bind(userId, userData.getEmail(), userData.getName(), userData.getBizzCategory(), userData
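A plain CQL INSERT is an upsert: it reports success even when the primary key already exists, silently overwriting the old row, so the batch above cannot distinguish duplicates. Detecting them requires a lightweight transaction (`INSERT ... IF NOT EXISTS`) and checking the applied flag the server returns (`wasApplied()` in the Java driver). A minimal sketch of the two semantics, simulated with a dict keyed by primary key:

```python
# Upsert vs. IF NOT EXISTS, simulated with a dict keyed by primary key.

table = {}

def insert(pk, row):
    table[pk] = row          # plain INSERT: always "succeeds", silently overwrites
    return True

def insert_if_not_exists(pk, row):
    if pk in table:          # LWT: server returns [applied] = false
        return False
    table[pk] = row
    return True

print(insert("alice", {"fullName": "Alice"}))              # True
print(insert("alice", {"fullName": "Alice 2"}))            # True  -- duplicate not detected
print(insert_if_not_exists("alice", {"fullName": "A3"}))   # False -- duplicate detected
```

Note that lightweight transactions carry a Paxos round-trip cost and have restrictions when combined with batches, so they are worth using only where duplicate detection genuinely matters.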