datastax-enterprise

Should I call session.close() and cluster.close() after each web API call?

狂风中的少年 submitted on 2019-12-04 19:09:13
Question: I have a web service API that allows clients to insert into Cassandra. The DataStax documentation for Session (http://www.datastax.com/drivers/java/2.0/com/datastax/driver/core/Session.html) states that the session and cluster objects should be kept for the lifetime of the application. Should I call session.close() and cluster.close() after each web API call, or keep the session open until I shut down the web server?

Answer 1: I would advise against creating a Session each time you…
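A minimal sketch of the long-lived approach the docs recommend, using the Java driver 2.x API; the contact point and keyspace name here are hypothetical:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    // Build once at application startup and reuse for every request.
    public class CassandraConnector {
        private static final Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")   // hypothetical contact point
                .build();
        private static final Session session = cluster.connect("my_keyspace"); // hypothetical keyspace

        public static Session getSession() {
            return session;
        }

        // Call only on web server shutdown, e.g. from a ServletContextListener.
        public static void shutdown() {
            session.close();
            cluster.close();
        }
    }

Each web API call then uses CassandraConnector.getSession() rather than opening and closing its own connection, which avoids the connection-setup cost on every request.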

Cannot record QUEUE latency of n minutes - DSE

心已入冬 submitted on 2019-12-04 18:45:43
One of the nodes in our 3-node cluster is down, and the log file shows the following messages:

    INFO [keyspace.core Index WorkPool work thread-2] 2016-09-14 14:05:32,891 AbstractMetrics.java:114 - Cannot record QUEUE latency of 11 minutes because higher than 10 minutes.
    INFO [keyspace.core Index WorkPool work thread-2] 2016-09-14 14:05:33,233 AbstractMetrics.java:114 - Cannot record QUEUE latency of 10 minutes because higher than 10 minutes.
    WARN [keyspace.core Index WorkPool work thread-2] 2016-09-14 14:05:33,398 Worker.java:99 - Interrupt/timeout detected. java.util.concurrent…
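These messages indicate the DSE Search indexing work queue is badly backed up. As a first diagnostic step (a sketch, not a definitive procedure), the standard nodetool commands can show where the backlog is:

    nodetool status     # confirm which node is down (DN state)
    nodetool tpstats    # look for large "pending" or "blocked" counts in the thread pools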

Solr docValues usage

拥有回忆 submitted on 2019-12-04 18:26:22
I am planning to try Solr's docValues to hopefully improve facet and sort performance. I have some questions about this feature:

1. If I enable docValues, will Solr create a forward index (for faceting) in addition to a separate inverted index (for searching)? Or will Solr create a forward index ONLY (thus gaining facet performance in exchange for search performance)?
2. If I want to both facet and search on a single field, what is the best practice? Should I set indexed="true" and docValues="true" on the same field, or should I create a copy field where the source…
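For the combined facet-and-search case, a common pattern (a sketch; the field name is hypothetical) is to enable both flags on the same field in schema.xml:

    <!-- indexed="true" builds the inverted index for searching;
         docValues="true" builds the column-oriented structure for faceting/sorting.
         docValues requires a non-tokenized type such as string or a Trie* numeric type. -->
    <field name="category" type="string" indexed="true" stored="true" docValues="true"/>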

TTL vs default_time_to_live which one is better and why?

ぃ、小莉子 submitted on 2019-12-04 15:08:09
The requirement is simple: we have to create a table that holds only 24 hours of data. We have two options:

1. Define a TTL with each insert.
2. Set the table property default_time_to_live to 24 hours.

I have a general idea of both, but internally, which one deals better with tombstones? Or will both generate the same number of tombstones? Which one is better and why? Any reference link would be appreciated.

If a table has default_time_to_live on it, then rows that exceed this time limit are deleted immediately without tombstones being written. This will not affect rows / columns that…
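The two options in CQL, as a sketch (table and column names are hypothetical):

    -- Option 2: table-level default, applied when an insert specifies no TTL
    CREATE TABLE events (
        id uuid PRIMARY KEY,
        payload text
    ) WITH default_time_to_live = 86400;

    -- Option 1: per-insert TTL, which overrides the table default
    INSERT INTO events (id, payload) VALUES (uuid(), 'some data') USING TTL 86400;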

When to use Cassandra vs. Solr in DSE?

时光总嘲笑我的痴心妄想 submitted on 2019-12-04 09:43:25
Question: I'm using DSE for Cassandra/Solr integration, so data is stored in Cassandra and indexed in Solr. It's natural to use Cassandra for CRUD operations and Solr for full-text search respectively, and DSE really simplifies data synchronization between Cassandra and Solr. When it comes to querying, however, there are actually two ways to go: a Cassandra secondary/manually configured index vs. Solr. I want to know when to use which method, and what the performance difference is in…
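For context, the two query paths look like this in CQL (a sketch; keyspace, table, and column names are hypothetical, and the solr_query path assumes a Solr core exists for the table):

    -- Cassandra path: secondary index on a column
    SELECT * FROM ks.users WHERE email = 'a@example.com';

    -- DSE Search path: routed through the Solr index
    SELECT * FROM ks.users WHERE solr_query = 'email:a@example.com';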

Can I force cleanup of old tombstones?

天涯浪子 submitted on 2019-12-04 00:09:04
Question: I have recently lowered gc_grace_seconds for a CQL table. I am running LeveledCompactionStrategy. Is it possible to force purging of old tombstones from my SSTables?

Answer 1: TL;DR: your tombstones will disappear on their own through compaction, but make sure you are running repair or they may come back from the dead. http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_about_deletes_c.html Adding some more details: tombstones are not immediately available for deletion…
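One knob that can encourage tombstone-driven compactions under LeveledCompactionStrategy is the unchecked_tombstone_compaction subproperty; a sketch below, where the threshold value is an assumption to tune, not a recommendation:

    ALTER TABLE ks.my_table WITH compaction = {
        'class': 'LeveledCompactionStrategy',
        'unchecked_tombstone_compaction': 'true',  -- allow single-SSTable tombstone compactions
        'tombstone_threshold': '0.2'               -- assumed value: compact when 20% of an SSTable is tombstones
    };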

How to submit a job via REST API?

我怕爱的太早我们不能终老 submitted on 2019-12-03 21:14:32
I'm using DataStax Enterprise 4.8.3. I'm trying to implement a Quartz-based application to remotely submit Spark jobs. During my research I stumbled upon the following links:

1. Apache Spark Hidden REST API
2. Spark feature - Provide a stable application submission gateway in standalone cluster mode

To test the theory, I tried executing the snippet below directly on the shell of the master node (IP: "spark-master-ip") of my 2-node cluster, as described in link #1 above:

    curl -X POST http://spark-master-ip:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8"…
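For reference, a full submission request in the format described by the hidden-REST-API write-up linked above looks roughly like the sketch below; the jar path, main class, app name, and Spark version are placeholders:

    curl -X POST http://spark-master-ip:6066/v1/submissions/create \
      --header "Content-Type:application/json;charset=UTF-8" \
      --data '{
        "action": "CreateSubmissionRequest",
        "appArgs": ["myAppArgument1"],
        "appResource": "file:/path/to/my-job.jar",
        "clientSparkVersion": "1.6.0",
        "environmentVariables": {"SPARK_ENV_LOADED": "1"},
        "mainClass": "com.example.MyJob",
        "sparkProperties": {
          "spark.jars": "file:/path/to/my-job.jar",
          "spark.app.name": "MyJob",
          "spark.master": "spark://spark-master-ip:7077"
        }
      }'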

Cassandra CQL shell window disappears after installation on Windows

泪湿孤枕 submitted on 2019-12-03 17:41:38
Question: The Cassandra CQL shell window disappears after installation on Windows. It was installed using the MSI installer available on Planet Cassandra. Why does this happen? Please help me. Thanks in advance.

Answer 1: I had the same issue with DataStax 3.9. This is how I sorted it:

Step 1: Open the file DataStax-DDC\apache-cassandra\conf\cassandra.yaml
Step 2: Uncomment cdc_raw_directory and set a new value (for Windows): cdc_raw_directory: "C:/Program Files/DataStax-DDC/data/cdc_raw"
Step 3: Go to…
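After the edit in Step 2, the relevant line in cassandra.yaml should end up looking like this (path taken from the answer; whether the directory must be created by hand is an assumption, so check that it exists):

    # DataStax-DDC\apache-cassandra\conf\cassandra.yaml
    cdc_raw_directory: "C:/Program Files/DataStax-DDC/data/cdc_raw"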

Cassandra compaction tasks stuck

心已入冬 submitted on 2019-12-03 16:46:44
I'm running DataStax Enterprise in a cluster consisting of 3 nodes, all on the same hardware: 2-core Intel Xeon 2.2 GHz, 7 GB RAM, 4 TB RAID-0. This should be enough for a cluster with a light load, storing less than 1 GB of data. Most of the time everything is fine, but it appears that the running tasks related to the Repair Service in OpsCenter sometimes get stuck; this causes instability in that node and an increase in load. However, if the node is restarted, the stuck tasks don't show up and the load returns to normal levels. Because of the…
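Before resorting to a restart, it can be worth inspecting and, if needed, aborting the stuck work from the command line; a sketch of standard commands, with no guarantee they clear every stuck Repair Service task:

    nodetool compactionstats    # list active and pending compaction tasks
    nodetool stop COMPACTION    # abort the currently running compaction, if any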

How to completely clear down, reset and restart a Cassandra cluster?

感情迁移 submitted on 2019-12-03 14:04:45
I have an old Cassandra cluster that needs to be brought back to life. I would like to clear out all the user and system data, all stored tokens, everything, and start from a clean slate. Is there a recommended way of doing this?

Here's the procedure I use for Apache Cassandra. First stop Cassandra on all the nodes, then on each node:

    rm -r <the commitlog_directory specified in cassandra.yaml>
    rm -r <the data_file_directories specified in cassandra.yaml>
    rm <the contents of the saved_caches_directory specified in cassandra.yaml>
    rm <old logfiles in /var/log/cassandra/>

Then restart the…
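Put together as a concrete sketch, assuming a package install with the common default paths (commitlog_directory: /var/lib/cassandra/commitlog, data_file_directories: /var/lib/cassandra/data, saved_caches_directory: /var/lib/cassandra/saved_caches); substitute whatever your cassandra.yaml actually specifies:

    # On each node, after stopping Cassandra (e.g. sudo service cassandra stop):
    sudo rm -r /var/lib/cassandra/commitlog/*
    sudo rm -r /var/lib/cassandra/data/*
    sudo rm -r /var/lib/cassandra/saved_caches/*
    sudo rm /var/log/cassandra/*.log

Removing the contents of data_file_directories wipes the system keyspaces as well, which is what clears the stored tokens and cluster state.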