datastax

How Cassandra handles a blocking execute statement in the DataStax Java driver

夙愿已清 submitted on 2019-12-01 08:28:02
The blocking execute method from com.datastax.driver.core.Session: public ResultSet execute(Statement statement). The comment on this method reads: "This method blocks until at least some result has been received from the database. However, for SELECT queries, it does not guarantee that the result has been received in full. But it does guarantee that some response has been received from the database, and in particular guarantees that if the request is invalid, an exception will be thrown by this method." The non-blocking execute method from com.datastax.driver.core.Session: public ResultSetFuture executeAsync
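The blocking/non-blocking contrast can be sketched with plain Python futures (an analogy only, not the driver's API; fake_query and the executor are illustrative stand-ins for a database round trip):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fake_query(statement):
    # Stand-in for a round trip to the database.
    time.sleep(0.05)
    return f"rows for: {statement}"

executor = ThreadPoolExecutor(max_workers=2)

# Blocking style, like Session.execute(): the call does not return
# until at least some response has arrived.
result = fake_query("SELECT * FROM t")

# Non-blocking style, like Session.executeAsync(): a future is
# returned immediately; .result() blocks only when you ask for it.
future = executor.submit(fake_query, "SELECT * FROM t")
other_work = "done while the query is in flight"
rows = future.result()  # block here, not at submit time

print(result)
print(rows)
```

The async style lets the caller overlap other work with the query; the trade-off is that errors surface only when the future is resolved.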

Cassandra - one big table vs many tables

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-01 08:22:44
Question: I'm currently trying out the Cassandra database, using DataStax DevCenter and the DataStax C# driver. My current model is quite simple and consists of only: ParameterId (int), which serves as the id of the table; Value (bigint); MeasureTime (timestamp). I will have exactly 1000 parameters (no more, no less), numbered 1 to 1000, will receive an entry for each parameter once per second, and this will run for years. My question is whether it is better practice to create a table as:
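Whichever table layout is chosen, it helps to estimate partition growth first; a back-of-the-envelope sketch (the day-bucket pattern is a common suggestion for this workload, not something stated in the question):

```python
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 samples per parameter per day
DAYS_PER_YEAR = 365

# One sample per parameter per second:
rows_per_param_per_year = SECONDS_PER_DAY * DAYS_PER_YEAR
print(rows_per_param_per_year)   # rows added per parameter per year

# With PRIMARY KEY (parameter_id, measure_time) the partition for one
# parameter grows without bound. Adding a time bucket to the partition
# key, e.g. PRIMARY KEY ((parameter_id, day), measure_time), caps it:
rows_per_partition = SECONDS_PER_DAY
print(rows_per_partition)        # rows per (parameter, day) partition
```

At roughly 31.5M rows per parameter per year, an unbounded partition per parameter becomes a problem long before "years" are up, which is usually the deciding factor rather than one table versus many.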

Unable to connect to Cassandra remotely using DataStax Python driver

∥☆過路亽.° submitted on 2019-12-01 07:57:03
Question: I'm having trouble connecting to Cassandra (running on an EC2 node) remotely (from my laptop). When I use the DataStax Python driver for Cassandra:

from cassandra.cluster import Cluster
cluster = Cluster(['10.X.X.X'], port=9042)
cluster.connect()

I get:

Traceback (most recent call last):
  File "/Users/user/virtualenvs/test/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3035, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-23-dc85f20fd4f5>
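Before blaming the driver, it is worth confirming that port 9042 is reachable from the laptop at all (EC2 security groups and Cassandra's rpc_address/listen_address settings are the usual culprits). A minimal reachability check, independent of the driver (the local listener below is only a demo):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Local demo: a listener we control is reachable; once it is gone,
# the same port reports closed.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
demo_port = server.getsockname()[1]
print(port_open("127.0.0.1", demo_port))   # True while listening

server.close()
print(port_open("127.0.0.1", demo_port))   # False once nothing listens
```

If `port_open('10.X.X.X', 9042)` is False from the laptop, the fix is in networking (security group, firewall) or in cassandra.yaml, not in the Python code.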

Does DataStax DSE 5.1 Search support the Solr local parameter as used in facet.pivot?

狂风中的少年 submitted on 2019-12-01 07:21:34
Question: I understand that DSE 5.1 runs Solr 6.0. I am trying to use the facet.pivot feature with a Solr local parameter, but it does not seem to be working. My data consists of four simple fields. What I need is to group the results by the name field so as to get sum(money) for each year. I believe facet.pivot with a local parameter can solve this, but it is not working with DSE 5.1. From the Solr documentation: "Combining Stats Component With Pivots: In addition to some of the general local parameters supported by other
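For reference, the stock Solr 6 syntax for combining the stats component with pivots tags a stats.field and references the tag from facet.pivot; with the question's fields it would look like the following (whether DSE 5.1 Search honours these local parameters is exactly what is being asked):

```text
stats=true
stats.field={!tag=piv}money
facet=true
facet.pivot={!stats=piv}name,year
```

In plain Solr this returns sum, mean, and other stats on money under each name/year pivot bucket.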

Pig & Cassandra & DataStax Splits Control

亡梦爱人 submitted on 2019-12-01 06:43:03
I have been using Pig with my Cassandra data to do all kinds of amazing feats of grouping that would be almost impossible to write imperatively. I am using DataStax's integration of Hadoop and Cassandra, and I have to say it is quite impressive. Hats off to those guys! I have a pretty small sandbox cluster (2 nodes) where I am putting this system through some tests. I have a CQL table with ~53M rows (about 350 bytes each), and I notice that the Mapper later takes a very long time to grind through these 53M rows. I started poking around the logs and I can see that the map is spilling repeatedly (i
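The numbers in the question give a rough sense of the job size; a quick sketch (the ~64k-rows-per-split figure is a commonly cited default for the Cassandra Hadoop input format in that era — verify it for your version):

```python
rows = 53_000_000
row_bytes = 350

# Total raw row data the mappers must grind through:
total_bytes = rows * row_bytes
print(total_bytes / 2**30)   # roughly 17 GiB

# cassandra.input.split.size is expressed in rows; at ~64k rows per
# split the job fans out into this many map tasks:
split_rows = 64 * 1024
print(rows // split_rows)    # map tasks across only 2 nodes
```

Around 17 GiB spread over ~800 map tasks on a 2-node sandbox explains slow mappers by itself; the repeated spilling additionally points at the map-side sort buffer (io.sort.mb in classic MapReduce) being too small for the records being shuffled.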

How to prevent Cassandra commit logs filling up disk space

僤鯓⒐⒋嵵緔 submitted on 2019-12-01 03:59:37
I'm running a two-node DataStax AMI cluster on AWS. Yesterday, Cassandra started refusing connections from everything, and the system logs showed nothing. After a lot of tinkering, I discovered that the commit logs had filled all the disk space on the allotted mount, and this seemed to be causing the connection refusals (I deleted some of the commit logs, restarted, and was able to connect). I'm on DataStax AMI 2.5.1 and Cassandra 2.1.7. If I decide to wipe and restart everything from scratch, how do I ensure that this does not happen again? You could try lowering commitlog_total_space_in_mb
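The cap suggested in the answer lives in cassandra.yaml; a minimal sketch (the values are illustrative — size them to the mount's actual budget rather than copying them verbatim):

```yaml
# cassandra.yaml -- illustrative values, not recommendations
commitlog_total_space_in_mb: 2048   # cap total commit log size on disk
commitlog_segment_size_in_mb: 32    # default segment size; segments are
                                    # recycled once their data is flushed
```

When the total is reached, Cassandra flushes the memtables holding the oldest dirty segments so those segments can be recycled, which keeps the commit log directory from growing without bound.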

Datastax Cassandra Driver throwing CodecNotFoundException

ぐ巨炮叔叔 submitted on 2019-12-01 03:49:35
The exact exception is as follows: com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varchar <-> java.math.BigDecimal]. These are the versions of software I am using: Spark 1.5, Datastax-cassandra 3.2.1, CDH 5.5.1. The code I am trying to execute is a Spark program using the Java API; it basically reads data (CSVs) from HDFS and loads it into Cassandra tables. I am using the spark-cassandra-connector. Initially I had a lot of issues with a Google Guava library conflict, which I was able to resolve by shading the Guava library and
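The error means the driver has no codec that maps the CQL varchar column to java.math.BigDecimal: the program is binding a BigDecimal to a text column. The lookup failure can be sketched abstractly (the toy registry and CodecNotFound class below are illustrative, not the driver's internals):

```python
from decimal import Decimal

# Toy codec registry: which host type each CQL type accepts.
CODECS = {
    "varchar": str,
    "decimal": Decimal,
    "bigint": int,
}

class CodecNotFound(Exception):
    pass

def check(cql_type, value):
    # Refuse the bind when the value's type has no codec for the column.
    expected = CODECS[cql_type]
    if not isinstance(value, expected):
        raise CodecNotFound(
            f"Codec not found: [{cql_type} <-> {type(value).__name__}]")
    return value

check("decimal", Decimal("1.5"))      # fine: types line up
try:
    check("varchar", Decimal("1.5"))  # mirrors varchar <-> BigDecimal
except CodecNotFound as e:
    print(e)
```

The usual fixes are to convert the value to the column's type before binding (here, BigDecimal to String) or to change the column to a numeric CQL type such as decimal.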

Can I force cleanup of old tombstones?

荒凉一梦 submitted on 2019-12-01 02:54:11
I have recently lowered gc_grace_seconds for a CQL table. I am running LeveledCompactionStrategy. Is it possible for me to force purging of old tombstones from my SSTables? TL;DR: your tombstones will disappear on their own through compaction, but make sure you are running repair or they may come back from the dead. http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_about_deletes_c.html Adding some more details: tombstones are not eligible for deletion until both: 1) gc_grace_seconds has expired, and 2) they meet the requirements configured in the tombstone compaction sub
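The first of those two conditions is a simple timestamp comparison; a sketch (864,000 seconds is the stock gc_grace_seconds default of 10 days, and the epoch values are illustrative):

```python
def purgeable(deleted_at, gc_grace_seconds, now):
    """First condition only: has gc_grace_seconds elapsed since deletion?"""
    return now >= deleted_at + gc_grace_seconds

GC_GRACE = 864_000          # default: 10 days, in seconds
deleted_at = 1_700_000_000  # illustrative deletion timestamp

print(purgeable(deleted_at, GC_GRACE, deleted_at + 3_600))    # False: 1h in
print(purgeable(deleted_at, GC_GRACE, deleted_at + 900_000))  # True: past grace
```

Even once a tombstone is purgeable by this clock, it only actually disappears when a compaction touches the SSTable holding it, which is why lowering gc_grace_seconds alone does not force anything.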

Cassandra read timeout

混江龙づ霸主 submitted on 2019-12-01 02:43:59
Question: I am pulling a big amount of data from Cassandra 2.0, but unfortunately I am getting a timeout exception. My table:

CREATE KEYSPACE StatisticsKeyspace
  WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };

CREATE TABLE StatisticsKeyspace.HourlyStatistics (
    KeywordId text,
    Date timestamp,
    HourOfDay int,
    Impressions int,
    Clicks int,
    AveragePosition double,
    ConversionRate double,
    AOV double,
    AverageCPC double,
    Cost double,
    Bid double,
    PRIMARY KEY (KeywordId, Date, HourOfDay)
);
CREATE
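One common workaround for read timeouts on big pulls (besides tuning fetch size or server-side timeouts) is to break the read into many small, bounded queries along the clustering key; a sketch that builds one SELECT per hour for the schema above (the query text and parameter style are illustrative):

```python
def hourly_queries(keyword_id, date):
    # One bounded query per HourOfDay value instead of one huge read.
    return [
        ("SELECT * FROM StatisticsKeyspace.HourlyStatistics "
         "WHERE KeywordId = %s AND Date = %s AND HourOfDay = %s",
         (keyword_id, date, hour))
        for hour in range(24)
    ]

stmts = hourly_queries("kw1", "2019-12-01")
print(len(stmts))   # 24 small reads instead of one large one
```

Each query touches a small slice of a single partition, so every read finishes well inside the timeout; the client then concatenates the slices.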

DataStax Mismatch for Key Issue

落花浮王杯 submitted on 2019-12-01 01:46:41
Our current setup is DSE 5.0.2 with a 3-node cluster. We are currently facing issues under heavy load, along with node failures. The debug.log details are given below:

DEBUG [ReadRepairStage:8] 2016-09-27 14:11:58,781 ReadCallback.java:234 - Digest mismatch:
org.apache.cassandra.service.DigestMismatchException: Mismatch for key DecoratedKey(5503649670304043860, 343233) (45cf191fb10d902dc052aa76f7f0b54d vs ffa7b4097e7fa05de794371092c51c68)
    at org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:85) ~[cassandra-all-3.0.7.1159.jar:3.0.7.1159]
    at org.apache.cassandra.service
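A digest mismatch means replicas returned different data for the key and read repair had to reconcile them inline, which adds latency under load. The standard response is to make the replicas agree ahead of time with anti-entropy repair; a command fragment (run against each node, not executable outside a cluster):

```shell
# Repair each node's primary ranges so replicas converge;
# schedule this regularly, at least once per gc_grace_seconds.
nodetool repair -pr

# While loaded, watch for a backlog in the read-repair thread pool:
nodetool tpstats
```

If mismatches persist after repair, check for dropped mutations and clock skew between nodes, both of which keep replicas diverging.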