cassandra-3.0

Performance impact of using camelCase in Cassandra columns

Posted by 拜拜、爱过 on 2019-12-23 04:48:43
Question: I know Cassandra generally converts all column names to lowercase. Is there a performance impact to using camelCase column names in Cassandra? I used double quotes around the columns and was able to store the column names in camelCase, like below:

    CREATE TABLE test (
        Foo int PRIMARY KEY,
        "Bar" int
    );

Will there be a performance impact in storing the column names with double quotes?

Answer 1: I don't believe there's an impact. I would say that the case-insensitive nature of CQL only
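To illustrate the quoting behaviour, here is a minimal sketch using the DataStax Java driver 3.x (which other questions on this page also use); the contact point and keyspace name are assumptions. The unquoted column is folded to lowercase, while the quoted one must be written with the same quotes and casing in every later statement:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class QuotedColumns {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("test_ks")) {
                // Unquoted Foo is folded to lowercase "foo"; quoted "Bar" keeps its case.
                session.execute("CREATE TABLE IF NOT EXISTS test (Foo int PRIMARY KEY, \"Bar\" int)");
                session.execute("INSERT INTO test (foo, \"Bar\") VALUES (1, 2)");
                // SELECT bar ... would fail: the quoted column must always be written "Bar".
                System.out.println(session.execute("SELECT foo, \"Bar\" FROM test").one());
            }
        }
    }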

Cassandra query failure (Tombstones)

Posted by 可紊 on 2019-12-23 00:53:13
Question: So this is driving me crazy. I tried querying one of my tables in Cassandra and it showed a query failure. I dug deeper into the reason behind it and found that it was because of tombstones. I changed GC_GRACE_SECONDS to zero and triggered compaction using nodetool, and when I queried again it worked fine. However, on subsequent calls the query failed again for the same reason. I am using the cassandra-nodejs driver. This is my data model:

    CREATE TABLE my_table (
        firstname text,
        lastname text,
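For reference, the table-level setting the poster changed can be applied from the Java driver as in the sketch below (keyspace and contact point are assumptions). Note that gc_grace_seconds = 0 is generally unsafe on a multi-node cluster, because it removes the window in which deletes propagate through repair, and the tombstones themselves only disappear once compaction actually runs:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class TombstoneGraceExample {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_ks")) {
                // Keep tombstones for one day instead of the default ten before
                // compaction (e.g. 'nodetool compact') is allowed to drop them.
                session.execute("ALTER TABLE my_table WITH gc_grace_seconds = 86400");
            }
        }
    }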

Cassandra cluster: high dropped mutations for a specific table on a specific node

Posted by 大憨熊 on 2019-12-22 12:06:08
Question: My compression strategy in production was LZ4 compression, but I modified it to Deflate. For the compression change we had to use nodetool upgradesstables to forcefully rewrite all SSTables with the new compression strategy. But once the upgradesstables command completed on all 5 nodes in the cluster, my requests started to fail, both reads and writes. The issue was traced to a specific node out of the 5-node cluster, and to a specific table on that node. My whole cluster has roughly the same amount of data and
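For context, the compression change described above is a table-level ALTER followed by an SSTable rewrite. A minimal sketch with the Java driver (keyspace, table, and chunk length are assumptions); only newly written SSTables pick up the new compressor until nodetool upgradesstables rewrites the old ones:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class CompressionChangeExample {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("prod_ks")) {
                // Switch the table from LZ4 to Deflate; run
                // 'nodetool upgradesstables prod_ks my_table' afterwards on each node.
                session.execute("ALTER TABLE my_table WITH compression = "
                        + "{'class': 'DeflateCompressor', 'chunk_length_in_kb': 64}");
            }
        }
    }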

How to fix an exception when running a Spark SQL program locally on Windows 10 with Hive support enabled?

Posted by 谁说我不能喝 on 2019-12-20 07:26:03
Question: I am working with Spark SQL 2.3.1 and I am trying to enable Hive support while creating a session, as below:

    .enableHiveSupport()
    .config("spark.sql.warehouse.dir", "c://tmp//hive")

I ran the command below:

    C:\Software\hadoop\hadoop-2.7.1\bin>winutils.exe chmod 777 C:\tmp\hive

While running my program I get:

    Caused by: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw- at org.apache.hadoop.hive
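Common causes for the permissions sticking at rw-rw-rw- are a winutils.exe build that does not match the Hadoop version, or running chmod on a different drive than the one the program runs from; winutils.exe ls C:\tmp\hive should show drwxrwxrwx afterwards. A minimal sketch of the session setup in Java (the app name and warehouse path are assumptions; the warehouse dir is separate from the Hive scratch dir):

    import org.apache.spark.sql.SparkSession;

    public class HiveSupportExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("hive-support-test")
                    .master("local[*]")
                    .config("spark.sql.warehouse.dir", "file:///C:/tmp/spark-warehouse")
                    .enableHiveSupport()
                    .getOrCreate();
            // Fails at getOrCreate()/first query if the scratch dir is not writable.
            spark.sql("SHOW DATABASES").show();
            spark.stop();
        }
    }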

Is Cassandra unable to store relationships that cross the partition size limit?

Posted by 拜拜、爱过 on 2019-12-20 04:58:29
Question: I've noticed that relationships cannot be properly stored in C* due to its 100 MB partition limit, and denormalization doesn't help in this case. The fact that C* can have 2B cells per partition doesn't help either, as 2B cells of just longs amount to 16 GB ?!? Doesn't that cross the 100 MB partition size limit? This is what I don't understand in general: C* proclaims it can have 2B cells, but a partition's size should not cross 100 MB??? What is the idiomatic way to do this? People say that this an ideal use
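The idiomatic approach is usually to split a large partition by adding an artificial bucket to the partition key, so no single partition approaches the practical size limit; the 2B-cells figure is a hard ceiling, not a target. A hedged sketch (the schema and bucket count are illustrative assumptions):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class BucketedRelationships {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("graph_ks")) {
                // ((user_id, bucket)) spreads one user's relationships over many
                // partitions; readers iterate buckets 0..15 instead of one huge partition.
                session.execute("CREATE TABLE IF NOT EXISTS followers ("
                        + " user_id uuid, bucket int, follower_id uuid,"
                        + " PRIMARY KEY ((user_id, bucket), follower_id))");
                // Writers pick a bucket deterministically, e.g. from a hash of follower_id.
            }
        }
    }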

Cassandra does not start on Java 10

Posted by 对着背影说爱祢 on 2019-12-20 02:44:21
Question: I have a brand-new Windows 10 Home installation with a brand-new installation of JDK 10.0.1 (which is what Oracle recommended when I went to the JDK download site). I just now downloaded Cassandra 3.11.2, untarred it, and put the bin directory on my classpath. When I attempt to start Cassandra using the cassandra -f command, I get this error:

    PS C:\javatools> cassandra -f
    *---------------------------------------------------------------------*
    *------------------------------------------------
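The underlying issue is that Cassandra 3.11.x only runs on Java 8; JDK 9 and 10 are not supported, so the practical fix is to install JDK 8 and point JAVA_HOME at it. The sketch below is only an illustration of the kind of version gate involved, not Cassandra's actual startup code:

    public class JvmVersionGate {
        public static void main(String[] args) {
            // Java 8 reports "1.8"; JDK 9+ reports "9", "10", and so on.
            String spec = System.getProperty("java.specification.version");
            if (!"1.8".equals(spec)) {
                System.err.println("Unsupported JVM " + spec + ": Cassandra 3.11 requires Java 8");
                System.exit(1);
            }
            System.out.println("Java 8 detected; OK to start Cassandra");
        }
    }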

How does default_time_to_live delete rows without tombstones in Cassandra?

Posted by 血红的双手。 on 2019-12-19 21:52:43
Question: From "How is data deleted?": Cassandra allows you to set a default_time_to_live property for an entire table. Columns and rows marked with regular TTLs are processed as described above; but when a record exceeds the table-level TTL, Cassandra deletes it immediately, without tombstoning or compaction. This is also answered here: if a table has default_time_to_live on it, then rows that exceed this time limit are deleted immediately without tombstones being written. And commented in The Last Pickle's
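For reference, the table-level TTL under discussion is set as in the sketch below (keyspace, table, and TTL value are assumptions). Every row inherits the table's TTL from its write time unless a per-statement USING TTL overrides it:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class DefaultTtlExample {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("test_ks")) {
                // Rows expire seven days after being written unless overridden per write.
                session.execute("CREATE TABLE IF NOT EXISTS events ("
                        + " id uuid PRIMARY KEY, payload text)"
                        + " WITH default_time_to_live = 604800");
                // This row instead expires after 60 seconds.
                session.execute("INSERT INTO events (id, payload) VALUES (uuid(), 'x') USING TTL 60");
            }
        }
    }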

How to store java.sql.Date in a Cassandra date field using MappingManager?

Posted by 橙三吉。 on 2019-12-19 04:50:33
Question: Can someone help me store the current system date in a Cassandra date column in the format yyyy-mm-dd using Java? I get an exception while saving java.sql.Date using MappingManager. My sample program is:

Test.java

    import com.datastax.driver.mapping.annotations.Table;
    import java.sql.Date;

    @Table(keyspace = "testing", name = "test")
    public class Test {
        private String uid;
        private Date rece;

        public String getUid() {
            return uid;
        }

        public void setUid(String uid) {
            this.uid = uid;
        }

        public Date
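The usual cause is that the 3.x Java driver maps the CQL date type to com.datastax.driver.core.LocalDate, not java.sql.Date, so the mapper has no codec for the field (registering a custom codec from the driver-extras module is the alternative). A sketch of the entity rewritten with LocalDate, keeping the keyspace and table from the question:

    import com.datastax.driver.core.LocalDate;
    import com.datastax.driver.mapping.annotations.Table;

    @Table(keyspace = "testing", name = "test")
    public class Test {
        private String uid;
        private LocalDate rece; // CQL 'date' maps to the driver's LocalDate in driver 3.x

        public String getUid() { return uid; }
        public void setUid(String uid) { this.uid = uid; }
        public LocalDate getRece() { return rece; }
        public void setRece(LocalDate rece) { this.rece = rece; }
    }

Storing today's date would then look like test.setRece(LocalDate.fromMillisSinceEpoch(System.currentTimeMillis())) before calling mapper.save(test).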

Aggregation in Cassandra across partitions

Posted by Deadly on 2019-12-18 09:38:35
Question: I have a data model like below:

    CREATE TABLE appstat.nodedata (
        nodeip text,
        timestamp timestamp,
        flashmode text,
        physicalusage int,
        readbw int,
        readiops int,
        totalcapacity int,
        writebw int,
        writeiops int,
        writelatency int,
        PRIMARY KEY (nodeip, timestamp)
    ) WITH CLUSTERING ORDER BY (timestamp DESC)

where nodeip is the partition key and timestamp is the clustering key (sorted in descending order to get the latest). Sample data in this table:

    SELECT * from nodedata WHERE nodeip = '172.30.56.60' LIMIT 2;
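Since CQL only aggregates efficiently within a single partition, aggregation across nodeip values is normally done client-side (or in Spark). A hedged sketch that takes the latest row per node and sums one metric in the client; the node list is an assumption:

    import com.datastax.driver.core.*;

    public class CrossPartitionAggregate {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("172.30.56.60").build();
                 Session session = cluster.connect("appstat")) {
                // LIMIT 1 returns the newest row thanks to the DESC clustering order.
                PreparedStatement latest = session.prepare(
                        "SELECT readbw FROM nodedata WHERE nodeip = ? LIMIT 1");
                long total = 0;
                for (String node : new String[] {"172.30.56.60", "172.30.56.61"}) {
                    Row row = session.execute(latest.bind(node)).one();
                    if (row != null) total += row.getInt("readbw");
                }
                System.out.println("total latest readbw across nodes: " + total);
            }
        }
    }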

Cassandra batch prepared statement size warning

Posted by 依然范特西╮ on 2019-12-18 09:34:48
Question: I see this warning continuously in debug.log in Cassandra:

    WARN [SharedPool-Worker-2] 2018-05-16 08:33:48,585 BatchStatement.java:287 - Batch of prepared statements for [test, test1] is of size 6419, exceeding specified threshold of 5120 by 1299.

In this:

    6419 - input payload size (batch)
    5120 - threshold size
    1299 - bytes above the threshold value

As per this ticket, https://github.com/krasserm/akka-persistence-cassandra/issues/33 I see that it is due to the increase
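The 5120 figure comes from batch_size_warn_threshold_in_kb in cassandra.yaml (5 KB by default; batch_size_fail_threshold_in_kb rejects batches outright at 50 KB). Unless all statements target the same partition, the usual fix is to replace the large batch with individual asynchronous writes rather than raising the threshold. A hedged sketch (table and column names are assumptions):

    import java.util.ArrayList;
    import java.util.List;
    import com.datastax.driver.core.*;

    public class UnbatchedWrites {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("test")) {
                PreparedStatement insert = session.prepare(
                        "INSERT INTO test1 (id, payload) VALUES (?, ?)");
                List<ResultSetFuture> futures = new ArrayList<>();
                for (int i = 0; i < 1000; i++) {
                    // Individual async writes never trip the batch size thresholds.
                    futures.add(session.executeAsync(insert.bind(i, "payload-" + i)));
                }
                futures.forEach(ResultSetFuture::getUninterruptibly); // wait for all writes
            }
        }
    }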