cassandra-3.0

Aggregation in Cassandra across partitions

烈酒焚心 submitted on 2019-11-29 18:05:43
I have a data model like the one below:

    CREATE TABLE appstat.nodedata (
        nodeip text,
        timestamp timestamp,
        flashmode text,
        physicalusage int,
        readbw int,
        readiops int,
        totalcapacity int,
        writebw int,
        writeiops int,
        writelatency int,
        PRIMARY KEY (nodeip, timestamp)
    ) WITH CLUSTERING ORDER BY (timestamp DESC)

where nodeip is the partition key and timestamp is the clustering key (sorted in descending order to get the latest rows first). Sample data in this table:

    SELECT * from nodedata WHERE nodeip = '172.30.56.60' LIMIT 2;

    nodeip | timestamp | flashmode | physicalusage | readbw | readiops | totalcapacity | writebw | writeiops
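The excerpt cuts off above, but the usual answer for cross-partition aggregation on a model like this is to aggregate client-side, issuing one cheap single-partition query per node instead of a cluster-wide scan. A minimal sketch with the DataStax Java driver 3.x, assuming a known list of node IPs and that only the latest sample per node is wanted (the class name, contact point, and choice of readbw are illustrative, not from the question):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import java.util.Arrays;
    import java.util.List;

    public class NodeDataAggregator {
        public static void main(String[] args) {
            List<String> nodeIps = Arrays.asList("172.30.56.60", "172.30.56.61");
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                long totalReadBw = 0;
                int count = 0;
                for (String ip : nodeIps) {
                    // Single-partition query: hits one replica set and uses the
                    // DESC clustering order, so LIMIT 1 returns the latest sample.
                    Row row = session.execute(
                            "SELECT readbw FROM appstat.nodedata WHERE nodeip = ? LIMIT 1", ip).one();
                    if (row != null) {
                        totalReadBw += row.getInt("readbw");
                        count++;
                    }
                }
                if (count > 0) {
                    System.out.printf("Average latest readbw across %d nodes: %.2f%n",
                            count, (double) totalReadBw / count);
                }
            }
        }
    }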

Native Transport Requests in Cassandra

邮差的信 submitted on 2019-11-29 18:04:51
I picked up some points about Native Transport Requests in Cassandra from this link: What are native transport requests in Cassandra? As I understand it, any query I execute in Cassandra is a Native Transport Request. I frequently get a Request Timed Out error in Cassandra, and I observed the following in the Cassandra debug log as well as in nodetool tpstats:

    /var/log/cassandra# nodetool tpstats
    Pool Name              Active  Pending  Completed  Blocked  All time blocked
    MutationStage               0        0  186933949        0                 0
    ViewMutationStage           0        0          0        0                 0
    ReadStage                   0        0  781880580        0                 0
    RequestResponseStage        0        0    5783147        0                 0
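The excerpt is truncated before the Native-Transport-Requests row of the tpstats output, which is the pool that actually serves client connections; pending or blocked NTRs usually point to server saturation rather than a client bug. One client-side knob still worth checking is the driver's per-request read timeout. A minimal sketch with the 3.x DataStax Java driver (the 20-second value and the contact point are assumptions, not recommendations):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SocketOptions;

    public class TimeoutTuning {
        public static void main(String[] args) {
            // Driver 3.x defaults to a 12-second per-request read timeout;
            // raising it keeps slow-but-succeeding requests from surfacing as timeouts.
            SocketOptions socketOptions = new SocketOptions().setReadTimeoutMillis(20000);
            try (Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1") // placeholder contact point
                    .withSocketOptions(socketOptions)
                    .build();
                 Session session = cluster.connect()) {
                session.execute("SELECT release_version FROM system.local");
            }
        }
    }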

Cassandra batch prepared statement size warning

China☆狼群 submitted on 2019-11-29 17:04:09
I see this warning continuously in the Cassandra debug.log:

    WARN [SharedPool-Worker-2] 2018-05-16 08:33:48,585 BatchStatement.java:287 - Batch of prepared statements for [test, test1] is of size 6419, exceeding specified threshold of 5120 by 1299.

where
6419 - input payload size (batch)
5120 - threshold size
1299 - bytes above the threshold

So, as per this related ticket, https://github.com/krasserm/akka-persistence-cassandra/issues/33 I see that it is due to the increase in input payload size, so I increased commitlog_segment_size_in_mb in cassandra.yaml to 60 MB and we
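For context: the 5120-byte figure is batch_size_warn_threshold_in_kb in cassandra.yaml (default 5 KiB), while commitlog_segment_size_in_mb governs the maximum single-mutation size, so raising the latter does not silence this warning. The usual fix is to send smaller batches. A minimal sketch with the 3.x Java driver (the chunk size of 100 is an arbitrary starting point to tune down until the warning stops):

    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;
    import java.util.List;

    public class ChunkedBatchWriter {
        private static final int CHUNK_SIZE = 100; // tune until the size warning disappears

        static void writeChunked(Session session, PreparedStatement insert, List<Object[]> rows) {
            BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
            for (Object[] row : rows) {
                batch.add(insert.bind(row));
                if (batch.size() >= CHUNK_SIZE) {
                    session.execute(batch);
                    batch.clear();
                }
            }
            if (batch.size() > 0) {
                session.execute(batch); // flush the final partial chunk
            }
        }
    }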

Enable one-time Cassandra Authentication and Authorization check and cache it forever

混江龙づ霸主 submitted on 2019-11-29 12:34:36
I use authentication and authorization in my single-node Cassandra setup, but I frequently get the following error in the Cassandra server logs:

    ERROR [SharedPool-Worker-71] 2018-06-01 10:40:36,661 ErrorMessage.java:338 - Unexpected exception during request
    java.lang.RuntimeException: org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - received only 1 responses.
        at org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:489) ~[apache-cassandra-3.0.8.jar:3.0.8]
        at org.apache.cassandra.auth.CassandraRoleManager.getRoles(CassandraRoleManager.java
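Cassandra cannot literally cache auth results forever, but the validity windows are configurable, which takes the system_auth reads (the source of the ReadTimeoutException above) off the per-request hot path. The role, permission and, in newer 3.x releases, credential caches are set in cassandra.yaml; a sketch with illustrative one-hour values, assuming role and permission changes are rare enough that stale cache entries are acceptable:

    roles_validity_in_ms: 3600000
    permissions_validity_in_ms: 3600000
    credentials_validity_in_ms: 3600000

With these values, each role's auth data is re-read from system_auth at most once per hour instead of on every request.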

What is the batch limit in Cassandra?

亡梦爱人 submitted on 2019-11-28 09:00:49
I have a Java client that pushes (INSERT) records in batches to a Cassandra cluster. The elements in each batch all have the same row key, so they will all be placed on the same node. Also, I don't need the transactions to be atomic, so I have been using unlogged batches. The number of INSERT commands in each batch depends on different factors, but can be anywhere between 5 and 50,000. First I just put as many commands as I had in one batch and submitted it. This threw com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large. Then I used a cap of 1000 INSERTs per batch, and then down to
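The "Batch too large" rejection is governed by batch_size_fail_threshold_in_kb in cassandra.yaml (default 50 KiB) and counts serialized mutation bytes, not statement count, which is why a fixed cap on the number of INSERTs behaves inconsistently across payloads. A sketch that chunks by an estimated serialized size instead (the 200-byte per-row estimate is an assumption to be measured against the real schema):

    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;
    import java.util.List;

    public class SizeBoundedBatchWriter {
        private static final long LIMIT_BYTES = 40 * 1024; // safety margin under the 50 KiB fail threshold
        private static final long PER_ROW_ESTIMATE = 200;  // assumption: measure for your schema

        static void write(Session session, PreparedStatement insert, List<Object[]> rows) {
            BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
            long estimated = 0;
            for (Object[] row : rows) {
                batch.add(insert.bind(row));
                estimated += PER_ROW_ESTIMATE;
                if (estimated >= LIMIT_BYTES) {
                    // All rows share a row key, so each chunk is still a single-partition write.
                    session.execute(batch);
                    batch.clear();
                    estimated = 0;
                }
            }
            if (batch.size() > 0) {
                session.execute(batch);
            }
        }
    }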

Using a prepared statement multiple times gives a Cassandra warning about reduced query performance

蹲街弑〆低调 submitted on 2019-11-27 16:28:56
I get data from an external source and insert it into Cassandra on a daily basis; then I need to retrieve a whole week's data from Cassandra, do some processing, and insert the results back into Cassandra. I have a lot of records, and each record executes most of the operations below. To do this I have written the program below; it works fine, but I get a warning, and according to the API documentation I should not prepare the same statement multiple times because it reduces performance. Please tell me how to avoid this to improve performance, or suggest an alternative approach to achieve this in Scala. Here is some part
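The warning in question is the driver's "Re-preparing already prepared query" message, logged whenever session.prepare() is called again with a CQL string that has already been prepared. The fix is to prepare each distinct statement once and reuse it, binding fresh values per execution. A minimal sketch (shown with the 3.x Java driver; the same API is callable unchanged from Scala):

    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class StatementCache {
        private final Session session;
        private final Map<String, PreparedStatement> cache = new ConcurrentHashMap<>();

        public StatementCache(Session session) {
            this.session = session;
        }

        // Prepare each distinct CQL string exactly once; callers reuse the
        // PreparedStatement and bind fresh values per execution.
        public PreparedStatement prepare(String cql) {
            return cache.computeIfAbsent(cql, session::prepare);
        }
    }

Usage (keyspace, table and columns are placeholders): session.execute(cache.prepare("INSERT INTO ks.t (k, v) VALUES (?, ?)").bind(key, value));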

NoSpamLogger.java Maximum memory usage reached Cassandra

杀马特。学长 韩版系。学妹 submitted on 2019-11-27 06:58:46
I have a 5-node Cassandra cluster with ~650 GB of data on each node and a replication factor of 3. I have recently started seeing the following line in /var/log/cassandra/system.log:

    INFO [ReadStage-5] 2017-10-17 17:06:07,887 NoSpamLogger.java:91 - Maximum memory usage reached (1.000GiB), cannot allocate chunk of 1.000MiB

I have attempted to increase file_cache_size_in_mb, but sooner or later the same message catches up. I have tried to go as high as 2 GB for this
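Note that this NoSpamLogger line is logged at INFO, not ERROR: it means the off-heap buffer pool backing the chunk cache is full, not that the node failed. The pool size is file_cache_size_in_mb in cassandra.yaml; a sketch of the relevant 3.x settings, with illustrative values:

    file_cache_size_in_mb: 2048
    buffer_pool_use_heap_if_exhausted: true   # fall back to heap buffers instead of failing the read

Raising the cache only delays the message for as long as the hot data set fits; with ~650 GB per node, some chunk-cache pressure at a 1-2 GiB cache is expected, and the INFO line by itself is generally harmless.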
