cassandra-2.0

How to downgrade Cassandra 3.0.0 -> 2.x?

丶灬走出姿态 submitted on 2019-12-07 09:57:06
Question: I recently found out that Cassandra 3.0.0 and PrestoDB don't play well together. I have a lot of data loaded into Cassandra 3.0 and I wouldn't like to rebuild the whole thing. Is there a safe way to downgrade to 2.x temporarily until Presto is updated, so that I can come back to 3.0 afterwards? I know downgrading is not officially supported, but I'm wondering whether more experienced S.O. Cassandra users could point me in the right direction here. I assume the answer will be "don't try it", but who…
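Since the 3.0 SSTable format cannot be read by a 2.x node, any move back generally means exporting and re-loading the data rather than downgrading in place. A minimal sketch of one such workaround, assuming a keyspace ks and table t small enough for CSV export (names and paths are placeholders, not from the question):

    -- on the 3.0 cluster
    cqlsh> COPY ks.t TO '/tmp/t.csv' WITH HEADER = true;

    -- on a freshly installed 2.x cluster, after recreating the schema
    cqlsh> COPY ks.t FROM '/tmp/t.csv' WITH HEADER = true;

For larger datasets a bulk-loading job (e.g. via Spark) would be less painful than cqlsh COPY, but the same caveat applies: the 3.0 data files themselves cannot simply be dropped into a 2.x data directory.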

Cassandra: adding disks / increasing storage volume without adding new nodes

最后都变了- submitted on 2019-12-07 05:54:00
Question: I have to increase the storage volume in a Cassandra cluster; the performance and throughput, however, are still more than enough. My first thought was to only add drives. Is it possible to increase storage volume without adding new nodes? Is it possible with JBOD to add new drives live on a running node? Or is the only way to take it offline, add the disks, and bring it back online afterwards? Any best practices? Thanks, I really appreciate your help. Answer 1: You can modify the cassandra.yaml to have…
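The answer is cut off above; it is most likely pointing at the data_file_directories setting in cassandra.yaml. A minimal sketch, assuming the new drives are mounted under /mnt/disk2 and /mnt/disk3 (paths are placeholders):

    data_file_directories:
        - /var/lib/cassandra/data
        - /mnt/disk2/cassandra/data
        - /mnt/disk3/cassandra/data

cassandra.yaml is only read at startup, so each node needs a rolling restart after the directories are added; data then spreads onto the new JBOD directories as new SSTables are written and compacted.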

Token-aware Astyanax connection pool connecting to nodes without distributing connections across them

纵饮孤独 submitted on 2019-12-07 04:56:28
Question: I was using an Astyanax connection pool defined like this: ipSeeds = "LOAD_BALANCER_HOST:9160"; conPool.setSeeds(ipSeeds) .setDiscoveryType(NodeDiscoveryType.TOKEN_AWARE) .setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE); However, my cluster has 4 nodes and I have 8 client machines connecting to it. LOAD_BALANCER_HOST forwards requests to one of my four nodes. On a client node, I have:
$ netstat -an | grep 9160 | awk '{print $5}' | sort | uniq -c
    235 node1:9160
    680 node2:9160
      4 node3:9160
      4…
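For reference, a token-aware Astyanax context is usually wired up along these lines. The cluster, keyspace and seed addresses below are placeholders; pointing the seeds at real node addresses (with ring discovery enabled) rather than at a load balancer is what typically lets the pool spread connections across the ring. This is a sketch, not taken from the question:

    import com.netflix.astyanax.AstyanaxContext;
    import com.netflix.astyanax.Keyspace;
    import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
    import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
    import com.netflix.astyanax.connectionpool.impl.ConnectionPoolType;
    import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
    import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
    import com.netflix.astyanax.thrift.ThriftFamilyFactory;

    public class TokenAwarePoolExample {
        public static void main(String[] args) {
            AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
                .forCluster("MyCluster")            // placeholder cluster name
                .forKeyspace("my_keyspace")         // placeholder keyspace
                .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
                    // let the client discover every ring member itself
                    .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
                    .setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE))
                .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyPool")
                    .setPort(9160)
                    .setMaxConnsPerHost(3)
                    // seed with real node addresses rather than the load balancer
                    .setSeeds("node1:9160,node2:9160,node3:9160,node4:9160"))
                .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
                .buildKeyspace(ThriftFamilyFactory.getInstance());
            context.start();
            Keyspace keyspace = context.getClient();
        }
    }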

Error on Cassandra server: Unable to gossip with any seeds

风格不统一 submitted on 2019-12-07 04:38:47
Question: I'm adding a second node to a single-node Cassandra cluster, and getting a stack trace on the second node:
ERROR 18:13:42,841 Exception encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds
    at org.apache.cassandra.gms.Gossiper.doShadowRound(Gossiper.java:1193)
    at org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:446)
    at org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:655)
    at org.apache…
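"Unable to gossip with any seeds" usually comes down to the seed list, listen_address or cluster_name not lining up between the two nodes, or port 7000 being blocked between them. A minimal sketch of the relevant cassandra.yaml settings on the second node, assuming the existing node's IP is 10.0.0.1 and the new node's is 10.0.0.2 (addresses are placeholders):

    cluster_name: 'MyCluster'        # must match the first node exactly
    listen_address: 10.0.0.2         # this node's own IP, not localhost
    rpc_address: 10.0.0.2
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.0.1"    # point at the existing node, not only at itself

The existing node's cassandra.yaml should also list 10.0.0.1 as a seed, and both machines need the storage port (7000 by default) open to each other.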

Understanding Cassandra's storage overhead

烈酒焚心 submitted on 2019-12-07 04:21:37
Question: I have been reading this section of the Cassandra docs and found the following a little puzzling:
Determine column overhead:
regular_total_column_size = column_name_size + column_value_size + 15
counter/expiring_total_column_size = column_name_size + column_value_size + 23
Every column in Cassandra incurs 15 bytes of overhead. Since each row in a table can have different column names as well as differing numbers of columns, metadata is stored for each column. For counter columns and…
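As a quick worked example of those formulas, assuming a regular column whose name takes 10 bytes and whose value is a 4-byte int, and a counter column with the same name size (counter values are 8-byte longs):

    regular column:  10 + 4 + 15 = 29 bytes on disk for that cell
    counter column:  10 + 8 + 23 = 41 bytes on disk for that cell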

What options are there to speed up a full repair in Cassandra?

烈酒焚心 submitted on 2019-12-07 00:02:44
Question: I have a Cassandra datacenter on which I'd like to run a full repair. The datacenter is used for analytics/batch processing and I'm willing to sacrifice latencies to speed up a full repair (nodetool repair). Writes to the datacenter are moderate. What are my options to make the full repair faster? Some ideas: increase stream throughput? I guess I could disable autocompaction and decrease compaction throughput temporarily. Not sure I'd want to do that, though... Additional information: I'm running…
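A rough sketch of the knobs mentioned above, with the throughput number as a placeholder to tune for your hardware and network:

    # raise the streaming throughput cap for the duration of the repair (default 200)
    nodetool setstreamthroughput 400

    # trade compaction for repair speed: unthrottle compaction (0 = no limit) ...
    nodetool setcompactionthroughput 0
    # ... or pause automatic compaction entirely (re-enable when the repair is done)
    nodetool disableautocompaction

    # repair only this node's primary ranges, in parallel, restricted to the local DC
    nodetool repair -par -pr -local my_keyspace

    # afterwards
    nodetool enableautocompaction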

How to list column families in keyspace?

别来无恙 submitted on 2019-12-06 16:50:36
Question: How can I get a list of all column families in a keyspace in Cassandra using CQL 3? Answer 1:
cqlsh> select columnfamily_name from system.schema_columnfamilies where keyspace_name = 'test';
 columnfamily_name
-------------------
 commits
 foo
 has_all_types
 item_by_user
 test
 test2
 user_by_item
(7 rows)
Answer 2: Or even more simply (if you are using cqlsh), switch over to your keyspace with use and then execute describe tables:
cqlsh> use products;
cqlsh:products> describe tables;
itemmaster itemhierarchy…

Apache Spark 1.5 with Cassandra : Class cast exception

霸气de小男生 submitted on 2019-12-06 14:14:12
I use the following software: Cassandra 2.1.9, Spark 1.5, Java using the Cassandra driver provided by DataStax, Ubuntu 12.04. When I run Spark locally using local[8], the program runs fine and data is saved into Cassandra. However, when I submit the job to the Spark cluster, the following exception is thrown:
16 Sep 2015 03:08:58,808 WARN [task-result-getter-0] (Logging.scala:71) TaskSetManager - Lost task 3.0 in stage 0.0 (TID 3, 192.168.50.131): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.HashMap$SerializationProxy to field scala.collection.Map$WithDefault…
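This HashMap$SerializationProxy ClassCastException is commonly reported when the Spark version the job was compiled against, the version running on the cluster, and the spark-cassandra-connector version don't all match, or when the connector jar isn't shipped to the executors. A hedged sketch of a submit command that keeps the versions aligned and distributes the connector (class name, host and exact versions below are assumptions, not taken from the question):

    spark-submit \
      --class com.example.MyJob \
      --master spark://spark-master:7077 \
      --packages com.datastax.spark:spark-cassandra-connector_2.10:1.5.0-M2 \
      --conf spark.cassandra.connection.host=cassandra-host \
      my-job-assembly.jar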

Cassandra cluster - specific node, specific table with high dropped mutations

杀马特。学长 韩版系。学妹 submitted on 2019-12-06 14:02:44
My compression strategy in production was LZ4 compression, but I modified it to Deflate. For the compression change, we had to use nodetool upgradesstables to forcefully rewrite the compression of all SSTables. But once the upgradesstables command completed on all 5 nodes in the cluster, my requests started to fail, both reads and writes. The issue is traced to a specific node out of the 5-node cluster, and to a specific table on that node. My whole cluster has roughly the same amount of data and configuration, but this 1 node in particular is misbehaving. Output of nodetool status:
|/ State…
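For context, the compression change and follow-up steps described above look roughly like this; keyspace and table names are placeholders, and the compression option uses the Cassandra 2.x form:

    -- change the table's compression algorithm (2.x syntax)
    ALTER TABLE my_keyspace.my_table
      WITH compression = { 'sstable_compression' : 'DeflateCompressor' };

    # rewrite existing SSTables with the new compressor, run on each node
    nodetool upgradesstables -a my_keyspace my_table

    # afterwards, check dropped mutations and per-table stats on the misbehaving node
    nodetool tpstats
    nodetool cfstats my_keyspace.my_table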

What is the best way to expose Cassandra REST API to web?

﹥>﹥吖頭↗ submitted on 2019-12-06 12:30:04
I would like to work with Cassandra from a JavaScript web app using a REST API. The REST API should support the basic commands for working with the DB: create table, select/add/update/remove items. It would be perfect to have something similar to the OData protocol. P.S. I'm looking for some library or component; Java is most preferred. Answer 1: The Staash solution looks perfect for the task - https://github.com/Netflix/staash Answer 2: You can use the DataStax drivers. I used them via Scala but you can use Java; a Session object is a long-lived object and it should not be used in a request/response short-lived fashion, but it's up to you. ref. rules…
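As a small illustration of the "long-lived Session" advice in that answer, a minimal sketch using the DataStax Java driver; the contact point, keyspace, table and query are placeholders, and the REST framework wrapped around this class is left out:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class CassandraDao {
        // build once at application startup and reuse for every REST request
        private static final Cluster cluster =
            Cluster.builder().addContactPoint("127.0.0.1").build();
        private static final Session session = cluster.connect("my_keyspace");

        public static String getItemName(String itemId) {
            ResultSet rs = session.execute(
                "SELECT name FROM items WHERE id = ?", itemId);
            Row row = rs.one();
            return row == null ? null : row.getString("name");
        }

        public static void shutdown() {
            // close once, at application shutdown
            session.close();
            cluster.close();
        }
    }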