cassandra-2.0

Is there a way to see token ranges for each node in Cassandra which uses vnodes?

Submitted by ⅰ亾dé卋堺 on 2019-12-10 23:19:43
Question: Is there a way to see the token ranges for each node in a Cassandra cluster that uses vnodes? I don't want the token of each node, which you get by issuing nodetool ring; I just want to see the token ranges for each node. Answer 1: The token ranges for a given node are a function of the keyspace's topology. Programmatically, you can use the java-driver for this via Cluster.getMetadata().getTokenRanges(keyspace, host). The following code example shows retrieving all token ranges by host
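On the ring itself, each vnode token owns the interval from the previous token (exclusive) up to its own token (inclusive), wrapping around at the ends. A minimal Python sketch of that derivation, using hypothetical node names and token values (real tokens would come from nodetool ring or the driver's metadata):

```python
# Sketch: how per-node token ranges fall out of vnode token assignments.
# The node names and token values below are hypothetical examples.

def token_ranges(tokens_by_node):
    """Map each node to the (start, end] ring ranges it owns.

    A node owns the range from the previous token on the ring
    (exclusive) up to its own token (inclusive); the ring wraps."""
    ring = sorted((t, n) for n, ts in tokens_by_node.items() for t in ts)
    ranges = {n: [] for n in tokens_by_node}
    for i, (tok, node) in enumerate(ring):
        prev_tok = ring[i - 1][0]  # i == 0 wraps around to the last token
        ranges[node].append((prev_tok, tok))
    return ranges

ranges = token_ranges({"node1": [-9000, 0], "node2": [-3000, 6000]})
print(ranges)
```

With vnodes, each node carries many tokens, so each node ends up owning many small, interleaved ranges rather than one contiguous slice.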

Default sorting order of columns in Cassandra?

Submitted by 痞子三分冷 on 2019-12-10 20:24:59
Question: I was going through a tutorial where the instructor says that the default ordering of columns within a row is UTF8-type, but he does not touch on it further. I don't understand what it means, especially if my columns are of different types such as int, timestamp, etc. Also, how would I specify a sort order on the columns other than "UTF8-type"? Answer 1: He is talking about the column names, not the column values. In old Cassandra versions you could use SuperColumns,
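A quick way to see what a UTF8-type comparator means in practice: column names are compared as text, byte by byte, so a name like "10" sorts before "2" and uppercase sorts before lowercase. A Python sketch with hypothetical names:

```python
# Sketch: lexicographic (UTF8-style) ordering of column *names*.
# An integer comparator would instead put "2" before "10".
names = ["2", "10", "age", "Age"]
utf8_order = sorted(names)  # text order, like a UTF8Type comparator
print(utf8_order)           # ['10', '2', 'Age', 'age']
```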

Cassandra - Write doesn't fail, but values aren't inserted

Submitted by 本秂侑毒 on 2019-12-10 18:29:05
Question: I have a cluster of 3 Cassandra 2.0 nodes. In my application I wrote a test which tries to write some data to Cassandra and read it back. In general this works fine. The curiosity is that after I restart my computer, this test fails: after writing, I read back the same value I wrote before and get null instead of the value, although there was no exception while writing. If I manually truncate the column family used, the test passes again. After that I can execute this test as often as I

Using Apache Cassandra In Coldfusion

Submitted by 亡梦爱人 on 2019-12-10 17:13:08
Question: I'm trying to use Apache Cassandra in a project I'm coding in ColdFusion. Since ColdFusion doesn't have a driver for Apache Cassandra and vice versa, I'm attempting to use Cassandra's Java drivers. I'm pretty much a Java newbie, so please bear with me. I've managed to copy the necessary .jar files to /opt/railo/lib/ (I'm using Railo) and to connect to Cassandra from ColdFusion using the code below. What I need help with is looping through the results returned by Cassandra when

How does a single-partition batch in Cassandra work for multiple column updates?

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-10 15:41:42
Question: We have multiple update queries against a single partition of a single column family, like below:

update t1 set username = 'abc', url = 'www.something.com', age = ? where userid = 100;
update t1 set username = 'abc', url = 'www.something.com', weight = ? where userid = 100;
update t1 set username = 'abc', url = 'www.something.com', height = ? where userid = 100;

username and url will always be the same and are mandatory fields, but depending on the information given there will be extra columns. As this is
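Updates like these can be grouped into a single-partition batch: because every statement targets the same partition key (userid = 100), Cassandra applies them as one mutation to that partition, without the batchlog overhead of a multi-partition batch. A sketch in standard CQL, reusing the statements from the question:

```sql
BEGIN BATCH
  UPDATE t1 SET username = 'abc', url = 'www.something.com', age = ?    WHERE userid = 100;
  UPDATE t1 SET username = 'abc', url = 'www.something.com', weight = ? WHERE userid = 100;
  UPDATE t1 SET username = 'abc', url = 'www.something.com', height = ? WHERE userid = 100;
APPLY BATCH;
```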

Understanding “Number of keys” in nodetool cfstats

Submitted by 早过忘川 on 2019-12-10 14:24:25
Question: I am new to Cassandra. In this example I am using a cluster with 1 DC and 5 nodes, and a NetworkTopologyStrategy with a replication factor of 3.

Keyspace: activityfeed
    Read Count: 0
    Read Latency: NaN ms.
    Write Count: 0
    Write Latency: NaN ms.
    Pending Tasks: 0
    Table: feed_shubham
    SSTable count: 1
    Space used (live), bytes: 52620684
    Space used (total), bytes: 52620684
    SSTable Compression Ratio: 0.3727660543119897
    Number of keys (estimate): 137984
    Memtable cell count: 0
    Memtable data size, bytes: 0

High and low cardinality in Cassandra

Submitted by 江枫思渺然 on 2019-12-10 04:35:16
Question: I keep coming across the terms high cardinality and low cardinality in Cassandra. I don't understand what exactly they mean, what effect they have on queries, and which is preferred. Please explain with an example, since that will be easy to follow. Answer 1: The cardinality of X is nothing more than the number of elements that compose X. In Cassandra the partition key cardinality is very important for partitioning data, since the partition key is responsible for the distribution of the data
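Cardinality, in this sense, is just the count of distinct values a column takes. A Python sketch over hypothetical sample rows: a user-id column where every value is distinct is high cardinality, while a country column with a handful of repeated values is low cardinality:

```python
# Sketch: cardinality = number of distinct values in a column.
# The sample rows are hypothetical.
rows = [
    {"user_id": 1, "country": "US"},
    {"user_id": 2, "country": "US"},
    {"user_id": 3, "country": "IN"},
    {"user_id": 4, "country": "US"},
]

def cardinality(rows, column):
    """Count the distinct values appearing in the given column."""
    return len({r[column] for r in rows})

print(cardinality(rows, "user_id"))  # high: every row has a distinct value
print(cardinality(rows, "country"))  # low: only a few distinct values
```

A high-cardinality partition key spreads rows across many partitions (good distribution, but no efficient range scans over partitions); a low-cardinality one concentrates data in a few large partitions and hot-spots a few nodes.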

Pickling Error running COPY command: CQLShell on Windows

Submitted by 做~自己de王妃 on 2019-12-09 00:41:21
Question: We're running a COPY command in cqlsh on Windows 7. At first, we ran into an "IMPROPER COPY COMMAND":

COPY ourdata(data_time, data_ID, dataBlob) FROM 'TestData.csv' WITH HEADER = true;

We later started receiving this error after running the same command:

Error starting import process: Can't pickle <type 'thread.lock'>: it's not found as thread.lock
can only join a started process
cqlsh:testkeyspace> Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Program

Major compaction on LCS

Submitted by 柔情痞子 on 2019-12-08 12:46:21
Question: I have a table with LCS in a Cassandra cluster. I am observing too many tombstones, so I have decided to reduce GC grace seconds and perform a major compaction. I ran nodetool compact keyspace table, but the compaction job finished within a second; it seems the major compaction did not work. Can you please help me understand? Answer 1: If you're actually using the antique Cassandra 2.0, as the label on your question says, then indeed it didn't support major compaction on LCS, and "nodetool compact" only
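The GC-grace reduction mentioned in the question can be expressed in CQL. A sketch, where the keyspace/table names and the one-day value are hypothetical; repairs must still complete within the new window, or deleted data can resurrect:

```sql
-- Lower the tombstone grace period to one day (hypothetical names/value).
ALTER TABLE mykeyspace.mytable WITH gc_grace_seconds = 86400;
```

The manual compaction would then be triggered with nodetool compact mykeyspace mytable, subject to the LCS caveat described in the answer.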

Cassandra 1.2.x to 2.x data center rebuild

Submitted by 耗尽温柔 on 2019-12-08 04:32:18
Question: I'm trying to upgrade from Cassandra 1.2.x to 2.x. The way I normally do upgrades is by bringing up a new data center (this is on EC2, so not much of an issue) and using nodetool rebuild to move the data over to the new data center. Then I switch the apps over to the new data center, repair, and shut down the old data center. However, I am having some trouble with this going from 1.2.15.1 to 2.0.7.31. On the 2.x nodes, when I run nodetool rebuild us-east-1-2-15-1, instead of starting the