datastax

java-cassandra object: @Frozen annotation for addresses map<text, frozen<list<frozen<address>>>>

Submitted by 我与影子孤独终老i on 2019-12-22 09:21:38
Question: I am trying to insert data into Cassandra (2.1.9). My Java object has a map of a list of UDTs. On running the code I get an error regarding the @Frozen annotation. I am using the DataStax Java driver (2.1.9): http://docs.datastax.com/en/drivers/java/2.1/index.html?com/datastax/driver/mapping/annotations/FrozenValue.html My table: CREATE TABLE user ( name text, addresses map<text, frozen<list<frozen<address>>>> ) My Java class: public class User { private String name; @FrozenValue private Map<String, List<AddressUDT>
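For nested frozen collections of UDTs like the one above, the driver 2.1 object mapper also offers @Frozen, which takes the full CQL type string; @FrozenValue only marks the outermost map value as frozen. A minimal sketch, assuming the mapping module is on the classpath and using a hypothetical keyspace "ks" and a trimmed-down AddressUDT:

```java
// Sketch only: "ks" and the AddressUDT fields are assumptions, not from the post.
import java.util.List;
import java.util.Map;

import com.datastax.driver.mapping.annotations.Frozen;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
import com.datastax.driver.mapping.annotations.UDT;

@UDT(keyspace = "ks", name = "address")
class AddressUDT {
    private String street;
    // getters/setters omitted for brevity
}

@Table(keyspace = "ks", name = "user")
class User {
    @PartitionKey
    private String name;

    // Spell out the whole frozen structure so the mapper matches the CQL type
    @Frozen("map<text, frozen<list<frozen<address>>>>")
    private Map<String, List<AddressUDT>> addresses;
    // getters/setters omitted for brevity
}
```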

“no viable alternative at input” error when querying a Cassandra table

Submitted by 浪子不回头ぞ on 2019-12-22 06:29:16
Question: I have a table in Cassandra like this: CREATE TABLE vroc.sensor_data ( dpnode text, year int, month int, day int, data_timestamp bigint, data_sensor text, dsnode text, data_quality double, data_value blob, PRIMARY KEY ((dpnode, year, month, day), data_timestamp, data_sensor, dsnode) ) WITH read_repair_chance = 0.0 AND dclocal_read_repair_chance = 0.1 AND gc_grace_seconds = 864000 AND bloom_filter_fp_chance = 0.01 AND caching = { 'keys' : 'ALL', 'rows_per_partition' : 'NONE' } AND comment = ''
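The truncated post does not include the failing query, but "no viable alternative at input" is a CQL parse error, most often caused by a misquoted literal or a stray character rather than by the schema itself. A hypothetical illustration against the table above (all values invented):

```sql
-- Parse error ("no viable alternative at input '-'"): the text value is unquoted
SELECT * FROM vroc.sensor_data
 WHERE dpnode = sensor-1 AND year = 2015 AND month = 6 AND day = 2;

-- OK: text literals in single quotes, and every partition key column restricted
SELECT * FROM vroc.sensor_data
 WHERE dpnode = 'sensor-1' AND year = 2015 AND month = 6 AND day = 2;
```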

What should the datatype be for timeuuid in a DataStax mapper class?

Submitted by 一世执手 on 2019-12-22 01:09:15
Question: The datatype for one of the columns in a Cassandra table is timeuuid. While creating my Mapper class as per the docs, I am not sure which data type I should use for the timeuuid column. I understand that it should be the equivalent Java data type, so I tried java.util.Date. The column definition and Mapper class field are below: start timeuuid @PartitionKey(1) @Column(name="start") private UUID start; I get the below during CRUD operations: Codec not found for requested operation: [timeuuid ->
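In the DataStax Java driver, timeuuid maps to java.util.UUID, not java.util.Date, and the stored value must be a version-1 (time-based) UUID, such as one produced by the driver's com.datastax.driver.core.utils.UUIDs.timeBased(). A stdlib-only sketch of the version check (the UUID string is made up):

```java
import java.util.UUID;

public class TimeuuidCheck {
    public static void main(String[] args) {
        // timeuuid columns map to java.util.UUID; the value itself must be
        // a version-1 (time-based) UUID, or the server rejects it.
        UUID start = UUID.fromString("5fc03087-d265-11e7-b8c6-83e29cd24f4c");
        System.out.println("version: " + start.version()); // time-based => 1
    }
}
```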

Cannot record QUEUE latency of n minutes - DSE

Submitted by 冷暖自知 on 2019-12-22 01:06:24
Question: One of the nodes in our 3-node cluster is down, and on checking the log file it shows the messages below: INFO [keyspace.core Index WorkPool work thread-2] 2016-09-14 14:05:32,891 AbstractMetrics.java:114 - Cannot record QUEUE latency of 11 minutes because higher than 10 minutes. INFO [keyspace.core Index WorkPool work thread-2] 2016-09-14 14:05:33,233 AbstractMetrics.java:114 - Cannot record QUEUE latency of 10 minutes because higher than 10 minutes. WARN [keyspace.core Index WorkPool work

Exception in thread "main" java.lang.NoClassDefFoundError

Submitted by ε祈祈猫儿з on 2019-12-21 22:10:52
Question: I am getting the error Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/util/concurrent/FutureCallback while running the code below. Please advise which JAR file I am missing. I am executing from the Eclipse IDE. package Datastax; import com.datastax.driver.core.Cluster; import com.datastax.driver.core.Host; import com.datastax.driver.core.Metadata; import com.datastax.driver.core.Session; public class DataStaxPOC { private Cluster cluster; public void connect(String node) { cluster =
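FutureCallback lives in Google Guava, which the driver depends on at runtime; a NoClassDefFoundError for it means Guava is not on the classpath. Rather than adding JARs one by one in Eclipse, pulling the driver in through a build tool brings its transitive dependencies along. A sketch of the Maven coordinates (the 2.1.9 version matches the driver mentioned in the post; adjust to your release):

```xml
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>2.1.9</version>
</dependency>
```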

Can we have Cassandra-only nodes and Solr-enabled nodes in the same datacenter?

Submitted by 試著忘記壹切 on 2019-12-21 20:33:43
Question: I just started with Solr and would like your suggestions on the scenario below. We have 2 data centers with 3 nodes each (in different AWS regions for location advantage). We have a requirement for which I was asked whether we can have 2 Solr nodes in each data center, i.e. 2 Solr nodes and 1 Cassandra-only node per data center. I want to understand whether this kind of setup is fine, and I am a little confused whether the Solr nodes will have data on them along with the

What is the byte size of common Cassandra data types, to be used when calculating partition disk usage?

Submitted by 霸气de小男生 on 2019-12-21 09:33:04
Question: I am trying to calculate the partition size for each row in a table with an arbitrary number of columns and types, using a formula from the DataStax Academy Data Modeling course. To do that I need to know the size in bytes of some common Cassandra data types. I tried to Google this but got many different suggestions, so I am puzzled. The data types I would like to know the byte size of are: a single Cassandra TEXT character (I Googled answers from 2 - 4 bytes), a Cassandra DECIMAL, a
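The fixed-width types are uncontroversial (int 4, bigint 8, double 8, timestamp 8, uuid/timeuuid 16 bytes), while text is variable: UTF-8 encoded, so 1-4 bytes per character, and blob is simply its length. A back-of-the-envelope sketch in the spirit of the Academy formula, applied to a sensor_data-style partition; all per-value lengths and the row count are invented, and per-cell overhead is ignored:

```java
public class PartitionSizeEstimate {
    // Assumed fixed sizes in bytes; text/blob are variable-length.
    static final int INT = 4, BIGINT = 8, DOUBLE = 8;

    public static void main(String[] args) {
        // Partition key: dpnode text (assume 10 ASCII chars) + year/month/day
        int partitionKey = 10 + INT + INT + INT;
        // Per row: data_timestamp, two short text columns, a double, a blob
        int perRow = BIGINT       // data_timestamp
                   + 12 + 6      // data_sensor, dsnode (assumed lengths)
                   + DOUBLE      // data_quality
                   + 100;        // data_value blob (assumed length)
        long rows = 1000;        // hypothetical rows per partition
        long estimate = partitionKey + rows * perRow;
        System.out.println("estimated bytes: " + estimate);
    }
}
```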

Cassandra commit log clarification

Submitted by 試著忘記壹切 on 2019-12-21 09:28:14
Question: I have read several documents about the Cassandra commit log and, to me, there is conflicting information about this structure (or structures). The diagram shows that when a write occurs, Cassandra writes to both the memtable and the commit log. The confusing part is where the commit log resides. The diagram I've seen over and over shows the commit log on disk. However, if you read further, they also talk about a commit log buffer in memory, and that piece of memory is flushed to disk
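Both statements are consistent: the commit log is a file on disk, but writes first land in an in-memory buffer that is fsynced according to the sync mode in cassandra.yaml. A sketch of the relevant settings (values shown are the usual defaults; check your own file):

```yaml
commitlog_directory: /var/lib/cassandra/commitlog
commitlog_sync: periodic            # buffer is fsynced every sync period
commitlog_sync_period_in_ms: 10000
# With "batch" mode the write is not acknowledged until the buffer is fsynced:
# commitlog_sync: batch
# commitlog_sync_batch_window_in_ms: 2
```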

What does rows_merged mean in compactionhistory?

Submitted by 和自甴很熟 on 2019-12-21 01:08:09
Question: When I issue $ nodetool compactionhistory I get: . . . compacted_at bytes_in bytes_out rows_merged . . . 1404936947592 8096 7211 {1:3, 3:1} What does {1:3, 3:1} mean? The only documentation I can find is this, which states "the number of partitions merged" but does not explain why there are multiple values or what the colon means. Answer 1: It basically means {tables:rows}. For example, {1:3, 3:1} means 3 rows were taken from one sstable (1:3) and 1 row was taken from 3 sstables (3:1), all to make the one
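Reading the answer the other way around: each entry maps a count of source sstables to the number of rows merged from that many sstables. A small stdlib-only parser illustrating that interpretation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RowsMergedParser {
    // Parses a compactionhistory rows_merged value like "{1:3, 3:1}" into a
    // map of (number of source sstables -> rows merged from that many sstables).
    static Map<Integer, Integer> parse(String s) {
        Map<Integer, Integer> out = new LinkedHashMap<>();
        for (String pair : s.replaceAll("[{}\\s]", "").split(",")) {
            if (pair.isEmpty()) continue;
            String[] kv = pair.split(":");
            out.put(Integer.parseInt(kv[0]), Integer.parseInt(kv[1]));
        }
        return out;
    }

    public static void main(String[] args) {
        // {1:3, 3:1}: 3 rows came from a single sstable,
        // 1 row was merged from 3 sstables.
        System.out.println(parse("{1:3, 3:1}"));
    }
}
```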