cassandra-3.0

Cassandra Accessor / Mapper not mapping udt field

Submitted by 北城余情 on 2019-12-12 03:53:48
Question: I am using DataStax Cassandra 3.1.2. I have created the following table in Cassandra and inserted a record: CREATE TYPE memory ( capacity text ); CREATE TABLE laptop ( id uuid PRIMARY KEY, model text, ram frozen<memory> ); select * from laptop; id | model | ram --------------------------------------+---------------+------------------- e55cba2b-0847-40d5-ad56-ae97e793dc3e | Dell Latitude | {capacity: '8gb'} When I try to fetch the capacity field of the frozen memory type in Java using
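A minimal sketch of reading the UDT field, assuming the DataStax Java driver 3.x is on the classpath; the contact point and keyspace name ("shop") are placeholders, and this will only run against a live cluster:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.UDTValue;

public class LaptopReader {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("shop")) {
            Row row = session.execute("SELECT ram FROM laptop LIMIT 1").one();
            // A frozen<memory> column comes back as a UDTValue,
            // whose fields are read by name.
            UDTValue ram = row.getUDTValue("ram");
            String capacity = ram.getString("capacity");
            System.out.println(capacity);
        }
    }
}
```

The same UDTValue accessor pattern applies when the row is obtained through an Accessor or Mapper, as long as the mapped class declares the field with a matching @UDT-annotated type.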

Unable to create a table with comment

Submitted by 半世苍凉 on 2019-12-11 23:19:42
Question: I am trying to create a table with a comment in the schema, but I cannot get the syntax right: CREATE TABLE codingjedi.practice_questions_javascript_tag( year bigint, month bigint, creation_time_hour bigint, creation_time_minute bigint, id uuid, PRIMARY KEY ((year, month), creation_time_hour, creation_time_minute) ) WITH comment = 'some comment' CLUSTERING ORDER BY (creation_time_hour DESC, creation_time_minute ASC) Answer 1: I had to use AND to combine the two instructions
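The fix in Answer 1 can be sketched as follows: table options after WITH are chained with AND, so the comment and the clustering order belong to a single WITH clause:

```sql
CREATE TABLE codingjedi.practice_questions_javascript_tag (
    year bigint,
    month bigint,
    creation_time_hour bigint,
    creation_time_minute bigint,
    id uuid,
    PRIMARY KEY ((year, month), creation_time_hour, creation_time_minute)
) WITH CLUSTERING ORDER BY (creation_time_hour DESC, creation_time_minute ASC)
  AND comment = 'some comment';
```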

Joining streaming data with table data and updating the table as the stream arrives: is it possible?

Submitted by 我与影子孤独终老i on 2019-12-11 19:46:33
Question: I am using spark-sql 2.4.1, spark-cassandra-connector_2.11-2.4.1.jar and Java 8. I have a scenario where I need to join streaming data with C*/Cassandra table data. If a matching record is found, I need to copy the existing C* table record to another table (table_bkp) and update the actual C* table record with the latest data. I need to perform this as the streaming data comes in. Can this be done using spark-sql streaming? If so, how? Any caveats to take care of? For each batch, how do I get the C* table data
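One common shape for this in Spark 2.4+ is foreachBatch, which hands each micro-batch to ordinary batch code where a fresh Cassandra read and two writes are possible. The following is a sketch only, not a tested pipeline: `spark` and `streamDf` are assumed to exist, and the keyspace/table names ("ks", "actual", "actual_bkp") and the join key "id" are placeholders:

```java
// Sketch: per-micro-batch join against a Cassandra table (Spark 2.4+,
// spark-cassandra-connector). Reading the table inside foreachBatch
// re-reads current C* data for every batch; reading it once outside
// would give a stale snapshot.
streamDf.writeStream().foreachBatch((batchDf, batchId) -> {
    Dataset<Row> cassandraTable = spark.read()
            .format("org.apache.spark.sql.cassandra")
            .option("keyspace", "ks").option("table", "actual")
            .load();

    // Records in this batch that already exist in the C* table.
    Dataset<Row> matched = batchDf.join(cassandraTable, "id");

    // 1) Back up the current C* rows to table_bkp.
    matched.select(cassandraTable.col("*"))
           .write().format("org.apache.spark.sql.cassandra")
           .option("keyspace", "ks").option("table", "actual_bkp")
           .mode("append").save();

    // 2) Upsert the latest streaming data into the actual table.
    batchDf.write().format("org.apache.spark.sql.cassandra")
           .option("keyspace", "ks").option("table", "actual")
           .mode("append").save();
}).start();
```

Caveats to consider: foreachBatch provides at-least-once semantics on failure recovery, and the full-table read per batch can be expensive unless it is pushed down to a keyed lookup (e.g. the connector's joinWithCassandraTable on the RDD API).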

how to use embedded-cassandra without any test framework

Submitted by 删除回忆录丶 on 2019-12-11 19:28:44
Question: I have a Play/Scala application that uses a Cassandra database. I read about embedded-cassandra and am trying to use it. My application doesn't use any test framework such as JUnit (and I'd prefer to avoid them if possible). So far I have created a factory and a CqlStatement, but I can't figure out how to execute the statement. The wiki refers to TestCassandra, but my IDE can't find this class. Do I need to use TestNG or JUnit 4? class UsersRepositorySpecs extends

How to store and retrieve base64 encoded image in Cassandra

Submitted by ﹥>﹥吖頭↗ on 2019-12-11 17:49:17
Question: I am sending an image in base64 format in a JSON message: image:["data:image/png;base64,iVBORw...","data:image/png;base64,idfsd..."] I want to store this image in Cassandra. The JSON maps to my model as an Array[String]: case class ImageArray( image: Array[String] ) I have read that to store images in Cassandra I need a ByteBuffer, so I use ByteBuffer.wrap() to convert the individual entries of the array into ByteBuffers: ByteBuffer.wrap(model.image(0).getBytes()) //for the moment, I am
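One subtlety worth noting: ByteBuffer.wrap(str.getBytes()) stores the base64 *text*, not the decoded image. If the goal is to store the raw image bytes as a blob, the data-URI prefix should be stripped and the payload base64-decoded first. A minimal self-contained sketch (the class name and the mime handling are illustrative, not from the original post):

```java
import java.nio.ByteBuffer;
import java.util.Base64;

public class ImageCodec {
    // Strip the "data:image/png;base64," prefix and decode to raw bytes.
    public static ByteBuffer toBlob(String dataUri) {
        String b64 = dataUri.substring(dataUri.indexOf(',') + 1);
        return ByteBuffer.wrap(Base64.getDecoder().decode(b64));
    }

    // Reverse: re-encode stored bytes back into a data URI for the client.
    public static String toDataUri(ByteBuffer blob, String mime) {
        byte[] bytes = new byte[blob.remaining()];
        blob.duplicate().get(bytes);   // duplicate() leaves the buffer's position intact
        return "data:" + mime + ";base64," + Base64.getEncoder().encodeToString(bytes);
    }
}
```

The resulting ByteBuffer can then be bound to a blob column; storing the undecoded base64 string in a text column also works but costs roughly 33% more space.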

Cassandra Mem table content

Submitted by 假如想象 on 2019-12-11 17:41:57
Question: There is a memtable heap size setting in the cassandra.yaml file; let's say it is 2 GB. If the cleanup threshold is 33%, then once 675 MB of memtable space is occupied, Cassandra will flush the largest memtable to disk. My question is: what does Cassandra do with the remaining memtable space, i.e. 1373 MB (2048 - 675)? According to my understanding, at any point in time the data in the memtable space will not exceed 675 MB; the moment memtable data grows beyond 675 MB, the largest memtable gets flushed to
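The arithmetic behind the question can be made explicit. This is illustrative only, using the question's own figures (2048 MB heap space, 0.33 cleanup threshold); it is not Cassandra's actual flush code:

```java
public class MemtableMath {
    // Flush of the largest memtable is triggered when total memtable usage
    // exceeds memtable_heap_space_in_mb * memtable_cleanup_threshold.
    public static int flushTriggerMb(int heapSpaceMb, double cleanupThreshold) {
        return (int) (heapSpaceMb * cleanupThreshold);
    }

    public static void main(String[] args) {
        int trigger = flushTriggerMb(2048, 0.33);
        System.out.println(trigger);         // the ~675 MB trigger from the question
        System.out.println(2048 - trigger);  // the ~1373 MB of configured headroom
    }
}
```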

Data Partitioning in Cassandra

Submitted by 自闭症网瘾萝莉.ら on 2019-12-11 17:23:49
Question: Two questions. Let's say I have a three-node Cassandra setup: Node 1, Node 2 and Node 3, where I specified the tokens as 1 to 60 for Node 1, 61 to 120 for Node 2, and 121 to 255 for Node 3. 1) As per the Cassandra documentation, a partition key whose token falls in 1 to 60 should be on Node 1, but during replication this partition data for 1 to 60 is replicated to Node 2 and Node 3. So why do we need the partition separation at all? In this case, from which node does the read happen for
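The token-range assignment in the question can be sketched as a simple range lookup. This is a toy model only: a real cluster hashes the partition key first (Murmur3 by default, over a much larger token space), and the node found here is just the *primary* replica, with further replicas placed on the following nodes in the ring:

```java
public class TokenRing {
    // Toy ranges from the question: Node 1 owns tokens 1-60,
    // Node 2 owns 61-120, Node 3 owns 121-255.
    public static int primaryNodeFor(int token) {
        if (token < 1 || token > 255) throw new IllegalArgumentException("token out of ring");
        if (token <= 60) return 1;
        if (token <= 120) return 2;
        return 3;
    }
}
```

This also suggests the answer to "why partition separation if everything is replicated": the range decides which node *coordinates ownership* of a partition, while the replication factor decides how many additional copies exist, and reads can be served from any replica depending on consistency level.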

Unable to authenticate cassandra cluster through spark scala program

Submitted by 自作多情 on 2019-12-11 15:55:37
Question: Please suggest how to solve the issue below, or suggest a different approach to my problem statement. I receive data from somewhere and insert it into Cassandra on a daily basis; then I need to retrieve the data from Cassandra for the whole week, do some processing, and insert the result back into Cassandra. I have a lot of records, and each record executes most of the operations below. Following the suggestion in my previous post, "Repreparing preparedstatement warning", to avoid repreparing the
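The usual fix for repreparing warnings is to prepare each distinct CQL string exactly once and reuse the resulting PreparedStatement for every record. A minimal sketch of that caching pattern follows; to keep it self-contained, the driver's session.prepare(cql) is stood in by an injected function, and the class name is illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class StatementCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final Function<String, Object> prepareFn; // stand-in for session.prepare(cql)
    public final AtomicInteger prepareCalls = new AtomicInteger();

    public StatementCache(Function<String, Object> prepareFn) {
        this.prepareFn = prepareFn;
    }

    // Prepare each distinct CQL string at most once, then always reuse it.
    public Object prepared(String cql) {
        return cache.computeIfAbsent(cql, q -> {
            prepareCalls.incrementAndGet();
            return prepareFn.apply(q);
        });
    }
}
```

With this in place, the per-record loop binds values to the cached statement instead of calling prepare() again, which is what triggers the driver's repreparing warning.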

Error in TestCassandra - A needed class was not found. … Missing class: com/google/common/util/concurrent/FutureFallback

Submitted by 余生颓废 on 2019-12-11 15:54:41
Question: I upgraded my setup to Play 2.7 and Silhouette 6, and also updated TestCassandra to 2.0.4. My setup uses Cassandra 3.11.4. When I start my test, I get the error: A needed class was not found. This could be due to an error in your runpath. Missing class: com/google/common/util/concurrent/FutureFallback How can I fix this? There are a few answers on SO, but they require updating pom.xml, and I am not using pom.xml in my test setup. Source: https://stackoverflow.com/questions/57610622/error-in
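FutureFallback was removed from Guava in version 20, while older DataStax Java driver 3.x releases still reference it, so the error typically means a newer Guava won the dependency resolution. For an sbt-based (pom.xml-free) setup, the equivalent of the Maven answers is a dependency override in build.sbt. A sketch, with the version number as an assumption to adjust against the driver actually in use:

```scala
// build.sbt sketch: pin Guava to a release that still has FutureFallback.
// 19.0 is an assumption; FutureFallback was removed in Guava 20.
dependencyOverrides += "com.google.guava" % "guava" % "19.0"
```

Alternatively, upgrading to a driver version compatible with the newer Guava (or a driver build that shades Guava) avoids the pin entirely.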

Why can't a secondary index (= ?) and clustering columns (ORDER BY) be used together in a CQL query?

Submitted by 痞子三分冷 on 2019-12-11 15:42:47
Question: EDIT: a related JIRA ticket. A query of the pattern select * from <table> where <partition_keys> = ? and <secondary_index_column> = ? order by <first_clustering_column> desc does not work, failing with: InvalidRequest: Error from server: code=2200 [Invalid query] message="ORDER BY with 2ndary indexes is not supported." Judging from the structure of the index table, the query above includes the partition key and the first two clustering columns of the index table. Also note that without the ORDER BY clause, the result is
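The restriction can be sketched with a hypothetical table t (pk partition key, ck1 clustering column, indexed_col carrying a secondary index); the table and column names here are illustrative, not from the original question:

```sql
-- Rejected: once the secondary index is involved, ORDER BY is not supported.
-- SELECT * FROM t WHERE pk = ? AND indexed_col = ? ORDER BY ck1 DESC;

-- Accepted: a plain partition read honors the clustering order,
-- leaving indexed_col to be filtered on the client side.
SELECT * FROM t WHERE pk = ? ORDER BY ck1 DESC;
```

The underlying reason is that rows fetched via a secondary-index lookup are not read in clustering order from a single partition, so the server cannot return them sorted without a full sort step; the practical workarounds are filtering client-side as above, or remodeling so the indexed column becomes part of the primary key of a purpose-built table (or a materialized view).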