
Cassandra error - Order By only supported when partition key is restricted by EQ or IN

梦想与她 submitted on 2019-12-04 18:23:15
Question: Here is the table I'm creating; it contains information about players that played in the last World Cup.

CREATE TABLE players ( group text, equipt text, number int, position text, name text, day int, month int, year int, club text, liga text, capitan text, PRIMARY KEY (name, day, month, year));

The query I need is: obtain 5 names of the oldest players that were captain of the selection team. Here is my query:

SELECT name FROM players WHERE captain='YES' ORDER BY year DESC
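A hedged sketch of the usual way around this restriction, assuming the table can be remodelled per query pattern: make the column used in the equality filter the partition key and the sort column a clustering column, so the partition key is restricted by EQ and ORDER BY becomes legal. Column names mirror the question; the table name is made up.

```cql
-- Sketch only: a query-specific table rather than a fix for the original one.
-- The partition key (capitan) is restricted with =, which is the precondition
-- Cassandra demands before ORDER BY on a clustering column is accepted.
CREATE TABLE players_by_captaincy (
    capitan text,
    year    int,
    month   int,
    day     int,
    name    text,
    PRIMARY KEY (capitan, year, month, day, name)
);

-- Ascending birth year puts the oldest players first.
SELECT name FROM players_by_captaincy
 WHERE capitan = 'YES'
 ORDER BY year ASC
 LIMIT 5;
```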

Cassandra data modeling for one-to-many lookup

99封情书 submitted on 2019-12-04 17:41:34
Consider the problem of storing users and their contacts. There are about 100 million users; each has a few hundred contacts, and on average a contact is about 1 KB in size. There may be some users with too many contacts (>5000), and some contacts may be much (say 10x) bigger than the 1 KB average. Users actively add contacts and, less often, also delete them. Contacts are not pointers to other users but just a bundle of information. There are two kinds of queries: given a user and a contact name, look up the contact details; given a user, look up all associated contact names. I was
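A hedged sketch of a straightforward model for these two access patterns, assuming a single table partitioned by user, so both queries hit one partition and the clustering column gives name-level lookups. All names and types below are illustrative, not taken from the question.

```cql
-- Sketch: one partition per user, one row per contact.
CREATE TABLE contacts_by_user (
    user_id         uuid,
    contact_name    text,
    contact_details text,          -- the ~1 KB "bundle of information"
    PRIMARY KEY (user_id, contact_name)
);

-- Query 1: a specific contact of a user
SELECT contact_details FROM contacts_by_user
 WHERE user_id = ? AND contact_name = ?;

-- Query 2: all contact names for a user
SELECT contact_name FROM contacts_by_user WHERE user_id = ?;
```

Even the heavier users stay within a single partition here: 5,000 contacts at roughly 1 KB each is about 5 MB, which is on the large side but still workable, though worth monitoring.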

Does CQL3 require a schema for Cassandra now?

纵然是瞬间 submitted on 2019-12-04 10:45:17
Question: I've just had a crash course in Cassandra over the last week and went from the Thrift API to CQL to grokking SuperColumns to learning I shouldn't use them and should use Composite Keys instead. I'm now trying out CQL3, and it appears that I can no longer insert into columns that are not defined in the schema, or see those columns in a select *. Am I missing some option to enable this in CQL3, or does it expect me to define every column in the schema (defeating the purpose of wide, flexible rows, imho
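A hedged sketch of how dynamic, per-row column names are usually expressed once CQL3's schema comes into play: either the former "column name" becomes a clustering column holding data, or a map collection is used. This illustrates the general pattern only; the table and column names are made up, not the asker's schema.

```cql
-- Option 1: wide-row pattern, the dynamic column name becomes a clustering column.
CREATE TABLE user_attributes (
    user_id    text,
    attr_name  text,
    attr_value text,
    PRIMARY KEY (user_id, attr_name)
);
INSERT INTO user_attributes (user_id, attr_name, attr_value)
VALUES ('alice', 'favourite_colour', 'green');

-- Option 2: a map collection, if the set of ad-hoc attributes stays small.
CREATE TABLE user_profiles (
    user_id text PRIMARY KEY,
    attrs   map<text, text>
);
UPDATE user_profiles SET attrs['favourite_colour'] = 'green' WHERE user_id = 'alice';
```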

Cassandra CQL range query rejected despite equality operator and secondary index

非 Y 不嫁゛ submitted on 2019-12-04 09:21:40
From the table schema below, I am trying to select all pH readings that are below 5. I have followed these three pieces of advice: use ALLOW FILTERING, include an equality comparison, and create a secondary index on the reading_value column. Here is my query:

select * from todmorden_numeric where sensor_name = 'pHradio' and reading_value < 5 allow filtering;

which is rejected with this message:

Bad Request: No indexed columns present in by-columns clause with Equal operator

I tried adding a secondary index to the sensor_name column and was told that it was already part of the key and therefore
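A hedged sketch of one common alternative to leaning on a secondary index here: model the value as a clustering column, so the range predicate is natively supported once the partition key is pinned with an equality. The table below is an assumption, not the asker's schema; only the column names sensor_name and reading_value come from the question.

```cql
-- Sketch: partition by sensor, cluster by value, so "< 5" is a legal
-- clustering-column range and no ALLOW FILTERING or index is needed.
CREATE TABLE readings_by_sensor (
    sensor_name   text,
    reading_value double,
    reading_time  timestamp,
    PRIMARY KEY (sensor_name, reading_value, reading_time)
);

SELECT * FROM readings_by_sensor
 WHERE sensor_name = 'pHradio' AND reading_value < 5;
```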

Upsert/Read into/from Cassandra database using Datastax API (using new Binary protocol)

帅比萌擦擦* submitted on 2019-12-04 06:16:37
Question: I have started working with the Cassandra database. I am planning to use the Datastax API to upsert/read into/from Cassandra. I am totally new to this Datastax API (which uses the new binary protocol) and I am not able to find much documentation with proper examples either.

create column family profile with key_validation_class = 'UTF8Type' and comparator = 'UTF8Type' and default_validation_class = 'UTF8Type' and column_metadata = [ {column_name : crd, validation_class :
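A hedged sketch of how that Thrift-style column family might be expressed and used in CQL3, which is what the DataStax binary-protocol driver speaks. Only the crd column is visible in the truncated snippet, so everything else here is a placeholder.

```cql
-- Sketch: the UTF8Type key and columns map to text in CQL3.
CREATE TABLE profile (
    key text PRIMARY KEY,
    crd text
    -- further text columns from the original column_metadata would go here
);

-- CQL upserts and reads; INSERT and UPDATE are both upserts in Cassandra.
INSERT INTO profile (key, crd) VALUES ('user-1', 'some value');
SELECT key, crd FROM profile WHERE key = 'user-1';
```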

What is the byte size of common Cassandra data types - To be used when calculating partition disk usage?

一笑奈何 submitted on 2019-12-04 05:28:33
I am trying to calculate the partition size for each row in a table with an arbitrary number of columns and types, using a formula from the Datastax Academy Data Modeling Course. In order to do that I need to know the "size in bytes" of some common Cassandra data types. I tried to google this, but I got a lot of conflicting suggestions, so I am puzzled. The data types I would like to know the byte size of are: a single Cassandra TEXT character (I googled answers from 2-4 bytes), a Cassandra DECIMAL, a Cassandra INT (I suppose it is 4 bytes), a Cassandra BIGINT (I suppose it is 8 bytes), a Cassandra BOOLEAN
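A hedged back-of-envelope sketch using the raw value sizes I am fairly confident of (INT 4 bytes, BIGINT 8 bytes, BOOLEAN 1 byte, TEXT stored as UTF-8 at 1-4 bytes per character, DECIMAL variable-length). The table and the sample row are hypothetical, and the Academy formula adds per-cell metadata overhead on top of these raw sizes.

```cql
-- Hypothetical example, not from the question: raw value sizes per column.
CREATE TABLE size_example (
    id      bigint,   -- 8 bytes
    count   int,      -- 4 bytes
    active  boolean,  -- 1 byte
    name    text,     -- UTF-8: 1-4 bytes per character ('hello' = 5 bytes)
    balance decimal,  -- variable length: scale plus the unscaled integer bytes
    PRIMARY KEY (id)
);
-- Raw values for one row with name = 'hello' and a small decimal:
-- 8 + 4 + 1 + 5 + (a few bytes) is roughly 20+ bytes before per-cell metadata.
```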

Time series modelling (with start & end date) in Cassandra

痴心易碎 submitted on 2019-12-04 04:38:16
Question: I am doing time-series data modelling where I have a start date and an end date for events. I need to query that data model like the following:

Select * from tablename where startdate>'2012-08-09' and enddate<'2012-09-09'

I referred to the following link on the CQL WHERE clause but I couldn't achieve this. Is there any way to do that? I can also change the data model or apply any CQL tweaks. I am using Cassandra 2.1.

Answer 1: I had to solve a similar problem in one of my former positions. This is one way in which
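A hedged sketch of one bucketed model for this kind of query, under the assumption that events can be grouped into time buckets: the partition key is restricted with IN over the buckets, the range on the start date is served by a clustering column, and the end-date condition is applied client-side, since CQL will only serve one ranged column natively here. All names are made up.

```cql
-- Sketch (assumed names): bucket events by month so the partition key can be
-- restricted with IN, then cluster by start date so a range on it is legal.
CREATE TABLE events_by_month (
    month_bucket text,        -- e.g. '2012-08'
    startdate    timestamp,
    event_id     timeuuid,
    enddate      timestamp,
    payload      text,
    PRIMARY KEY (month_bucket, startdate, event_id)
);

-- Range on the clustering column startdate is supported once the partition
-- key is pinned; the enddate < '2012-09-09' condition is then filtered in
-- the application after the rows come back.
SELECT * FROM events_by_month
 WHERE month_bucket IN ('2012-08', '2012-09')
   AND startdate > '2012-08-09';
```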

How to bind IN-clause values in a CQL 3 prepared statement?

旧街凉风 submitted on 2019-12-04 03:53:13
Question: I have a table that is roughly like:

create table mytable ( id uuid, something text, primary key (id) );

I'm trying to create a prepared statement that has a bound IN clause:

PreparedStatement ps = session.prepare("select * from mytable where id IN (?)");
...
UUID[] ids = { uuid1, uuid2, uuid3 };

No matter how I express the ids to bind, the Java driver rejects them. ps.bind( /*as array*/ ): the driver complains the statement has only one value, 2 supplied. ps.bind( /*as comma separated string list of
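A hedged sketch of the two shapes an IN clause can take when prepared, which behave differently at bind time; the single unparenthesised marker is the form later CQL3 and driver versions use to bind a whole collection in one go. Whether a given driver version supports the second form is an assumption worth checking.

```cql
-- Form 1: parenthesised markers; one scalar value is bound per ?,
-- so the number of ids is fixed when the statement is prepared.
SELECT * FROM mytable WHERE id IN (?, ?, ?);

-- Form 2: a single unparenthesised marker; the whole id list is bound
-- as one collection value.
SELECT * FROM mytable WHERE id IN ?;
```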

Wrong count(*) with cassandra-cql

放肆的年华 submitted on 2019-12-04 02:38:02
I tried to create some users for testing. I created users in a loop from 0..100000 using the cassandra-cql gem for Ruby on Rails, and then I counted the users in my database; the result was only 10000 users. If I create 9000, everything works fine. At first I thought the users didn't exist, but I used the Apollo WebUI for Cassandra and could find the user with id 100000 as well as the users below it. Why does this happen? I know I should use a counter column to track the number of users in my application, but I want to know whether this is a bug or a mistake of mine.

def self.create_users (0.
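A hedged guess at what the 10000 figure points to: in older Cassandra/CQL versions a SELECT carried a default LIMIT of 10,000, and COUNT(*) only counted the rows within that limit, so the count saturates at exactly 10,000. A sketch of the check, with the table name assumed:

```cql
-- Default limit in play: counts at most 10000, matching the observed result.
SELECT COUNT(*) FROM users;

-- Raising the limit lets the count cover all rows (at the cost of a full scan).
SELECT COUNT(*) FROM users LIMIT 1000000;
```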

Copy data from one table to another in Cassandra using Java

╄→гoц情女王★ submitted on 2019-12-04 02:35:14
Question: I am trying to move all my data from one column family (table) to another. Since the two tables have different descriptions, I would have to pull all the data from table-1, create a new object for table-2, and then do a bulk async insert. My table-1 has millions of records, so I cannot load all the data into an in-memory data structure and work on it there. I am looking for solutions to do this easily using Spring Data Cassandra with Java. I initially planned for moving all the data to a temp
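For completeness, a hedged sketch of one route that side-steps Java entirely when the column mapping between the two tables is simple enough: cqlsh's COPY export/import. Table, column, and file names below are placeholders; if each row really needs a new object built in code, this does not replace the Spring Data approach.

```cql
-- Export the relevant columns of the source table to CSV,
-- then import them into the matching columns of the target table.
COPY table1 (id, name, created_at) TO 'table1_dump.csv';
COPY table2 (id, name, created_at) FROM 'table1_dump.csv';
```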