cql3

Cassandra CQL3 composite key not written by Hadoop reducer

Submitted by 我是研究僧i on 2020-01-01 19:50:52
Question: I'm using Cassandra 1.2.8 and have several Hadoop MapReduce jobs that read rows from some CQL3 tables and write results back to other CQL3 tables. If an output CQL3 table has a composite key, the composite key fields are not written by the reducer; instead I see empty values for those fields when running a SELECT query in cqlsh. If the primary key is not composite, everything works correctly. Example of the output CQL3 table with composite key: CREATE TABLE events_by_type_with
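The table definition in the question is truncated, but the shape of a composite-key output table it describes would look roughly like this (a sketch only; the column names here are assumptions, not taken from the question):

```sql
-- Illustrative composite-key table: the partition key plus clustering
-- columns together form the composite primary key that the Hadoop
-- reducer must bind values for.
CREATE TABLE events_by_type (
    event_type text,
    event_time timestamp,
    event_id   uuid,
    payload    text,
    PRIMARY KEY ((event_type), event_time, event_id)
);
```

With a table like this, every primary-key column (partition key and clustering columns) has to be supplied by the reducer's output keys; if any of them is missing or misnamed, the key columns show up empty in cqlsh.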

Query using composite keys, other than Row Key in Cassandra

Submitted by 前提是你 on 2020-01-01 09:30:35
Question: I want to query data filtering by composite key columns other than the row key in CQL3. This is my table: CREATE TABLE grades (id int, date timestamp, subject text, status text, PRIMARY KEY (id, subject, status, date)); When I try to access the data: SELECT * FROM grades WHERE id = 1098; // works fine SELECT * FROM grades WHERE subject = 'English' ALLOW FILTERING; // works fine SELECT * FROM grades WHERE status = 'Active' ALLOW FILTERING; // gives an error: Bad Request: PRIMARY KEY part status cannot
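The error comes from CQL's rule that clustering columns must be restricted in the order they are declared: with PRIMARY KEY (id, subject, status, date), restricting status requires id and subject to be restricted too. A sketch of a query that satisfies that rule, using the schema from the question:

```sql
-- Works: clustering columns restricted in declared order (subject, then status)
SELECT * FROM grades
 WHERE id = 1098 AND subject = 'English' AND status = 'Active';

-- If querying by status alone is a primary access pattern, the usual
-- Cassandra answer is a second table keyed for it (illustrative sketch):
CREATE TABLE grades_by_status (
    status  text,
    id      int,
    subject text,
    date    timestamp,
    PRIMARY KEY (status, id, subject, date)
);
```

The second table is an assumption about the access pattern, not part of the question; denormalizing into query-shaped tables is the idiomatic alternative to ALLOW FILTERING.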

Cassandra: How to insert a new wide row with good performance using CQL

Submitted by 北慕城南 on 2019-12-29 04:07:19
Question: I am evaluating Cassandra. I am using the DataStax driver and CQL. I would like to store some data with the following internal structure, where the names are different for each update.

+-------+-------+-------+-------+-------+-------+
|       | name1 | name2 | name3 | ...   | nameN |
| time  +-------+-------+-------+-------+-------+
|       | val1  | val2  | val3  | ...   | valN  |
+-------+-------+-------+-------+-------+-------+

So time should be the column key, and name should be the row key. The CQL statement
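One possible CQL3 mapping of that wide-row layout, sketched under the assumption that all (name, value) pairs for one time live in a single partition (table and column names are illustrative):

```sql
-- time is the partition key, name the clustering column, so each
-- (name, value) pair is one cell inside the same wide partition.
CREATE TABLE wide_rows (
    time  timestamp,
    name  text,
    value text,
    PRIMARY KEY (time, name)
);

-- N pairs for one time are N inserts; an unlogged batch targeting a
-- single partition groups them into one mutation on the coordinator.
BEGIN UNLOGGED BATCH
  INSERT INTO wide_rows (time, name, value) VALUES ('2019-12-29 04:00:00', 'name1', 'val1');
  INSERT INTO wide_rows (time, name, value) VALUES ('2019-12-29 04:00:00', 'name2', 'val2');
APPLY BATCH;
```

With the datastax driver, preparing the INSERT once and executing it with async/batched bindings is the usual route to good write throughput.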

cassandra getendpoints with partition key has space

Submitted by 我们两清 on 2019-12-25 15:02:39
Question: My partition keys are id (int) and name (text). The command below works fine as long as there is no space in name: nodetool getendpoints test testtable2 1:aaa; If I use nodetool getendpoints test testtable2 3:aac cc; it throws an error: nodetool: getendpoints requires keyspace, table and partition key arguments See 'nodetool help' or 'nodetool help '. I got the token by executing SELECT id, name, token(id, name) FROM test.testtable2 WHERE name = 'aac cc' AND id = 3; and tried to search nodetool
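The error message suggests the shell is splitting the key at the space, so nodetool sees four arguments instead of three. A sketch of the likely fix (an assumption worth trying, not a confirmed answer): quote the whole composite key so it is passed as a single argument.

```shell
# Unquoted, "3:aac cc" is split into two arguments by the shell.
# Quoting keeps the composite key (with its embedded space) intact:
nodetool getendpoints test testtable2 '3:aac cc'
```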

Cassandra Timing out because of TTL expiration

Submitted by 久未见 on 2019-12-24 14:08:39
Question: I'm using DataStax Community v2.1.2-1 (AMI v2.5) with preinstalled default settings, plus the read timeout increased to 10 sec. Here is the issue: create table simplenotification_ttl (user_id varchar, real_time timestamp, insert_time timeuuid, read boolean, msg varchar, PRIMARY KEY (user_id, real_time, insert_time)); Insert query: insert into simplenotification_ttl (user_id, real_time, insert_time, read) values ('test_3', 14401440123, now(), false) using TTL 800; For the same 'test_3' I inserted 33,000
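A likely mechanism behind the timeout (a hedged reading, not stated in the question): cells expired by TTL become tombstones until gc_grace_seconds passes, so a read over a partition where tens of thousands of rows have expired must skip all of those tombstones. Two mitigations, sketched against the question's schema:

```sql
-- For a table whose data only ever expires via TTL, a much shorter
-- gc_grace_seconds lets expired cells be purged sooner (assumption:
-- you have weighed the repair/resurrection implications first).
ALTER TABLE simplenotification_ttl WITH gc_grace_seconds = 3600;

-- Narrowing the clustering-range slice keeps a read from scanning the
-- whole tombstoned history of the partition:
SELECT * FROM simplenotification_ttl
 WHERE user_id = 'test_3' AND real_time > '2019-12-24 00:00:00'
 LIMIT 100;
```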

run a bulk update query in cassandra on 1 column

Submitted by 天大地大妈咪最大 on 2019-12-24 08:16:08
Question: We have a scenario where a table in Cassandra has over a million records, and we want to execute a bulk update on one column (basically set the column's value to null in the entire table). Is there a way to do this, since the query below won't work in CQL? UPDATE TABLE_NAME SET COL1 = NULL WHERE PRIMARY_KEY IN (SELECT PRIMARY_KEY FROM TABLE_NAME); P.S. The column is not a primary key or a clustering key. Answer 1: There has been a similar question the other day regarding deleting a column in Cassandra for a large
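CQL has no subqueries, so a bulk "set to null" has to be driven client-side: page through the primary keys, then issue one statement per key. A sketch using the placeholder names from the question:

```sql
-- 1) Page through the keys (any driver pages this transparently):
SELECT primary_key FROM table_name;

-- 2) For each key, null the column. Setting a column to null writes a
--    tombstone, exactly as a DELETE of that column would:
UPDATE table_name SET col1 = null WHERE primary_key = ?;

-- Equivalent per-key alternative:
DELETE col1 FROM table_name WHERE primary_key = ?;
```

Note that a million such writes produce a million tombstones; spreading the job out and compacting afterwards is worth considering.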

Cassandra CQL where clause with multiple collection values?

Submitted by China☆狼群 on 2019-12-24 04:13:09
Question: My data model:

 tid                                  | codes        | raw          | type
--------------------------------------+--------------+--------------+------
 a64fdd60-1bc4-11e5-9b30-3dca08b6a366 | {12, 34, 53} | {sdafb=safd} | cmd

CREATE TABLE MyTable (tid TIMEUUID, type TEXT, codes SET<INT>, raw TEXT, PRIMARY KEY (tid)); CREATE INDEX ON myTable (codes); How do I query the table to return rows based on multiple set values? This works: select * from logData where codes contains 34; But I want to get rows based on multiple set values
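A sketch of the multi-value form of that query (an assumption about version support: older Cassandra releases reject multiple CONTAINS restrictions outright, and where accepted, ALLOW FILTERING is required):

```sql
-- Rows whose indexed set contains both values:
SELECT * FROM logData
 WHERE codes CONTAINS 34
   AND codes CONTAINS 53
 ALLOW FILTERING;
```

If the version in use rejects this, the fallback is to query on one value and filter the rest client-side.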

Bad performance when writing log data to Cassandra with timeuuid as a column name

Submitted by 三世轮回 on 2019-12-24 00:42:04
Question: Following the pointers in an eBay tech blog and a DataStax developers blog, I model some event log data in Cassandra 1.2. As a partition key, I use "ddmmyyhh|bucket", where bucket is any number between 0 and the number of nodes in the cluster. The data model: cqlsh:Log> CREATE TABLE transactions (yymmddhh varchar, bucket int, rId int, created timeuuid, data map, PRIMARY KEY ((yymmddhh, bucket), created)); (rId identifies the resource that fired the event.) (The map holds key-value pairs derived
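Written out in full, the question's schema looks roughly like this (the map's key/value types and the sample values are assumptions; the preview leaves the map untyped):

```sql
CREATE TABLE transactions (
    yymmddhh varchar,
    bucket   int,
    rId      int,
    created  timeuuid,
    data     map<text, text>,
    PRIMARY KEY ((yymmddhh, bucket), created)
);

-- Writers spread load across nodes by picking a bucket per insert,
-- e.g. hashing the resource id modulo the bucket count:
INSERT INTO transactions (yymmddhh, bucket, rId, created, data)
VALUES ('19122400', 3, 42, now(), {'key': 'value'});
```

The (yymmddhh, bucket) composite partition key is what splits one hour's log volume across several partitions instead of one hot row.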

Cassandra how to add clustering key in table?

Submitted by 泄露秘密 on 2019-12-23 23:22:48
Question: There is a table in Cassandra: create table test_moments (id Text, title Text, sort int, PRIMARY KEY (id)); How can I make column "sort" a clustering key without re-creating the table? Answer 1: The main problem is the on-disk data structure. The clustering key directly dictates how data is sorted and serialized to disk (and then searched), so what you're asking is not possible. The only way is to "migrate" the data to another table. Depending on your data, if you have a lot of records you could encounter some
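The migration the answer describes can be sketched as follows (the new table name and the cqlsh COPY route are assumptions; any driver-based copy works the same way):

```sql
-- New table with "sort" promoted to a clustering column:
CREATE TABLE test_moments_v2 (
    id    text,
    sort  int,
    title text,
    PRIMARY KEY (id, sort)
);

-- Then move the rows, e.g. via cqlsh's COPY (run in cqlsh):
-- COPY test_moments (id, title, sort) TO 'moments.csv';
-- COPY test_moments_v2 (id, title, sort) FROM 'moments.csv';
```

For large tables, COPY can struggle; a paged read-and-rewrite through a driver, or a Spark job, scales better.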