cql

Apache NiFi/Cassandra - how to load CSV into Cassandra table

Submitted by 痞子三分冷 on 2019-12-10 09:33:46
Question: I have various CSV files arriving several times per day, storing time-series data from sensors that are part of sensor stations. Each CSV is named after the sensor station and sensor id it comes from, for instance "station1_sensor2.csv". At the moment, the data looks like this:

> cat station1_sensor2.csv
2016-05-04 03:02:01.001000+0000;0;
2016-05-04 03:02:01.002000+0000;0.1234;
2016-05-04 03:02:01.003000+0000;0.2345;

I have created a Cassandra table to store them and to be …
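Independent of NiFi, a minimal sketch of the same load with the DataStax Java driver 3.x may help make the target concrete. The keyspace name, table layout, and partitioning below are assumptions, not from the question; only the file naming and the semicolon-separated `timestamp;value;` format come from the excerpt.

```java
// Hypothetical CSV loader: keyspace "sensors" and table "readings" are assumed, e.g.
// CREATE TABLE readings (station text, sensor text, event_time timestamp, value double,
//                        PRIMARY KEY ((station, sensor), event_time));
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Date;

public class CsvToCassandra {
    // Matches timestamps like 2016-05-04 03:02:01.001000+0000 from the question.
    private static final DateTimeFormatter TS =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSSZ");

    public static void main(String[] args) throws Exception {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("sensors")) {

            PreparedStatement insert = session.prepare(
                    "INSERT INTO readings (station, sensor, event_time, value) VALUES (?, ?, ?, ?)");

            String file = "station1_sensor2.csv";
            // Station and sensor ids are encoded in the file name, e.g. "station1" and "sensor2".
            String[] parts = Paths.get(file).getFileName().toString()
                    .replace(".csv", "").split("_");

            try (BufferedReader reader = Files.newBufferedReader(Paths.get(file))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] cols = line.split(";");   // "timestamp;value;"
                    Date ts = Date.from(OffsetDateTime.parse(cols[0], TS).toInstant());
                    double value = Double.parseDouble(cols[1]);
                    BoundStatement bound = insert.bind(parts[0], parts[1], ts, value);
                    session.execute(bound);
                }
            }
        }
    }
}
```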

What should be the connection string while using CQL jdbc driver

Submitted by *爱你&永不变心* on 2019-12-10 03:36:49
Question: What should the connection string be when using the CQL JDBC driver? Can I find a proper/complete example of using the CQL JDBC driver from Java online?

Answer 1: You'll need the cql jar from the Apache site. Here's the basic test I used after entering data via the CLI (using the sample from the wiki):

public class CqlJdbcTestBasic {
    public static void main(String[] args) {
        Connection con = null;
        try {
            Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
            con = DriverManager.getConnection( …
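The answer excerpt cuts off before the actual URL. For the old cassandra-jdbc driver, the commonly documented format is `jdbc:cassandra://<host>:<port>/<keyspace>`, with 9160 being Cassandra's default Thrift port; a hedged, self-contained sketch follows. The host, keyspace, and table name are placeholders, and the exact URL syntax can vary between driver versions, so check the version you have.

```java
// Sketch only: URL format assumed to be jdbc:cassandra://host:port/keyspace for the
// org.apache.cassandra.cql.jdbc driver; "Keyspace1" and "users" are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CqlJdbcConnect {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
        try (Connection con = DriverManager.getConnection("jdbc:cassandra://localhost:9160/Keyspace1");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM users")) {   // placeholder query
            while (rs.next()) {
                System.out.println(rs.getObject(1));
            }
        }
    }
}
```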

cassandra 3.5 fails to load trigger class

Submitted by 混江龙づ霸主 on 2019-12-09 23:25:44
Question: I am trying to get started with Cassandra triggers, but I cannot get Cassandra to load them. I have built jar files from here and here, and put them under C:\Program Files\DataStax-DDC\apache-cassandra\conf\triggers. I have restarted the DataStax_DDC_Server service (on Windows) and reopened the cqlsh command line, but trying to use the trigger class in a CREATE TRIGGER command gives me only:

ConfigurationException: <ErrorMessage code=2300 [Query invalid because of configuration issue] …
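One cause often discussed for this kind of ConfigurationException is a trigger jar built against a different major version than the server: the trigger API changed in Cassandra 3.0, so classes compiled against the 2.x `ITrigger` interface will not load on 3.5. A minimal sketch of a 3.x-style trigger is shown below; treat it as an assumption about the likely root cause, not a diagnosis of this specific setup.

```java
// Minimal no-op trigger against the Cassandra 3.x trigger API. Compile it against the
// same cassandra-all version as the server, package it as a jar, drop it into
// conf/triggers, then restart the node or run "nodetool reloadtriggers".
import java.util.Collection;
import java.util.Collections;

import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.partitions.Partition;
import org.apache.cassandra.triggers.ITrigger;

public class NoopTrigger implements ITrigger {
    @Override
    public Collection<Mutation> augment(Partition update) {
        // A real trigger would derive extra mutations from "update"; this one does nothing.
        return Collections.emptyList();
    }
}
```

Once the jar actually loads, a statement such as `CREATE TRIGGER mytrigger ON myks.mytable USING 'NoopTrigger';` should be accepted (keyspace and table names here are placeholders).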

Wrong count(*) with cassandra-cql

Submitted by ﹥>﹥吖頭↗ on 2019-12-09 16:07:31
Question: I created some users for testing. I inserted users in a loop from 0..100000 using the cassandra-cql gem for Ruby on Rails, but when I counted the users in my database the result was only 10000. If I create 9000, everything works fine. At first I thought the users didn't exist, but using the Apollo WebUI for Cassandra I could find the user with id 100000 and the users below it. Why does this happen? I know I should use a counter column to provide the number of users …
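The 10000 result matches CQL's behaviour in the Cassandra versions of that era: a SELECT, including SELECT COUNT(*), has an implicit LIMIT of 10000 unless you override it. The question uses the Ruby cassandra-cql gem; the sketch below only illustrates the query itself, via the DataStax Java driver, with the keyspace and table names assumed.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class CountWithLimit {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("myks")) {   // keyspace name assumed
            // Without an explicit LIMIT, older Cassandra versions cap the count at 10000 rows.
            Row row = session.execute("SELECT COUNT(*) FROM users LIMIT 1000000").one();
            System.out.println("users: " + row.getLong(0));
        }
    }
}
```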

Range Queries in Cassandra (CQL 3.0)

Submitted by 天大地大妈咪最大 on 2019-12-09 10:27:07
Question: One main part of Cassandra that I don't fully understand is its range queries. I know that Cassandra emphasizes distributed environments and focuses on performance, and probably because of that it currently only supports the few types of range queries it can execute efficiently. What I would like to know is: which types of range queries does Cassandra support? As far as I know, Cassandra supports the following range queries: 1: Range queries on the primary key with the keyword TOKEN …
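As a concrete illustration of the two forms usually cited, here is a sketch with the DataStax Java driver 3.x: a TOKEN range over the partition key (for walking the token ring) and an ordinary range predicate on a clustering column restricted to one partition. The keyspace, tables, and column names are invented for the example.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class RangeQueryExamples {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("myks")) {   // keyspace and tables below are assumptions

            // 1) Token-based range over the partition key: scans a slice of the token ring,
            //    which is how you page over a whole table without hitting one huge partition.
            ResultSet byToken = session.execute(
                    "SELECT id, value FROM events WHERE token(id) > token('station1') LIMIT 100");
            System.out.println(byToken.all().size() + " rows in token slice");

            // 2) Range predicate on a clustering column within a single partition: efficient
            //    because rows are stored sorted by event_time inside that partition.
            ResultSet byClustering = session.execute(
                    "SELECT event_time, value FROM readings " +
                    "WHERE station = 'station1' AND event_time >= '2016-05-04' AND event_time < '2016-05-05'");
            for (Row r : byClustering) {
                System.out.println(r.getTimestamp("event_time") + " -> " + r.getDouble("value"));
            }
        }
    }
}
```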

Cassandra CQL method for paging through all rows

Submitted by 笑着哭i on 2019-12-09 01:50:34
Question: I want to programmatically examine all the rows in a large Cassandra table, and was hoping to use CQL. I know I could do this with Thrift, getting 10,000 (or so) rows at a time with multiget and handing the last retrieved key into the next multiget call. But I have looked through all the documentation on CQL SELECT, and there doesn't seem to be a way to do this. I have resorted to setting the SELECT limit higher and higher, and raising the timeout to match it. Is there an …
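With the native protocol (Cassandra 2.0+), the drivers page through a result set transparently: you set a fetch size and simply iterate, and the driver requests further pages as needed. A minimal sketch with the DataStax Java driver follows; the keyspace and table names are assumptions.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PageThroughTable {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("myks")) {     // keyspace/table names assumed

            Statement scan = new SimpleStatement("SELECT id, value FROM big_table");
            scan.setFetchSize(1000);                          // rows per page, not a total limit

            ResultSet rs = session.execute(scan);
            long count = 0;
            for (Row row : rs) {                              // driver fetches further pages lazily
                count++;
            }
            System.out.println("scanned " + count + " rows");
        }
    }
}
```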

Cassandra 1.2 inserting/updating a blob column type using Python and the cql library

Submitted by 北慕城南 on 2019-12-08 21:48:09
Question: Intro: I have a blob column in a Cassandra 1.2 column family; the table is defined as follows:

CREATE TABLE objects (
    id text,
    obj blob,
    PRIMARY KEY (id)
);

The problem: when I need to insert/update the blob column from Python using the cql library, I have to base-16 encode the contents of the column, like this:

import cPickle
import cql
...

def save_object(connection, obj):
    object['id'] = obj['id']
    object['obj'] = cPickle.dumps(obj).encode("hex")
    cql_statement = "INSERT …
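The question is specifically about the Python cql (Thrift-based) library; for contrast, with a native-protocol driver the hex encoding disappears because blobs are bound directly as bytes through a prepared statement. The sketch below uses the DataStax Java driver, with the keyspace name and the serialized payload as placeholders.

```java
import java.nio.ByteBuffer;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class BlobInsert {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("myks")) {     // keyspace name assumed

            PreparedStatement ps = session.prepare(
                    "INSERT INTO objects (id, obj) VALUES (?, ?)");

            byte[] payload = "serialized-object-bytes".getBytes();  // stand-in for a pickled/serialized object
            // blob columns map to ByteBuffer in the Java driver; no hex encoding required.
            session.execute(ps.bind("object-1", ByteBuffer.wrap(payload)));
        }
    }
}
```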

Cassandra aggregation

Submitted by 泄露秘密 on 2019-12-08 21:44:08
Question: I have a Cassandra cluster with 4 tables and data inside. I want to run queries with aggregation functions (sum, max, ...), but I've read here that it's not possible: http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/cql_function_r.html Is there a way to do sum, average, group by without buying the enterprise version? Can I use Presto or other solutions? Thanks

Answer 1: Aggregate functions will be available as part of Cassandra 3.0: https://issues.apache.org/jira/browse/CASSANDRA …
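Until server-side aggregates are available on the cluster in question, the usual workaround is to stream the table through the driver's paging and aggregate client-side. A sketch with the DataStax Java driver; the keyspace, table, and column names are invented for the example, and for large tables a Spark or Presto job over the same data would scale better than a single-client scan.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class ClientSideSum {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("myks")) {   // keyspace/table/column are assumptions

            Statement scan = new SimpleStatement("SELECT amount FROM purchases");
            scan.setFetchSize(5000);                         // stream the table in pages

            double sum = 0;
            long rows = 0;
            for (Row row : session.execute(scan)) {          // paging happens transparently
                sum += row.getDouble("amount");
                rows++;
            }
            System.out.printf("sum=%.2f avg=%.2f over %d rows%n",
                    sum, rows == 0 ? 0 : sum / rows, rows);
        }
    }
}
```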

Cassandra Read a negative frame size

Submitted by China☆狼群 on 2019-12-08 19:26:35
Question: I'm experiencing this error while trying to query Cassandra using the cassandra-jdbc (1.1.3) driver.

Caused by: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2147418110)!
    at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
    at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.protocol.TBinaryProtocol.readStringBody …
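One commonly cited cause of "Read a negative frame size" from a framed Thrift client is a framing or port mismatch: the client is talking Thrift to something that isn't answering with framed Thrift, for example the native-protocol port (9042 by default) instead of the Thrift rpc_port (9160 by default). This is an assumption about a likely cause, not a confirmed diagnosis; the sketch below only illustrates the port to double-check in the JDBC URL.

```java
// Hedged sketch: verify the JDBC URL points at the Thrift rpc_port (default 9160),
// not the native-protocol port (default 9042). Host and keyspace are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;

public class ThriftPortCheck {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
        try (Connection con = DriverManager.getConnection("jdbc:cassandra://localhost:9160/myks")) {
            System.out.println("connected: " + !con.isClosed());
        }
    }
}
```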

Get count of elements in Set type column in Cassandra

Submitted by 青春壹個敷衍的年華 on 2019-12-08 19:12:33
Question: How do I get the count of elements in a Cassandra (CQL) column of type set? E.g., a column in a table has the value {'9970GBBHVOB61', '9970GBBHVOB62', '9970GBBHVOB6O'} and I want the query to return 3.

Answer 1: Unfortunately the collections support, even in CQL driver v2, is not perfect: you may add or delete items in upsert statements, but anything more on them, such as selecting a single item, asking for a collection item's TTL, or asking for the collection's size, is not supported. So you have to resultset …
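Since CQL (in the versions discussed here) cannot return a collection's size, the practical approach is to read the whole set in the result set and count it client-side. A sketch with the DataStax Java driver, where a `set<text>` column maps to `java.util.Set`; the keyspace, table, and column names are assumptions.

```java
import java.util.Set;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class SetSizeClientSide {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("myks")) {   // keyspace/table/column names are assumptions

            Row row = session.execute("SELECT tags FROM items WHERE id = 'item-1'").one();
            if (row != null) {
                Set<String> tags = row.getSet("tags", String.class);  // set<text> maps to java.util.Set
                System.out.println("element count: " + tags.size());  // 3 for the example in the question
            }
        }
    }
}
```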