DataStax

Cassandra ALLOW FILTERING

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-07 02:23:13
Question: I have a table as below:

CREATE TABLE test (
  day int,
  id varchar,
  start int,
  action varchar,
  PRIMARY KEY ((day), start, id)
);

I want to run this query:

Select * from test where day=1 and start > 1475485412 and start < 1485785654 and action='accept' ALLOW FILTERING

Is this ALLOW FILTERING efficient? I am expecting that Cassandra will filter in this order:
1. By the partitioning column (day).
2. By the range column (start) on the result of step 1.
3. By the action column on the result of step 2.
So the ALLOW FILTERING will not …
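The excerpt above is cut off, so as an illustration only, here is a minimal sketch of the same query issued through the Python cassandra-driver (which a later entry in this listing also uses). The contact point, the keyspace name "ks", and the alternative table test_by_action are assumptions, not part of the original question. The day and start predicates are answered by the partition key and the first clustering column; ALLOW FILTERING is only needed because action is not part of the primary key, so matching rows must be filtered inside the already-selected partition.

# Sketch only: assumes a local node at 127.0.0.1 and a keyspace named "ks".
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("ks")

# day/start are served by the partition key and first clustering column;
# ALLOW FILTERING covers only the non-key "action" predicate.
rows = session.execute(
    "SELECT * FROM test "
    "WHERE day = 1 AND start > 1475485412 AND start < 1485785654 "
    "AND action = 'accept' ALLOW FILTERING"
)

# A hypothetical alternative layout that avoids ALLOW FILTERING altogether:
# clustering on (action, start) lets Cassandra seek straight to the slice.
session.execute(
    "CREATE TABLE IF NOT EXISTS test_by_action ("
    "  day int, action varchar, start int, id varchar,"
    "  PRIMARY KEY ((day), action, start, id))"
)
rows = session.execute(
    "SELECT * FROM test_by_action "
    "WHERE day = 1 AND action = 'accept' "
    "AND start > 1475485412 AND start < 1485785654"
)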

Non-frozen collections and user-defined types on Cassandra 2.1.8

Submitted by 瘦欲@ on 2019-12-06 17:22:27
Question: I'm trying to run the following example from here:

CREATE TYPE address (
  street text,
  city text,
  zip int
);

CREATE TABLE user_profiles (
  login text PRIMARY KEY,
  first_name text,
  last_name text,
  email text,
  addresses map<text, address>
);

However, when I try to create the user_profiles table, I get the following error:

InvalidRequest: code=2200 [Invalid query] message="Non-frozen collections are not allowed inside collections: map<text, address>"

Any thoughts on why this could be happening?

Answer 1: …
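The visible answer is cut off. For context only: on Cassandra 2.1 a user-defined type used inside a collection has to be declared frozen. A minimal sketch of the corrected DDL issued through the Python cassandra-driver follows; the contact point and the keyspace name "ks" are assumptions.

# Sketch only: assumes a local node and an existing keyspace named "ks".
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("ks")

session.execute(
    "CREATE TYPE IF NOT EXISTS address (street text, city text, zip int)"
)
# In Cassandra 2.1, UDTs inside a collection must be frozen, i.e. the whole
# value is serialized and rewritten as one unit rather than field-by-field.
session.execute(
    "CREATE TABLE IF NOT EXISTS user_profiles ("
    "  login text PRIMARY KEY,"
    "  first_name text,"
    "  last_name text,"
    "  email text,"
    "  addresses map<text, frozen<address>>)"
)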

Normal Query on Cassandra using DataStax Enterprise works, but not solr_query

Submitted by £可爱£侵袭症+ on 2019-12-06 16:42:25
I am having a strange issue while using the solr_query handler to make queries in Cassandra from my terminal. When I perform normal queries on my table I have no issues, but when I use solr_query I get the following error:

Unable to complete request: one or more nodes were unavailable.

Other people who have experienced this problem seem unable to run any queries on their data at all, whether or not they use solr_query. My problem only occurs while using that handler. Can anyone suggest what the issue may be with my Solr node? ALSO -- I can do queries off …
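Since the question is truncated, here is a minimal sketch, for illustration only, of how a DSE Search query is normally issued through the same CQL driver path that the working "normal" queries use; the keyspace, table, and field names are assumptions. If the plain CQL form of a SELECT works but the solr_query form fails with "one or more nodes were unavailable", the consistency level used for the search query is one reasonable thing to compare, since solr_query statements can only be served by search-enabled replicas.

# Sketch only: assumes a DSE Search node at 127.0.0.1, a keyspace "ks", and a
# table "tbl" that already has a Solr core; the field name is illustrative.
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

session = Cluster(["127.0.0.1"]).connect("ks")

# solr_query is DSE Search's pseudo-column for embedding a Solr query in CQL.
stmt = SimpleStatement(
    "SELECT * FROM tbl WHERE solr_query = 'title:cassandra'",
    consistency_level=ConsistencyLevel.LOCAL_ONE,
)
for row in session.execute(stmt):
    print(row)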

Two-node DSE Spark cluster: error setting up second node. Why?

Submitted by 醉酒当歌 on 2019-12-06 15:33:15
I have a DSE Spark cluster with 2 nodes. One DSE Analytics node with Spark cannot start after I install it; without Spark it starts just fine. On my other node Spark is enabled and it starts and works just fine. Why is that, and how can I solve it? Thanks. Here is my error log:

ERROR [main] 2016-02-27 20:35:43,353 CassandraDaemon.java:294 - Fatal exception during initialization
org.apache.cassandra.exceptions.ConfigurationException: Cannot start node if snitch's data center (Analytics) differs from previous data center (Cassandra). Please fix the snitch configuration, decommission and …
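The error log is truncated. Before changing any snitch settings, it can help to confirm which data center each node currently advertises; a small diagnostic using the Python cassandra-driver's cluster metadata is sketched below (the contact point is an assumption).

# Sketch only: assumes one reachable node at 127.0.0.1. Prints the data center
# and rack each node reports, which should match what the snitch configuration
# (e.g. cassandra-rackdc.properties or dse.yaml) is expected to assign.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
cluster.connect()
for host in cluster.metadata.all_hosts():
    print(host.address, host.datacenter, host.rack)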

Solr docValues usage

Submitted by 无人久伴 on 2019-12-06 12:58:20
Question: I am planning to try Solr's docValues to hopefully improve facet and sort performance. I have some questions about this feature:

If I enable docValues, will Solr create a forward index (for faceting) in addition to a separate inverted index (for searching)? Or will Solr create a forward index ONLY, thus gaining faceting performance in exchange for losing search performance?

If I want to both facet and search on a single field, what is the best practice? Should I set …
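The question is cut off mid-sentence. As an illustration only, the sketch below shows a combined facet + sort request through pysolr against a field that has docValues enabled; the core URL and field names are assumptions, not taken from the question.

# Sketch only: assumes a Solr core at the URL below and a field "brand" that is
# declared in schema.xml with docValues="true", e.g.
#   <field name="brand" type="string" indexed="true" stored="true" docValues="true"/>
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/products", timeout=10)
results = solr.search(
    "*:*",
    **{
        "facet": "true",
        "facet.field": "brand",   # faceting reads the column-oriented docValues
        "sort": "brand asc",      # sorting also benefits from docValues
        "rows": 0,
    }
)
print(results.facets["facet_fields"]["brand"])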

How do I connect to a local Cassandra DB

Submitted by 主宰稳场 on 2019-12-06 12:47:56
Question: I have a Cassandra DB running locally. I can see it working in OpsCenter. However, when I open DevCenter and try to connect, I get a cryptic "unable to connect" error. How can I get the exact name / connection string that I need to use to connect to this local Cassandra DB via DevCenter?

Answer 1: The hostname/IP to connect to is specified in the listen_address property of your cassandra.yaml. If you are connecting to Cassandra from your localhost only (a sandbox machine), then you can set the …
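The answer excerpt stops mid-sentence. As a complementary check, connecting from code uses the same address and port that DevCenter needs; a minimal sketch with the Python cassandra-driver follows, assuming the usual local defaults (127.0.0.1 and the native transport port 9042).

# Sketch only: connects to a single local node on the default native transport
# port. The address must match what the node is listening on (rpc_address /
# listen_address in cassandra.yaml); for a local sandbox this is typically
# 127.0.0.1 or localhost.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"], port=9042)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())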

How to execute a batch statement and LWT as a transaction in Cassandra

Submitted by 谁都会走 on 2019-12-06 12:16:27
I have two tables with the model below:

CREATE TABLE IF NOT EXISTS INV (
  CODE TEXT,
  PRODUCT_CODE TEXT,
  LOCATION_NUMBER TEXT,
  QUANTITY DECIMAL,
  CHECK_INDICATOR BOOLEAN,
  VERSION BIGINT,
  PRIMARY KEY ((LOCATION_NUMBER, PRODUCT_CODE))
);

CREATE TABLE IF NOT EXISTS LOOK_INV (
  LOCATION_NUMBER TEXT,
  CHECK_INDICATOR BOOLEAN,
  PRODUCT_CODE TEXT,
  CHECK_INDICATOR_DDTM TIMESTAMP,
  PRIMARY KEY ((LOCATION_NUMBER), CHECK_INDICATOR, PRODUCT_CODE)
) WITH CLUSTERING ORDER BY (CHECK_INDICATOR ASC, PRODUCT_CODE ASC);

I have a business operation where I need to update CHECK_INDICATOR in both tables and QUANTITY in INV …
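The question is truncated, but since a conditional (LWT) batch can only touch a single partition, the two tables cannot be updated atomically in one conditional batch. As an illustration only, the sketch below shows one common compromise using the Python cassandra-driver: an LWT version check on INV, followed by a plain write to LOOK_INV once the condition has applied. The keyspace, contact point, and sample values are assumptions.

# Sketch only: assumes keyspace "ks" and a local node. Conditional (LWT)
# statements in a batch must all target the same partition, so INV and LOOK_INV
# cannot be covered by a single conditional batch; the second write is applied
# only after the LWT on INV succeeds, and the pair is not atomic.
from datetime import datetime
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("ks")

result = session.execute(
    "UPDATE inv SET check_indicator = true, quantity = %s, version = %s "
    "WHERE location_number = %s AND product_code = %s IF version = %s",
    (10, 2, "L1", "P1", 1),
)
applied = result.one()[0]  # first column of an LWT result is [applied]

if applied:
    session.execute(
        "INSERT INTO look_inv (location_number, check_indicator, product_code, "
        "check_indicator_ddtm) VALUES (%s, true, %s, %s)",
        ("L1", "P1", datetime.utcnow()),
    )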

Cassandra update fails

Submitted by て烟熏妆下的殇ゞ on 2019-12-06 09:04:17
Solved: I was testing updates on 3 nodes, and the clock on one of those nodes was 1 second behind, so when updating a row the write time was always behind the existing timestamp and Cassandra would not update the rows. I synced the time on all nodes and the issue was fixed.

Edit: I double-checked the result; all insertions succeed, but some updates fail. There are no error/exception messages.

I have a Cassandra cluster (Cassandra 2.0.13) which contains 5 nodes. I am using the Python (2.6.6) cassandra driver (2.6.0c2) for inserting data into the database. My server systems are CentOS 6.x. The following code is how I connect to Cassandra …
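The excerpt stops just before the connection code. As a small diagnostic that matches the "Solved" note above, comparing a column's stored write timestamp against the client clock makes a one-second node clock skew visible, and supplying an explicit timestamp is one way to keep a lagging server clock from silently discarding updates. The keyspace, table, and column names below are assumptions.

# Sketch only: assumes keyspace "ks" and a table "tbl" with an int partition
# key "id" and a text column "val". WRITETIME() returns microseconds since epoch.
import time
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("ks")

row = session.execute("SELECT val, WRITETIME(val) FROM tbl WHERE id = 1").one()
stored_us = row[1]
now_us = int(time.time() * 1000000)
print("stored write time lags client clock by", (now_us - stored_us) / 1e6, "seconds")

# One workaround for skewed server clocks: set the write timestamp explicitly
# so the update always carries a newer timestamp than the previous write.
session.execute(
    "UPDATE tbl USING TIMESTAMP %s SET val = %s WHERE id = 1",
    (now_us, "new value"),
)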

How to query by vertex id in DataStax DSE 5.0 Graph in a concise way?

Submitted by 狂风中的少年 on 2019-12-06 06:30:59
It seems that the unique id for vertices is community_id in DSE Graph. I have found that this works (id is a long):

v = g.V().has("VertexLabel","community_id",id).next()

None of these work:

v = g.V("community_id",id).next()
v = g.V("community_id","VertexLabel:"+id).next()
v = g.V(id).next()
v = g.V().hasId(id).next()
v = g.V().hasId("VertexLabel:"+id).next()
v = g.V("VertexLabel:"+id).next()

Edit: After some investigation I found that for a vertex v, v.id() returns a LinkedHashMap:

Vertex v = gT.next();
Object id = v.id();
System.out.println(id);
System.out.println(id.getClass());
System.out …
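The edit above shows that v.id() is a composite (a LinkedHashMap) rather than a bare long, which is why the scalar forms fail. As an illustration only, and switching to the DataStax Enterprise Python driver rather than the Java/Groovy API used in the question, the sketch below looks a vertex up by the indexed community_id property; the graph name, label, and driver setup are assumptions, and the exact API differs between dse-driver versions.

# Sketch only: assumes the dse-driver Python package, a graph named "my_graph",
# and a vertex label "VertexLabel". DSE Graph vertex ids are composite
# (label + community_id + member_id), so the lookup keys off the property.
from dse.cluster import Cluster, GraphExecutionProfile, EXEC_PROFILE_GRAPH_DEFAULT
from dse.graph import GraphOptions

profile = GraphExecutionProfile(graph_options=GraphOptions(graph_name="my_graph"))
cluster = Cluster(["127.0.0.1"],
                  execution_profiles={EXEC_PROFILE_GRAPH_DEFAULT: profile})
session = cluster.connect()

for vertex in session.execute_graph(
        "g.V().has('VertexLabel', 'community_id', cid)", {"cid": 123456789}):
    print(vertex)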

Unable to connect to Spark master

Submitted by ∥☆過路亽.° on 2019-12-06 06:28:04
I start my DataStax Cassandra instance with Spark:

dse cassandra -k

I then run this program (from within Eclipse):

import org.apache.spark.sql.SQLContext
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object Start {
  def main(args: Array[String]): Unit = {
    println("***** 1 *****")
    val sparkConf = new SparkConf().setAppName("Start").setMaster("spark://127.0.0.1:7077")
    println("***** 2 *****")
    val sparkContext = new SparkContext(sparkConf)
    println("***** 3 *****")
  }
}

And I get the following output:

***** 1 *****
***** 2 *****
Using Spark's default log4j profile: org/apache …