datastax-enterprise

DataStax 5.0.3 Solr node does not reindex, it just goes down

Submitted by 青春壹個敷衍的年華 on 2019-12-13 08:09:45
Question: I am trying to reindex one of my indexes by deleting the old data and running a full reindex. When I try to do this, the node immediately goes down: I cannot run nodetool commands such as nodetool status or nodetool tpstats, and I see no CPU, disk, or network activity. The node is idle. When I look at the processes on my machine, I can see that the cassandra process is still running. My schema.xml looks like this:

    <field indexed="true" name="key1" stored="true" type="StrField"/>
    <field indexed="true" name="key2" stored="true" type="StrField"/>
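For reference, the usual way in DSE Search to drop the old index data and trigger a full reindex is dsetool reload_core with the deleteAll and reindex options (a sketch; the keyspace and table names below are placeholders for the asker's actual core):

```shell
# discard the existing index data and rebuild it from the table
dsetool reload_core myks.mytable deleteAll=true reindex=true
```

If the node dies during this, the system logs (rather than nodetool) are the place to look, since nodetool itself is unavailable once the JVM is gone.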

Add a new datacenter during a DataStax upgrade from 4.8.8 to 5.0.2

Submitted by 岁酱吖の on 2019-12-13 06:49:03
Question: I have multiple datacenters: one is a Cassandra datacenter, the other a Solr datacenter. I have already started the upgrade process, but one node is still upgrading because the upgradesstables command has been running for four days. I want to add a new Cassandra datacenter and I don't have time to wait for the upgrade to finish. Can I add a new Cassandra datacenter with version 5.0.2 while the upgrade is still in progress?

Answer 1: Although you can run a cluster in a partially upgraded state, it is a transient
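Before changing topology mid-upgrade, it helps to confirm exactly which nodes are still on the old version and whether the schema is in agreement; a sketch using standard nodetool commands:

```shell
# per-node release version
nodetool version

# schema agreement across the cluster; multiple schema versions
# listed here usually mean the upgrade is still settling
nodetool describecluster
```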

How to disable vnodes in an existing cassandra cluster?

Submitted by 橙三吉。 on 2019-12-13 05:14:57
Question: The DSE documentation says the following about disabling vnodes, but I believe it is in the context of setting up a new cluster. Can vnodes be disabled on an existing cluster without loss of data? Is there a procedure for this?

Disabling virtual nodes

To disable virtual nodes:
1. In the cassandra.yaml file, set num_tokens to 1.
2. Uncomment the initial_token property and set it to 1, or to the value of a generated token for a multi-node cluster.

Answer 1: According to the answer that I received from
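The two quoted steps amount to the following cassandra.yaml fragment (a sketch; the token value here is a placeholder, and in a multi-node cluster each node needs its own generated token):

```yaml
num_tokens: 1
# one token per node; 0 is a placeholder, use a generated token
initial_token: 0
```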

Gremlin: group by a vertex property and sum other properties on the same vertex

Submitted by 你。 on 2019-12-12 18:27:46
Question: We have a vertex that stores various jobs with their types and counts as properties. I have to group by the type and sum the counts. I tried the following query, which works for one property (receiveCount):

    g.V().hasLabel("Jobs").has("Type",within("A","B","C")).group().by("Type").by(fold().match(__.as("p").unfold().values("receiveCount").sum().as("totalRec")).select("totalRec")).next()

I want to add 10 more properties like successCount, failedCount, etc. Is there a better way to do that?
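One common way to avoid repeating the match() for every counter is to replace it with a project() step, one by() modulator per property. A sketch in the same Gremlin traversal style, assuming every Jobs vertex carries all of the listed properties:

```groovy
g.V().hasLabel("Jobs").has("Type", within("A", "B", "C")).
  group().by("Type").
  by(fold().project("receiveCount", "successCount", "failedCount").
       by(unfold().values("receiveCount").sum()).
       by(unfold().values("successCount").sum()).
       by(unfold().values("failedCount").sum()))
```

Each by() after project() runs against the folded list of vertices for that type, so adding another counter is one more key plus one more by() rather than a new match() clause.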

User Defined Type (UDT) behavior in Cassandra

Submitted by 不羁的心 on 2019-12-12 09:19:05
Question: If someone has experience using UDTs (User Defined Types), I would like to understand how backward compatibility works. Say I have the following UDT:

    CREATE TYPE addr (
      street1 text,
      zip text,
      state text
    );

If I modify the addr UDT to add a couple more attributes (say, zip_code2 int and name text):

    CREATE TYPE addr (
      street1 text,
      zip text,
      state text,
      zip_code2 int,
      name text
    );

How do the older rows that do not have these attributes behave? Is it even compatible?
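For what it's worth, an existing UDT is normally extended with ALTER TYPE rather than re-created; a sketch against the type above:

```sql
ALTER TYPE addr ADD zip_code2 int;
ALTER TYPE addr ADD name text;
```

Adding fields this way is backward compatible: rows written before the change simply return null for the new fields when read.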

How to keep 2 Cassandra tables within same partition

Submitted by 北战南征 on 2019-12-12 07:59:26
Question: I tried reading the DataStax blogs and documentation but could not find anything specific on this. Is there a way to make 2 tables in Cassandra belong to the same partition? For example:

    CREATE TYPE addr (
      street_address1 text,
      city text,
      state text,
      country text,
      zip_code text
    );

    CREATE TABLE foo (
      account_id timeuuid,
      data text,
      site_id int,
      PRIMARY KEY (account_id)
    );

    CREATE TABLE bar (
      account_id timeuuid,
      address_id int,
      address frozen<addr>,
      PRIMARY KEY (account_id, address_id)
    );

Here I
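One relevant detail: replica placement depends only on the token of the partition key value, not on the table. Since foo and bar share the same partition key column and a keyspace has a single replication strategy, equal account_id values in the two tables land on the same replica nodes, which can be checked with a sketch like:

```sql
-- the token is a hash of the partition key value alone, so the same
-- account_id yields the same token (and the same replicas) in both tables
SELECT account_id, token(account_id) FROM foo;
SELECT account_id, token(account_id) FROM bar;
```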

What do I need to import to make `SparkConf` resolve?

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-12 05:26:10
Question: I am setting up a Java Spark application and am following the DataStax documentation on getting started with the Java API. I've added:

    <dependencies>
      <dependency>
        <groupId>com.datastax.spark</groupId>
        <artifactId>spark-cassandra-connector-java_2.10</artifactId>
        <version>1.1.1</version>
      </dependency>
      ...
    </dependencies>

and (a previously installed dse.jar in my local Maven repository):

    <dependency>
      <groupId>com.datastax</groupId>
      <artifactId>dse</artifactId>
      <version>version number</version>
      <
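`SparkConf` itself lives in Spark core (`org.apache.spark.SparkConf`), not in the connector, so a matching spark-core dependency must also be on the classpath. A minimal sketch, assuming spark-core_2.10 has been added alongside the connector (host and app name are placeholders):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class App {
    public static void main(String[] args) {
        // resolves once org.apache.spark:spark-core_2.10 is a dependency
        SparkConf conf = new SparkConf()
                .setAppName("cassandra-example")
                .setMaster("local[2]")
                .set("spark.cassandra.connection.host", "127.0.0.1");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... use the context ...
        sc.stop();
    }
}
```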

OpsCenter not getting data after restart of server

Submitted by 三世轮回 on 2019-12-12 04:23:36
Question: We are using the DataStax Enterprise edition and running a 2-node cluster. After restarting the OpsCenter node, we get the error below:

    2017-03-20 14:49:45,819 [opscenterd] ERROR: Unhandled error in Deferred: There are no clusters with name or ID 'tracking'
      File "/usr/share/opscenter/lib/py/twisted/internet/defer.py", line 1124, in _inlineCallbacks
        result = g.send(result)
      File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/WebServer.py", line 523, in

Spark job (Scala): write type Date to Cassandra

Submitted by 流过昼夜 on 2019-12-12 03:38:59
Question: I'm using DSE 5.1 (Spark 2.0.2.6 and Cassandra 3.10.0.1652). My Cassandra table:

    CREATE TABLE ks.tbl (
      dk int,
      date date,
      ck int,
      val int,
      PRIMARY KEY (dk, date, ck)
    ) WITH CLUSTERING ORDER BY (date DESC, ck ASC);

with the following data:

     dk | date       | ck | val
    ----+------------+----+-----
      1 | 2017-01-01 |  1 | 100
      1 | 2017-01-01 |  2 | 200

My code must read this data and write the same rows back with yesterday's date (it compiles successfully):

    package com.datastax.spark.example
    import com
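The date-shifting part of such a job can be exercised on its own, independently of Spark and the connector; a minimal sketch using java.time (how the connector maps Cassandra's date type to a JVM type is a separate question from this arithmetic):

```java
import java.time.LocalDate;

public class ShiftDate {
    // move a date back one day, as the job needs for "yesterday"
    static LocalDate yesterdayOf(LocalDate d) {
        return d.minusDays(1);
    }

    public static void main(String[] args) {
        // prints 2016-12-31
        System.out.println(yesterdayOf(LocalDate.parse("2017-01-01")));
    }
}
```

java.time handles month and year boundaries, so the same call works for the first of a month without special-casing.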

OpsCenter does not show available storage

Submitted by 自闭症网瘾萝莉.ら on 2019-12-12 01:36:56
Question: I have created a new DataStax Enterprise cluster that is managed by OpsCenter. All versions are the latest available from the package repository. The agents have been installed and everything is working perfectly, including RAM usage, CPU load, etc. I have added over 90 GB of data to this cluster without a problem, and the hosts can support a lot more. From what I can see, it is clearly an OpsCenter / DataStax Agent issue. I do not see a relevant line in the log files of either OpsCenter or