datastax-enterprise

DSE Solr nodes and vnodes

我是研究僧i submitted on 2019-12-01 23:34:41
The following documentation pages say that it is not recommended to use vnodes for Solr/Hadoop nodes:
http://www.datastax.com/documentation/datastax_enterprise/4.0/datastax_enterprise/srch/srchIntro.html
http://www.datastax.com/documentation/datastax_enterprise/4.0/datastax_enterprise/deploy/deployConfigRep.html#configReplication
What is the exact problem with using vnodes for these node types? I inherited a DSE setup wherein the Search nodes all use vnodes, and I wonder if I should take down the cluster and disable vnodes. Is there any harm in leaving vnodes enabled in such a case? It is…
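As a quick way to check a setup like this, whether a node uses vnodes is controlled by cassandra.yaml; a minimal sketch, assuming a default DSE package install (adjust the path for your layout):

    # Check whether vnodes are enabled on a DSE node
    grep -E '^[[:space:]]*(num_tokens|initial_token)' /etc/dse/cassandra/cassandra.yaml
    # Typical output with vnodes enabled:
    #   num_tokens: 256
    # A single-token (non-vnode) node instead sets initial_token and
    # leaves num_tokens unset (or set to 1).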

How do I see Solr dynamic fields in CQL with Cassandra?

北慕城南 submitted on 2019-12-01 22:59:55
Solr dynamic fields appear as searchable in Solr and available in the Thrift interface, but when using CQL, the fields don't appear. Is there a specific search or querying style that can be used to expose what the dynamic fields are and their values? Through CQL3, dynamic fields should work as well, with a few caveats. You need to declare the type as a map (e.g. dyn_ map) and create the CQL schema. Post your schema with the dynamic type declared. The dynamic part isn't inferred inside the map from the name of the container (the map), so you need to include the dynamic part in the data. This…
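A minimal sketch of that caveat (keyspace, table, and field names here are hypothetical, and the keyspace is assumed to already exist): the dynamic part is carried in the map keys, not in the column name.

    cqlsh <<'EOF'
    -- Hypothetical table: the dyn_ map column backs a dynamic field
    CREATE TABLE ks.items (
      id int PRIMARY KEY,
      name text,
      dyn_ map<text, text>
    );
    -- The dynamic part goes into the data itself (the map keys):
    INSERT INTO ks.items (id, name, dyn_)
    VALUES (1, 'widget', {'dyn_color': 'red', 'dyn_size': 'large'});
    -- Reading back through CQL now shows the dynamic keys:
    SELECT id, dyn_ FROM ks.items WHERE id = 1;
    EOF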

Can't backup to S3 with OpsCenter 5.2.1

早过忘川 submitted on 2019-12-01 14:42:57
I upgraded OpsCenter from 5.1.3 to 5.2.0 (and then to 5.2.1). I had a scheduled backup to the local server and an S3 location configured before the upgrade, which worked fine with OpsCenter 5.1.3. I made no changes to the scheduled backup during or after the upgrade. The day after the upgrade, the S3 backup failed. In opscenterd.log, I see these errors:
2015-09-28 17:00:00+0000 [local] INFO: Instructing agents to start backups at Mon, 28 Sep 2015 17:00:00 +0000
2015-09-28 17:00:00+0000 [local] INFO: Scheduled job 458459d6-d038-41b4-9094-7d450e4bac6f finished
2015-09-28 17:00:00+0000 [local]…

Can't connect to CFS node

て烟熏妆下的殇ゞ submitted on 2019-12-01 12:26:34
I removed (or decommissioned, I can't remember which) a DSE analytics node (with IP 10.14.5.50) a couple of months ago. When I now try to execute a dse shark (CREATE TABLE ccc AS SELECT ...) query, I receive:
15/01/22 13:23:17 ERROR parse.SharkSemanticAnalyzer: org.apache.hadoop.hive.ql.parse.SemanticException: 0:0 Error creating temporary folder on: cfs://10.14.5.50/user/hive/warehouse/mykeyspace.db. Error encountered near token 'TOK_TMP_FILE' at org.apache.hadoop.hive.ql.parse…
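As a first diagnostic, it may help to confirm that the ring really no longer contains the removed node; a hedged sketch using standard tooling (the Hive property mentioned is a suggestion, not a confirmed fix):

    # Confirm 10.14.5.50 is actually gone from the ring
    nodetool status | grep 10.14.5.50
    # If it still shows up with a host ID, remove it explicitly:
    #   nodetool removenode <host-id>
    # If the ring is clean but the Hive metastore still points at the
    # old IP, the warehouse location (hive.metastore.warehouse.dir)
    # may need to be repointed to a cfs:// URI on a live node.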

Cassandra host in cluster with null ID

江枫思渺然 submitted on 2019-12-01 11:34:00
Note: We are seeing this issue in our Cassandra 2.1.12.1047 (DSE 4.8.4) cluster, with 6 nodes across 3 regions (2 in each region). Trying to update schemas on our cluster recently, we found the updates were failing. We suspected one node in the cluster was not accepting the change. When checking the system.peers table on one of our servers in us-east-1, we found an anomaly: what seemed to be a complete entry for a host that does not exist.
cassandra@cqlsh> SELECT peer, host_id FROM system.peers WHERE peer IN ('54.158.22.187', '54.196.90.253');
peer | host_id ---------------+----------…
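For reference, system.peers is node-local, so a stale row can be inspected and, once confirmed to belong to a dead host, deleted on each node that carries it; a sketch using the IPs from the question:

    # Inspect the suspicious entry on the affected node
    cqlsh -e "SELECT peer, host_id FROM system.peers
              WHERE peer IN ('54.158.22.187', '54.196.90.253');"
    # If the host truly no longer exists, delete its row locally:
    #   cqlsh -e "DELETE FROM system.peers WHERE peer = '54.158.22.187';"
    # Repeat on every node that has the stale row, then verify gossip
    # state with: nodetool status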

Does DataStax DSE 5.1 Search support the Solr local parameter as used in facet.pivot?

狂风中的少年 submitted on 2019-12-01 07:21:34
I understand that DSE 5.1 runs Solr 6.0. I am trying to use the facet.pivot feature with a Solr local parameter, but it does not seem to be working. My data is simple: 4 fields. What I need is to group the results by the name field so as to get sum(money) for each year. I believe facet.pivot with a local parameter can solve this, but it is not working with DSE 5.1. From the Solr documentation, "Combining Stats Component With Pivots": In addition to some of the general local parameters supported by other…
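For comparison, the Solr 6 syntax from that documentation page tags a stats.field and references the tag from facet.pivot; a sketch against a hypothetical DSE Search core named ks.table, with field names taken from the question (whether DSE 5.1 honors these local params is exactly what is in question here):

    # Stats (sum, mean, ...) of money per name/year pivot bucket
    curl 'http://localhost:8983/solr/ks.table/select' \
      --data-urlencode 'q=*:*' \
      --data-urlencode 'rows=0' \
      --data-urlencode 'stats=true' \
      --data-urlencode 'stats.field={!tag=piv1}money' \
      --data-urlencode 'facet=true' \
      --data-urlencode 'facet.pivot={!stats=piv1}name,year'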

Pig & Cassandra & DataStax Splits Control

亡梦爱人 submitted on 2019-12-01 06:43:03
I have been using Pig with my Cassandra data to do all kinds of amazing feats of grouping that would be almost impossible to write imperatively. I am using DataStax's integration of Hadoop & Cassandra, and I have to say it is quite impressive. Hats off to those guys!! I have a pretty small sandbox cluster (2 nodes) where I am putting this system through some tests. I have a CQL table that has ~53M rows (about 350 bytes each), and I notice that the Mapper later takes a very long time to grind through these 53M rows. I started poking around the logs and I can see that the map is spilling repeatedly (i…
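The usual knobs here are the split size in the CqlNativeStorage URL (so each map task chews a smaller slice) and the map-side sort buffer (to curb the spilling); a hedged sketch, with keyspace/table/field names made up and parameter names worth verifying against your DSE version's docs:

    dse pig <<'EOF'
    -- Raise the map-side sort buffer to reduce spills (Hadoop 1.x name)
    SET io.sort.mb 256;
    -- split_size caps CQL rows per input split, i.e. per map task
    rows = LOAD 'cql://mykeyspace/mytable?split_size=50000'
           USING CqlNativeStorage();
    grouped = GROUP rows BY name;
    counts  = FOREACH grouped GENERATE group, COUNT(rows);
    DUMP counts;
    EOF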

Can I force cleanup of old tombstones?

荒凉一梦 submitted on 2019-12-01 02:54:11
I have recently lowered gc_grace_seconds for a CQL table. I am running LeveledCompactionStrategy. Is it possible for me to force purging of old tombstones from my SSTables? TL;DR: Your tombstones will disappear on their own through compaction, but make sure you are running repair or they may come back from the dead. http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_about_deletes_c.html Adding some more details: tombstones are not available for deletion until both: 1) gc_grace_seconds has expired, and 2) they meet the requirements configured in the tombstone compaction sub…
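Beyond waiting, LCS exposes tombstone-specific compaction sub-properties (the ones the truncated sentence above refers to) that can be tuned per table; a sketch with illustrative values and a hypothetical table name:

    # tombstone_threshold: ratio of droppable tombstones at which an
    #   SSTable becomes a candidate for single-SSTable compaction
    # unchecked_tombstone_compaction: run that compaction even when the
    #   SSTable overlaps others (more aggressive purging)
    cqlsh -e "ALTER TABLE ks.mytable WITH compaction = {
      'class': 'LeveledCompactionStrategy',
      'tombstone_threshold': '0.1',
      'unchecked_tombstone_compaction': 'true' };"
    # Tombstones still drop only after gc_grace_seconds has passed, so
    # keep repairs running within that window.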