elastic-stack

Intermittent SocketTimeoutException with elasticsearch-rest-client-7.2.0

Submitted by 爷,独闯天下 on 2019-12-11 04:59:11
Question: I am using RestHighLevelClient version 7.2 to connect to an Elasticsearch cluster, version 7.2. My cluster has 3 master nodes and 2 data nodes. Data node config: 2 cores and 8 GB of memory. I have used the below code in my Spring Boot project to create the RestHighLevelClient instance. @Bean(destroyMethod = "close") @Qualifier("readClient") public RestHighLevelClient readClient(){ final CredentialsProvider credentialsProvider = new BasicCredentialsProvider(); credentialsProvider.setCredentials
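For reference, a minimal sketch of what such a factory method might look like with explicit connect and socket timeouts, which are a common first lever when chasing intermittent SocketTimeoutExceptions (the host name, credentials, and timeout values below are illustrative assumptions, not taken from the post):

```java
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class ReadClientFactory {

    public RestHighLevelClient readClient() {
        CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        // Hypothetical credentials; substitute your own.
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("elastic", "changeme"));

        return new RestHighLevelClient(
                RestClient.builder(new HttpHost("es-data-1", 9200, "http"))
                        .setHttpClientConfigCallback(httpClientBuilder ->
                                httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider))
                        // Raising these defaults is a common first step when
                        // intermittent SocketTimeoutExceptions appear under load.
                        .setRequestConfigCallback(requestConfigBuilder ->
                                requestConfigBuilder
                                        .setConnectTimeout(5_000)      // ms
                                        .setSocketTimeout(60_000)));   // ms
    }
}
```

The low-level client's default socket timeout is modest (30 s in 7.x), so long-running queries against a small 2-core data node can exceed it; raising the timeout or reducing query cost are both worth trying.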

Elasticsearch (6.5) High Level Java REST Client: deleting an index by name is not working

Submitted by 不羁岁月 on 2019-12-11 04:25:55
Question: I can delete a document by passing the index name, type, and id, like this: DeleteRequest deleteRequest = new DeleteRequest(data.getIndexName(),data.getType(),data.getUniqueId()); DeleteResponse deleteResponse = client.delete(deleteRequest); But when I try to delete an index by giving only the index name, like below (according to this document): DeleteRequest deleteRequest = new DeleteRequest(allData.getIndexName()); DeleteResponse deleteResponse = client.delete(deleteRequest); I am getting-
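The likely root cause is that DeleteRequest only targets single documents; index deletion goes through the indices client instead. A hedged sketch against the 6.5 high-level client, reusing the client and allData variables from the question:

```java
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.client.RequestOptions;

// DeleteRequest deletes one document; DeleteIndexRequest drops the whole index.
DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(allData.getIndexName());
AcknowledgedResponse deleteIndexResponse =
        client.indices().delete(deleteIndexRequest, RequestOptions.DEFAULT);
```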

ElasticSearch - RestHighLevelClient java.io.IOException: An existing connection was forcibly closed by the remote host

Submitted by 那年仲夏 on 2019-12-11 02:35:12
Question: I am fetching 100K+ documents from an index using a single query, via the scroll search API. I then iterate over the documents one by one, add one more field to each, and index them into a new index. I am using RestHighLevelClient for the connectivity between the Java code and Elasticsearch. I have set the timeout option to 25 hours (setMaxRetryTimeoutMillis(90000000)). But I am still getting the below exception after 30 minutes: Exception in thread "main" java.io
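Worth noting: setMaxRetryTimeoutMillis only caps the client's retry budget; it does not keep the underlying socket or the server-side scroll context alive. A minimal sketch of a scroll loop with a short keep-alive that is renewed on every round trip (assuming a RestHighLevelClient named client; the index name and page size are illustrative):

```java
import org.elasticsearch.action.search.ClearScrollRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// The keep-alive only needs to outlive the processing of one page,
// not the whole job; it is renewed on every scroll call.
final TimeValue keepAlive = TimeValue.timeValueMinutes(5L);

SearchRequest searchRequest = new SearchRequest("source-index"); // hypothetical index name
searchRequest.scroll(keepAlive);
searchRequest.source(new SearchSourceBuilder()
        .query(QueryBuilders.matchAllQuery())
        .size(1000)); // page size per round trip

SearchResponse response = client.search(searchRequest, RequestOptions.DEFAULT);
String scrollId = response.getScrollId();

while (response.getHits().getHits().length > 0) {
    for (SearchHit hit : response.getHits().getHits()) {
        // enrich the document and index it into the new index here
    }
    SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
    scrollRequest.scroll(keepAlive); // renew the keep-alive each round trip
    response = client.scroll(scrollRequest, RequestOptions.DEFAULT);
    scrollId = response.getScrollId();
}

// Release the server-side scroll context when done.
ClearScrollRequest clearScroll = new ClearScrollRequest();
clearScroll.addScrollId(scrollId);
client.clearScroll(clearScroll, RequestOptions.DEFAULT);
```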

How to absolutely delete something from ElasticSearch?

Submitted by 三世轮回 on 2019-12-11 00:57:22
Question: We use an ELK stack for our logging. I've been asked to design a process for how we would remove sensitive information that has been logged accidentally. Based on my reading of how Elasticsearch (Lucene) handles deletes and updates, the data is still in the index, just not available; it will ultimately get cleaned up as index segments get merged, etc. Is there a process to run an update (to redact something) or a delete (to remove something) and guarantee its removal? Answer 1: When updating or
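The answer is truncated above; the mechanism usually reached for here is a delete (or update) followed by a force merge that expunges deleted documents from the Lucene segments. A hedged sketch with the Java high-level client (6.5+); the index name, field, and query are illustrative assumptions:

```java
import org.elasticsearch.action.admin.indices.forcemerge.ForceMergeRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.reindex.DeleteByQueryRequest;

// Step 1: remove (or redact via update-by-query) the offending documents.
DeleteByQueryRequest deleteByQuery = new DeleteByQueryRequest("app-logs-2019.12"); // hypothetical index
deleteByQuery.setQuery(QueryBuilders.matchPhraseQuery("message", "SENSITIVE_VALUE")); // hypothetical match
client.deleteByQuery(deleteByQuery, RequestOptions.DEFAULT);

// Step 2: force a merge that physically expunges deleted docs from the segments.
ForceMergeRequest forceMerge = new ForceMergeRequest("app-logs-2019.12");
forceMerge.onlyExpungeDeletes(true);
client.indices().forcemerge(forceMerge, RequestOptions.DEFAULT);
```

Until the merge completes, the original bytes can still exist on disk, and any snapshots taken before the delete keep their own copy, so backups need the same treatment.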

Unable to install Search Guard plugin for Elasticsearch-5.x

Submitted by ♀尐吖头ヾ on 2019-12-10 13:56:43
Question: Due to restrictions, I am not allowed to install any packages from the internet, so this command is not useful to me for installing Search Guard: bin/elasticsearch-plugin install -b com.floragunn:search-guard-ssl:<version> However, I am able to install Search Guard successfully on a different network by running the above command. For this reason, I tried installing Search Guard from a tar.gz or zip file with the below command, as per the documentation: /usr/share/elasticsearch# bin
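For offline installs, the plugin tool also accepts a local file URL, so one workable pattern (assuming the zip has been copied to /tmp on the air-gapped host) is:

```
# Download the Search Guard zip on a machine with internet access,
# copy it to the target host, then install from the local file:
bin/elasticsearch-plugin install -b file:///tmp/search-guard-<version>.zip
```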

How to write a grok pattern in Logstash

Submitted by 六月ゝ 毕业季﹏ on 2019-12-10 12:11:27
Question: I am trying to get started with Logstash, and my application has the following type of logs. Here the '5' indicates that 5 more lines will follow, which are stats collected for different related things. These are basically application stats, with each line describing one of the resources. Is there a way to properly parse this using Logstash so that it can be used for Elasticsearch? [20170502 01:57:26.209 EDT (thread-name) package-name.classname#MethodName INFO] Some info line (5 stats): [fieldA: strvalue1| field2:
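For the header line alone, a grok pattern along these lines could be a starting point (the field names are my own, and since the sample is truncated the tail of the pattern is a guess; the 5 continuation lines would additionally need a multiline codec or Filebeat multiline settings so they arrive as one event):

```
filter {
  grok {
    match => {
      "message" => "\[%{NUMBER:date} %{TIME:time} %{WORD:tz} \(%{DATA:thread}\) %{NOTSPACE:class}#%{WORD:method} %{LOGLEVEL:level}\] %{GREEDYDATA:info}"
    }
  }
}
```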

FileBeat harvesting issues

Submitted by ∥☆過路亽.° on 2019-12-10 11:47:46
Question: We are using ELK for managing our program logs. In our FileBeat config we harvest from 30 different paths, which contain files that update every second (they update every second only on the prod machines; on the other dev machines we have significantly fewer logs). Our log files are not deleted until they get old and we stop using them (we also don't modify their names). Lately we found out that the logs from the last paths in the configuration file (.yml) on the prod machines
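When only the first paths in a config get harvested reliably, per-prospector limits and file-handle churn are the usual suspects. A hedged sketch of the relevant knobs in a 5.x-era filebeat.yml (paths and values are illustrative; newer versions spell this filebeat.inputs with type: log):

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/app-a/*.log
      - /var/log/app-b/*.log
    # 0 = unlimited; a low limit can starve paths listed later in the config.
    harvester_limit: 0
    # How long to keep an idle file handle open before releasing it.
    close_inactive: 5m
    # How often to check the paths for new or changed files.
    scan_frequency: 10s
```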

Kibana 5.5.1 behind an nginx 1.13 proxy (dockerized)

Submitted by 守給你的承諾、 on 2019-12-10 06:56:57
Question: Goal: I want to run the ELK stack in a Docker container and to be able to access the ELK stack through an nginx proxy, bypassing the individual ports of the services. The Kibana service (default port 5601), http://<server>.com:5601, should be reachable at the following address: http://<server>.com/kibana Problem: The problem is that it is not possible to reach the Kibana site after I add the server.basePath setting to the config. I can only bring up the service if I add every base API call of
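A pattern that comes up often for Kibana 5.x (which predates the server.rewriteBasePath option) is to set server.basePath: "/kibana" in kibana.yml so Kibana emits prefixed URLs, and have nginx strip the prefix before proxying. A sketch, assuming the Kibana container is reachable as kibana:5601 on the Docker network:

```nginx
# nginx.conf – pairs with server.basePath: "/kibana" in kibana.yml
location /kibana/ {
    # The trailing slash makes nginx replace "/kibana/" with "/" upstream.
    proxy_pass http://kibana:5601/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```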

Mask middle 6 digits of credit card number in logstash

Submitted by 南笙酒味 on 2019-12-08 11:14:57
Question: The requirement is to show the first 6 digits and the last 4 digits of a credit card number and mask the remaining digits in Logstash. I applied a mutate/gsub filter, but the replacement string doesn't seem to allow regex. Is there another way this can be done in Logstash? if [message] =~ '\d{16}' { mutate { gsub => ["message", "\d{6}\d{4}\d{4}", "\d{6}######\d{4}"] add_tag => "Masked CardNo" } } This code masks the credit card number 3456902345871092 to \d{6}######\d{4}, but it should be masked as 345690######1092. As an
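The replacement string is taken literally, so the pattern syntax has to move into capture groups that the replacement refers back to. A sketch of the usual fix, assuming mutate's gsub accepts Ruby-style backreferences (it delegates to Ruby's String#gsub), and noting that the original pattern only covered 14 of the 16 digits:

```
if [message] =~ /\d{16}/ {
  mutate {
    # Capture the first 6 and last 4 digits, mask the middle 6;
    # \1 and \2 refer back to the captured groups.
    gsub => ["message", "(\d{6})\d{6}(\d{4})", "\1######\2"]
    add_tag => "Masked CardNo"
  }
}
```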

Dumping Elasticsearch data into CSV or into any NoSQL store through Python

Submitted by 泪湿孤枕 on 2019-12-08 10:52:30
Question: As we know, we can't fetch more than 10,000 rows at once from Elasticsearch in Python because of the result-window limit. I want two hours of data from my Elastic cluster, and for every 5 minutes I have approximately 10,000 observations. 1.) Is there any way I can dump the data from Elasticsearch directly into CSV, or into some NoSQL DB, beyond the 10,000 count? I am writing my code in Python. I am on Elasticsearch version 5. Answer 1: Try the below code for a scroll query: from elasticsearch import
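The answer's code is cut off above; a minimal sketch of a scroll-based dump using the official client's scan helper (the host, index name, and time field are assumptions, and all documents are assumed to share the same fields):

```python
import csv

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])  # hypothetical host

# scan() wraps the scroll API, so it is not subject to the
# 10,000-hit window limit of a plain search.
hits = helpers.scan(
    es,
    index="my-index",  # hypothetical index name
    query={"query": {"range": {"@timestamp": {"gte": "now-2h"}}}},
)

with open("dump.csv", "w", newline="") as f:
    writer = None
    for hit in hits:
        doc = hit["_source"]
        if writer is None:
            # Header is taken from the first document's fields.
            writer = csv.DictWriter(f, fieldnames=sorted(doc))
            writer.writeheader()
        writer.writerow(doc)
```

The same loop can write to any NoSQL store instead of CSV by replacing the writer with the store's client calls.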