elastic-stack

ElasticSearch Java API (SearchScroll): search_context_missing_exception, "reason": "No search context found for id"

纵然是瞬间 submitted on 2019-12-24 08:48:47
Question: I am fetching more than 100K documents from one index using SearchScroll, adding one more field to each of them, and then inserting those documents into another new index. I am using the SearchScroll API and set the page size with searchSourceBuilder.size(100); I have also increased it to searchSourceBuilder.size(1000). In both cases I get the error below after processing 18,100 documents (with searchSourceBuilder.size(100)) and 21,098 documents (with searchSourceBuilder.size(1000 …
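The search_context_missing_exception typically means the scroll's keep-alive expired before the next scroll call, so the server discarded the search context. Below is a minimal sketch of a scroll-and-reindex loop that renews the keep-alive on every request and clears the scroll at the end; it assumes the 6.x/7.x high-level REST client, and the client variable and index name are placeholders:

import org.elasticsearch.action.search.ClearScrollRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.search.Scroll;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;

void copyWithScroll(RestHighLevelClient client) throws java.io.IOException {
    // keep each page's search context alive for 5 minutes
    final Scroll scroll = new Scroll(TimeValue.timeValueMinutes(5L));

    SearchRequest searchRequest = new SearchRequest("source_index"); // placeholder index
    searchRequest.scroll(scroll);
    searchRequest.source(new SearchSourceBuilder().size(1000));

    SearchResponse response = client.search(searchRequest, RequestOptions.DEFAULT);
    String scrollId = response.getScrollId();
    SearchHit[] hits = response.getHits().getHits();

    while (hits != null && hits.length > 0) {
        // ... enrich each hit and bulk-index it into the new index here ...

        SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
        scrollRequest.scroll(scroll); // renew the keep-alive on EVERY scroll call
        response = client.scroll(scrollRequest, RequestOptions.DEFAULT);
        scrollId = response.getScrollId();
        hits = response.getHits().getHits();
    }

    // release the server-side context when done
    ClearScrollRequest clearScroll = new ClearScrollRequest();
    clearScroll.addScrollId(scrollId);
    client.clearScroll(clearScroll, RequestOptions.DEFAULT);
}

If enriching and bulk-inserting one page takes longer than the keep-alive, the context vanishes mid-iteration, which would match a failure that only appears after ~18K-21K documents.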

Logstash does not process files sent by filebeat

99封情书 submitted on 2019-12-24 08:29:21
Question: I have set up an ELK stack infrastructure with Docker, but I can't see files being processed by Logstash. Filebeat is configured to send .csv files to Logstash, and Logstash forwards to Elasticsearch. I see the Logstash filebeat listener starting, and the Logstash-to-Elasticsearch pipeline works, yet no document/index is written. Please advise. filebeat.yml:

filebeat.prospectors:
- input_type: log
  paths:
    - logs/sms/*.csv
  document_type: sms
  paths:
    - logs/voip/*.csv
  document_type: voip
output.logstash:
  enabled …
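One plausible cause in the snippet above: the single prospector mapping contains paths and document_type twice, and with duplicate YAML keys only the last pair (logs/voip/*.csv) takes effect. Each log source should be its own prospector entry; a sketch, where the logstash:5044 host is a placeholder for the Docker service:

filebeat.prospectors:
- input_type: log
  paths:
    - logs/sms/*.csv
  document_type: sms
- input_type: log
  paths:
    - logs/voip/*.csv
  document_type: voip
output.logstash:
  enabled: true
  hosts: ["logstash:5044"]

Relative paths such as logs/sms/*.csv are also resolved against Filebeat's working directory, which inside a container is often not what you expect; absolute paths are safer.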

Painless scripting in Elasticsearch: "variable is not defined" error when trying to access values from doc

寵の児 submitted on 2019-12-24 07:58:55
Question: I am trying to learn Painless scripting in Elasticsearch by following the official documentation (https://www.elastic.co/guide/en/elasticsearch/painless/6.0/painless-examples.html). A sample of the document I am working with:

{
  "uid" : "CT6716617",
  "old_username" : "xyz",
  "new_username" : "abc"
}

The following script_fields query, which uses params._source to access document values, works:

{
  "script_fields": {
    "sales_price": {
      "script": {
        "lang": "painless",
        "source": "(params._source.old …
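The error in the title typically appears when a script switches from params._source to doc values: doc['field'] only works on fields with doc values, so with default dynamic mapping a text field like old_username must be addressed through its .keyword sub-field. A sketch of the doc-values form of the same script field, assuming the default mapping created .keyword sub-fields (the concatenation is only an illustration):

{
  "script_fields": {
    "sales_price": {
      "script": {
        "lang": "painless",
        "source": "doc['old_username.keyword'].value + '_' + doc['new_username.keyword'].value"
      }
    }
  }
}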

Loading CSV into Elasticsearch using Logstash

大憨熊 submitted on 2019-12-24 07:00:07
Question: I have a CSV in which one column may contain multi-line values:

ID,Name,Address
1, ABC, "Line 1
Line 2
Line 3"

To my knowledge, per the CSV standard the data written above is one record. I have the following filter for Logstash:

filter {
  csv {
    separator => ","
    quote_char => "\""
    columns => ["ID","Name", "Address"]
  }
}
output {
  elasticsearch {
    host => "localhost"
    port => "9200"
    index => "TestData"
    protocol => "http"
  }
  stdout {}
}

But when I execute it, it creates three records. (All are wrong in …
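The root cause here is that the file input splits its stream on newlines before the csv filter ever runs, so a quoted multi-line field arrives as three separate events. One way to join them first is a multiline codec on the input; a sketch, assuming every new record starts with a numeric ID (the path is a placeholder):

input {
  file {
    path => "/path/to/data.csv"
    start_position => "beginning"
    codec => multiline {
      # any line that does NOT start with a numeric ID belongs to the previous record
      pattern => "^\d+,"
      negate => true
      what => "previous"
    }
  }
}

Note also that Elasticsearch index names must be lowercase, so index => "TestData" would be rejected; "testdata" would work.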

How can I configure multiline in Logstash 5.1.2 for Tomcat/Java

旧城冷巷雨未停 submitted on 2019-12-24 01:57:13
Question: I use version 5.1.2 of Logstash, Filebeat, Elasticsearch... "ELK". I am trying to send logs from a Tomcat server (catalina.out and Java application logs), but I can't, because I have problems configuring the Logstash multiline filter/codec. I followed these instructions: https://blog.lanyonm.org/articles/2014/01/12/logstash-multiline-tomcat-log-parsing.html. My logstash.conf is this:

input {
  beats {
    port => 9000
  }
}
filter {
  if [type] == "tomcat-pro" {
    codec => "multiline" {
      patterns_dir => "/opt/logstash/patterns"
      pattern => …
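A codec is only valid on inputs and outputs, never inside a filter block, which is why this configuration fails to load. On a 5.x stack the usual approach is to join Java stack traces in Filebeat, before the events reach Logstash; a sketch of the relevant filebeat.yml section, where the path and the timestamp pattern are assumptions about the log format:

filebeat.prospectors:
- input_type: log
  paths:
    - /opt/tomcat/logs/catalina.out
  document_type: tomcat-pro
  # lines that do not start with a timestamp are glued to the previous event
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after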

Kibana stopped working and now server not getting ready although kibana.service starts up nicely

ぐ巨炮叔叔 submitted on 2019-12-23 03:12:11
Question: Without any major system update of my Ubuntu (4.4.0-142-generic #168-Ubuntu SMP), Kibana 7.2.0 stopped working. I am still able to start the service with sudo systemctl start kibana.service, and the corresponding status looks fine. There is only a warning and no error, so this does not seem to be the issue:

# sudo systemctl status kibana.service
● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019 …
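When kibana.service is active but the browser reports that the server is not ready, the usual culprit is Kibana failing to reach, or agree on a version with, Elasticsearch rather than the service itself. Two quick checks, assuming default ports on the same host:

# is Elasticsearch itself up and healthy?
curl -X GET "http://localhost:9200/_cluster/health?pretty"

# follow Kibana's own log stream for the real connection or migration error
sudo journalctl -u kibana.service -f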

ELK not passing metadata from filebeat into logstash

你说的曾经没有我的故事 submitted on 2019-12-22 11:13:53
Question: I installed an ELK server via https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7. It seems to work except for the Filebeat connection; Filebeat does not appear to be forwarding anything, or at least I can't find anything in the logs to indicate that anything is happening. My Filebeat configuration is as follows:

filebeat:
  prospectors:
    - paths:
        - /var/log/*.log
        - /var/log/messages
        - /var/log/secure
      encoding: utf-8
      input_type: log …
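Two checks that usually narrow this down, assuming the tutorial's Filebeat 1.x and default ports: run Filebeat in the foreground with debug selectors to see whether events are actually published, and ask Elasticsearch whether a filebeat index ever appeared:

# log to stderr (-e) with the "publish" debug selector (-d)
sudo filebeat -e -d "publish"

# did any filebeat-* index get created?
curl -X GET "http://localhost:9200/_cat/indices?v"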

Correlate messages in ELK by field

六月ゝ 毕业季﹏ submitted on 2019-12-22 10:53:48
Question: Related to: Combine logs and query in ELK. We are setting up ELK and want to create a visualization in Kibana 4. The issue is that we want to relate two different types of message. To simplify:

Message type 1 fields: message_type, common_id_number, byte_count, ...
Message type 2 fields: message_type, common_id_number, hostname, ...

Both message types share the same index in Elasticsearch. As you can see, we were trying to graph without taking that common_id_number into account, …
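Kibana 4 cannot join two event types, but bucketing both message types on the shared field gets close: a terms aggregation on common_id_number groups the paired messages, and sub-aggregations summarize each group. A sketch of the raw Elasticsearch request body, using the field names from the question and assuming common_id_number is not analyzed:

{
  "size": 0,
  "aggs": {
    "by_id": {
      "terms": { "field": "common_id_number" },
      "aggs": {
        "total_bytes": { "sum": { "field": "byte_count" } }
      }
    }
  }
}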

Using the Logstash CSV filter doesn't work

二次信任 submitted on 2019-12-22 09:02:11
Question: I was trying to use the CSV filter in Logstash, but it won't load the values from my file. I'm using Ubuntu Server 14.04, Kibana 4, Logstash 1.4.2 and Elasticsearch 1.4.4. Below I show my CSV file and the filter I wrote. Am I doing something wrong?

CSV file:

Joao,21,555
Miguel,24,1000
Rodrigo,43,443
Maria,54,2343
Antonia,67,213

Logstash CSV filter:

# This is the filter that reads the file and loads the data into an Elasticsearch index
input {
  file {
    path => ["/opt/logstash/bin/testeFile_lite.csv"]
    start …
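A frequent cause with the file input is that Logstash tails files and remembers its read position in a sincedb file, so a file that already existed before startup yields no events. A sketch of a complete pipeline for this file with start_position set and a throwaway sincedb for testing; the column names are assumptions:

input {
  file {
    path => ["/opt/logstash/bin/testeFile_lite.csv"]
    start_position => "beginning"
    sincedb_path => "/dev/null"  # testing only: forget read positions so the file is re-read
  }
}
filter {
  csv {
    separator => ","
    columns => ["name", "age", "value"]  # assumed column names
  }
}
output {
  elasticsearch { host => "localhost" }
  stdout { codec => rubydebug }
}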

ElasticSearch: fuzzyQuery Java API responses are almost the same as matchQuery

断了今生、忘了曾经 submitted on 2019-12-20 07:15:03
Question: I am trying to fetch documents from Elasticsearch using matchQuery and fuzzyQuery, but I get the same response count from both APIs. For example, Scenario 1 (with matchQuery): searching for "valve" with matchQuery gives a count of 36 using the call below:

QueryBuilder qb = QueryBuilders.boolQuery()
    .must(QueryBuilders.matchQuery("catalog_value", "valve"))
    .filter(QueryBuilders.termQuery("locale", "en_US"));

If I search for "valves", I get a count of only 14.
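fuzzyQuery is a term-level query, so the search text bypasses analysis, and any difference from matchQuery depends on how catalog_value was indexed. Making the fuzziness explicit on either builder makes the comparison meaningful; a minimal sketch with the field names from the question, assuming the standard Elasticsearch Java query builders:

import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

// term-level fuzzy query: the text is NOT analyzed; AUTO scales the allowed
// edit distance with term length, so "valve" can also match "valves"
QueryBuilder fuzzy = QueryBuilders.boolQuery()
    .must(QueryBuilders.fuzzyQuery("catalog_value", "valve")
        .fuzziness(Fuzziness.AUTO))
    .filter(QueryBuilders.termQuery("locale", "en_US"));

// analyzed alternative: matchQuery with fuzziness enabled
QueryBuilder fuzzyMatch = QueryBuilders.boolQuery()
    .must(QueryBuilders.matchQuery("catalog_value", "valve")
        .fuzziness(Fuzziness.AUTO))
    .filter(QueryBuilders.termQuery("locale", "en_US"));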