elastic-stack

logstash grok filter for custom logs

久未见 submitted on 2019-12-20 03:52:11
Question: I have two related questions. The first is how best to grok logs that have "messy" spacing and so on; the second, which I'll ask separately, is how to deal with logs that have arbitrary attribute-value pairs. (See: logstash grok filter for logs with arbitrary attribute-value pairs.) So for the first question, I have a log line that looks like this:

    14:46:16.603 [http-nio-8080-exec-4] INFO METERING - msg=93e6dd5e-c009-46b3-b9eb-f753ee3b889a CREATE_JOB job=a820018e-7ad7-481a-97b0-bd705c3280ad
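A minimal grok sketch for a line like this, assuming the field names (thread, level, logger, kv_pairs) are free choices; \s+ absorbs the irregular spacing, and a kv filter then splits the key=value pairs:

    filter {
      grok {
        # \s+ tolerates uneven runs of whitespace between the fixed parts
        match => { "message" => "%{TIME:timestamp}\s+\[%{DATA:thread}\]\s+%{LOGLEVEL:level}\s+%{WORD:logger}\s+-\s+%{GREEDYDATA:kv_pairs}" }
      }
      kv {
        # turns "msg=... job=..." into fields; bare tokens like CREATE_JOB are ignored
        source => "kv_pairs"
      }
    }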

Is Security free in Elastic Stack Features?

戏子无情 submitted on 2019-12-19 04:20:21
Question: We are building an open-source application which needs the Elasticsearch security feature. I am trying to find out whether the security feature is free for Elasticsearch. The Elasticsearch website says X-Pack is open now. Not sure if it is really open source. Could someone please share your experience?

Answer 1: This blog post explained some of the reasons why Elastic "opened" their X-Pack code. "Open" here simply means that they merged their private X-Pack repositories into the open ones. One of the reasons that the

How to get logs and their data having the word "error" in them, and how to configure the logstashPipeLine.conf file for the same?

时光怂恿深爱的人放手 submitted on 2019-12-18 16:57:13
Question: Currently I am working on an application where I need to create documents from particular data from a file at a specific location. I have set up a Logstash pipeline configuration. Here is what it looks like currently:

    input {
      file {
        path => "D:\ELK_Info\logstashInput.log"
        start_position => "beginning"
      }
    }
    #Possible IF condition here in the filter
    output {
      #Possible IF condition here
      http {
        url => "http://localhost:9200/<index_name>/<type_name>"
        http_method => "post"
        format => "json"
      }
    }

I want to
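One way to do this, sketched under the assumption that matching the word "error" in the message is what's wanted (the tag name error_log is arbitrary): tag matching events in a filter block, then gate the output on that tag:

    filter {
      # tag events whose message contains "error", case-insensitively
      if [message] =~ /(?i)error/ {
        mutate { add_tag => ["error_log"] }
      }
    }
    output {
      # only tagged events are posted; everything else is skipped by this output
      if "error_log" in [tags] {
        http {
          url => "http://localhost:9200/<index_name>/<type_name>"
          http_method => "post"
          format => "json"
        }
      }
    }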

Ignore and move to next pattern if log contains a specific word

一个人想着一个人 submitted on 2019-12-13 18:22:12
Question: I have a log file which comes from a Spring log file. The log file has three formats. Each of the first two formats is a single line; to tell them apart, if a line contains the keyword app-info, it is a message printed by our own developers, and if not, it was printed by the Spring framework. We may treat developer messages differently from Spring framework ones. The third format is a multiline stack trace. Here is an example of our own format:

    2018-04-27 10:42:49 [http-nio-8088-exec-1] - INFO - app-info -
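A sketch of the usual approach for the first two formats, assuming the timestamp layout shown above: give grok a list of patterns, which are tried in order until one matches, so the more specific app-info pattern goes first:

    filter {
      grok {
        # patterns are tried in order; the first match wins
        match => {
          "message" => [
            "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:thread}\] - %{LOGLEVEL:level} - app-info - %{GREEDYDATA:app_message}",
            "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:thread}\] - %{LOGLEVEL:level} - %{GREEDYDATA:framework_message}"
          ]
        }
      }
    }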

Logstash doc_as_upsert across indices in Elasticsearch to eliminate duplicates

北城以北 submitted on 2019-12-13 12:40:45
Question: I have a Logstash configuration that uses the following in the output block in an attempt to mitigate duplicates:

    output {
      if [type] == "usage" {
        elasticsearch {
          hosts => ["elastic4:9204"]
          index => "usage-%{+YYYY-MM-dd-HH}"
          document_id => "%{[@metadata][fingerprint]}"
          action => "update"
          doc_as_upsert => true
        }
      }
    }

The fingerprint is calculated from a SHA1 hash of two unique fields. This works when Logstash sees the same doc in the same index, but since the command that generates the input
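For reference, that kind of fingerprint is typically produced by the fingerprint filter; a minimal sketch, with hypothetical source field names:

    filter {
      fingerprint {
        # field_a and field_b stand in for the two unique fields
        source => ["field_a", "field_b"]
        concatenate_sources => true
        method => "SHA1"
        target => "[@metadata][fingerprint]"
      }
    }

Note that with an hourly index pattern like usage-%{+YYYY-MM-dd-HH}, the same fingerprint can still land in different indices, so the upsert only deduplicates within a single index.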

Grok parse error while parsing multiline messages

邮差的信 submitted on 2019-12-13 06:51:14
Question: I am trying to figure out a grok pattern for parsing multiline messages like an exception trace; below is one such log:

    2017-03-30 14:57:41 [12345] [qtp1533780180-12] ERROR com.app.XYZ - Exception occurred while processing
    java.lang.NullPointerException: null
        at spark.webserver.MatcherFilter.doFilter(MatcherFilter.java:162)
        at spark.webserver.JettyHandler.doHandle(JettyHandler.java:61)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
        at org.eclipse.jetty.server
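Before grok sees the event, the trace usually has to be stitched into one event; a common sketch uses the multiline codec (the file path here is hypothetical), where any line that does not start with a timestamp is appended to the previous event:

    input {
      file {
        path => "/var/log/app/app.log"
        codec => multiline {
          # lines not beginning with a timestamp belong to the previous event
          pattern => "^%{TIMESTAMP_ISO8601}"
          negate => true
          what => "previous"
        }
      }
    }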

Handling multiple documents from Elasticsearch using the Java RestClient API

走远了吗. submitted on 2019-12-13 04:33:19
Question: I am fetching documents from Elasticsearch using the Java API. I am able to fetch only one document from the responseBody properly. How can I handle it if I get multiple documents in the response? Earlier I used RestHighLevelClient; with that API I was able to handle multiple documents with the help of SearchHit[] searchHits = searchResponse.getHits().getHits();. With the RestClient API, I am not able to do that. Please find my code below, where I am able to fetch a document from Elasticsearch and parse it as JSON
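With the low-level RestClient you get a raw HTTP response, so iterating the hits means walking the JSON yourself. A minimal sketch, assuming Jackson is on the classpath and response is the org.elasticsearch.client.Response from a _search call:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.http.util.EntityUtils;
    import org.elasticsearch.client.Response;

    public class SearchHitsParser {
        public static void printHits(Response response) throws Exception {
            String body = EntityUtils.toString(response.getEntity());
            JsonNode root = new ObjectMapper().readTree(body);
            // hits.hits is a JSON array with one element per matching document
            for (JsonNode hit : root.path("hits").path("hits")) {
                System.out.println(hit.path("_source").toString());
            }
        }
    }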

Customize logs from Filebeat in Logstash's beats.config

自闭症网瘾萝莉.ら submitted on 2019-12-13 04:12:21
Question: I am using ELK with Filebeat. I am sending logs from Filebeat to Logstash, from there to Elasticsearch, and visualizing them in Kibana. I am pasting the JSON result that is displayed in Kibana's log result, which is as below:

    {
      "_index": "filebeat-6.4.2-2018.10.30",
      "_type": "doc",
      "_source": {
        "@timestamp": "2018-10-30T09:15:31.697Z",
        "fields": { "server": "server1" },
        "prospector": { "type": "log" },
        "host": { "name": "kushmathapa" },
        "message": "{ \"datetime\": \"2018-10-23T18:04:00.811660Z\", \
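The message field here is itself a JSON string. If the goal is to break it into fields, a minimal sketch uses the json filter (the target name app is a placeholder):

    filter {
      json {
        # parse the embedded JSON string in "message" into fields under [app]
        source => "message"
        target => "app"
      }
    }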

How to filter a huge list of ids from Solr at runtime

烂漫一生 submitted on 2019-12-13 03:29:06
Question: I have an index for products in Solr. I need to serve a customized list of products for each customer, such that I have to exclude some specific products for each customer. Currently I am storing this relationship of customers and excluded products in a SQL database and then filtering them in Solr using a terms query. Is there a way I can store this relationship in Solr itself, so that I don't have to compute the exclude list from SQL first every time? Something very similar to what we can do in
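For context, the current approach amounts to a negative filter query built from the SQL result; a hedged sketch, where the field name id and the product ids are hypothetical:

    GET /solr/products/select?q=*:*&fq=-id:(prod_123 OR prod_456 OR prod_789)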

Time mismatch in Kibana

倾然丶 夕夏残阳落幕 submitted on 2019-12-13 02:44:05
Question: We have an ELK setup with Kibana version 5.6.10. We are facing a time mismatch when displaying logs from different servers. We are fetching logs from 8 IIS servers and parsing them via Logstash into Elasticsearch/Kibana. While filtering logs for the past hour, we noticed that only 2 servers' logs were displayed. We have checked the Filebeat configuration on each IIS server and found the same configuration setup; we also verified the IIS log time format and other configurations. We could see indexing is happening
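A frequent cause of this symptom is timezone handling, since IIS writes its log timestamps in UTC while Kibana renders times in the browser's timezone. A hedged sketch of a Logstash date filter, assuming the parsed timestamp lands in a field named log_timestamp:

    filter {
      date {
        # interpret the IIS timestamp as UTC and write it to @timestamp
        match => ["log_timestamp", "yyyy-MM-dd HH:mm:ss"]
        timezone => "UTC"
        target => "@timestamp"
      }
    }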