elastic-stack

Kibana - How to extract fields from existing Kubernetes logs

笑着哭i submitted on 2019-12-01 11:07:04
I have a sort of ELK stack, with fluentd instead of logstash, running as a DaemonSet on a Kubernetes cluster and sending all logs from all containers, in logstash format, to an Elasticsearch server. Among the many containers running on the Kubernetes cluster, some are nginx containers which output logs in the following format: 121.29.251.188 - [16/Feb/2017:09:31:35 +0000] host="subdomain.site.com" req="GET /data/schedule/update?date=2017-03-01&type=monthly&blocked=0 HTTP/1.1" status=200 body_bytes=4433 referer="https://subdomain.site.com/schedule/2589959/edit?location=23092&return=monthly"
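One common approach (not part of the original excerpt) is to parse those nginx lines with a grok pattern before they reach Elasticsearch, so each piece becomes its own field in Kibana. A minimal fluentd sketch, assuming the fluent-plugin-grok-parser plugin is installed, records are tagged kubernetes.**, and the raw line sits in the log field (all assumptions about this particular setup):

<filter kubernetes.**>
  @type parser
  # "log" is the field holding the raw nginx line (assumed)
  key_name log
  # keep the Kubernetes metadata fields added by the DaemonSet
  reserve_data true
  <parse>
    @type grok
    grok_pattern %{IP:remote_addr} - \[%{HTTPDATE:time_local}\] host="%{DATA:vhost}" req="%{DATA:request}" status=%{NUMBER:status} body_bytes=%{NUMBER:body_bytes} referer="%{DATA:referer}"
  </parse>
</filter>

Once the fields are extracted this way, they appear as separate, searchable fields in Kibana after the index pattern is refreshed.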

Is Security free in the Elastic Stack features?

£可爱£侵袭症+ submitted on 2019-12-01 00:32:59
We are building an open-source application which needs the Elasticsearch security feature. I am trying to find out whether the security feature is free for Elasticsearch; the Elastic website says X-Pack is open now, but I'm not sure it is really open source. Could someone please share your experience? This blog post explained some of the reasons why Elastic "opened" their X-Pack code. "Open" here simply means that they merged their private X-Pack repositories into the open ones. One of the reasons that the blog post above doesn't mention is that this move was mostly motivated to facilitate tedious engineering

How to get logs and their data containing the word “error”, and how to configure the logstashPipeLine.conf file for the same?

爱⌒轻易说出口 submitted on 2019-11-30 14:49:23
Currently I am working on an application where I need to create documents from particular data in a file at a specific location. I have set up a Logstash pipeline configuration; here is what it looks like currently: input { file { path => "D:\ELK_Info\logstashInput.log" start_position => "beginning" } } #Possible IF condition here in the filter output { #Possible IF condition here http { url => "http://localhost:9200/<index_name>/<type_name>" http_method => "post" format => "json" } } I want to add an IF condition in the output before calling the API. The condition should be like, "If data from input
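For reference (not from the excerpt itself), Logstash conditionals can wrap an output block, and the "in" operator does a substring check on string fields, so a test for the word "error" can guard the http output. A minimal sketch, assuming the file lines land in the default message field (an assumption about this pipeline):

output {
  if "error" in [message] {
    http {
      url         => "http://localhost:9200/<index_name>/<type_name>"
      http_method => "post"
      format      => "json"
    }
  }
}

The same if block can instead be placed in the filter section, for example to tag or drop non-matching events before they ever reach the output.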

What is the point of REDIS in ELK stack?

99封情书 submitted on 2019-11-30 05:17:38
I currently have an architecture with Filebeat as the log shipper, which sends logs to a Logstash indexer instance and then to managed Elasticsearch in AWS. Due to persistent TCP connections, I cannot load-balance multiple Logstash indexer instances using AWS ELB, since Filebeat always picks one of the instances and sends everything there. So I decided to use Redis. Now, seeing how difficult it is to scale Redis and make it a highly available component in the ELK stack, I want to ask what is even the point of Redis. I have read a million times that it acts as a buffer, but if Filebeat stops sending logs to Logstash if

analyzed or not_analyzed, what to choose

。_饼干妹妹 submitted on 2019-11-29 10:09:48
I'm using only Kibana to search Elasticsearch, and I have several fields that can only take a few values (worst case, servername, 30 different values). I understand what analysis does for bigger, more complex fields like this, but for the small and simple ones I fail to understand the advantages/disadvantages of analyzed/not_analyzed fields. So what are the benefits of using analyzed versus not_analyzed for a "limited set of values" field (e.g. servername: server[0-9]*, no special characters to break on)? What kind of search types will I lose in Kibana? Will I gain any search speed or disk space?
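For context (an illustration, not part of the question): for a low-cardinality field like servername, an exact-value mapping is usually what Kibana term filters expect. A pre-5.x style sketch with a hypothetical index and type name; on Elasticsearch 5.x and later the equivalent is "type": "keyword":

curl -XPUT 'http://localhost:9200/logs' -d '{
  "mappings": {
    "logevent": {
      "properties": {
        "servername": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}'

With not_analyzed, a query like servername:server01 matches the stored value exactly and no analysis work or extra analyzed terms are kept for that field, at the cost of losing partial or tokenized matching on it.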

How to log from Node.js with Express to ELK?

随声附和 submitted on 2019-11-29 07:50:09
Question: I have a Node.js server application with Express. I would like to log its activity into Elasticsearch and visualize the logs using Kibana. What would be the right way to do that? Should I write a log file of JSON lines and read it with Logstash? Answer 1: I'd recommend log4js. It has a range of useful appenders, and logstash is one of them. It works over UDP. Here is an example taken from the log4js site: var log4js = require('../lib/log4js'); /* Sample logstash config: udp { codec => json port =>
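The excerpt cuts off in the middle of the sample Logstash config; for reference, the receiving side is a plain UDP input with a JSON codec. A minimal sketch, where the port number is only an example and must match whatever the Node.js appender is configured to send to:

input {
  udp {
    # example port; must match the port configured in the log4js appender
    port  => 5959
    codec => json
  }
}

On the Node.js side, a logstash-oriented log4js appender (bundled in older log4js versions, packaged separately in newer ones) serializes each log event to JSON and sends it to that host and port over UDP, so no log file or file-reading Logstash input is needed.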

Logstash sprintf formatting for elasticsearch output plugin not working

夙愿已清 submitted on 2019-11-28 14:43:44
I am having trouble using sprintf to reference the event fields in the elasticsearch output plugin and I'm not sure why. Below is the event received from Filebeat and sent to Elasticsearch after filtering is complete: { "beat" => { "hostname" => "ca86fed16953", "name" => "ca86fed16953", "version" => "6.5.1" }, "@timestamp" => 2018-12-02T05:13:21.879Z, "host" => { "name" => "ca86fed16953" }, "tags" => [ [0] "beats_input_codec_plain_applied", [1] "_grokparsefailure" ], "fields" => { "env" => "DEV" }, "source" => "/usr/share/filebeat/dockerlogs/logstash_DEV.log", "@version" => "1", "prospector" =
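As background (not taken from the excerpt), sprintf references to nested event fields need the full bracket path, so the env value shown above is addressed as %{[fields][env]}. A minimal sketch of an elasticsearch output using it in the index name; the hosts value and the index prefix are placeholders:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{[fields][env]}-%{+YYYY.MM.dd}"
  }
}

If a reference cannot be resolved at output time (for example because the field is missing or the path is written as %{fields.env}), Logstash leaves the literal sprintf text in the index name, which is a common symptom behind questions like this one.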

Find and replace across all Elasticsearch documents

扶醉桌前 submitted on 2019-11-28 06:22:08
Question: I wanted to replace a single username in all my Elasticsearch index documents. Is there an API or query for this? I tried searching but couldn't find one. Does anyone have an idea? My scenario: curl -XPOST 'http://localhost:9200/test/movies/' -d '{"user":"mad", "role":"tester"}' curl -XPOST 'http://localhost:9200/test/movies/' -d '{"user":"bob", "role":"engineer"}' curl -XPOST 'http://localhost:9200/test/movies/' -d '{"user":"cat", "role":"engineer"}' curl -XPOST 'http://localhost:9200/test/movies/' -d
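One way to do this (a sketch of the likely intent, since the excerpt is cut off) is the _update_by_query API, available in Elasticsearch 5.x and later. The replacement value "newname" is a placeholder, and the script key is "source" in recent versions ("inline" in 5.x):

curl -XPOST 'http://localhost:9200/test/_update_by_query' -H 'Content-Type: application/json' -d '{
  "query":  { "term": { "user": "mad" } },
  "script": { "source": "ctx._source.user = \"newname\"", "lang": "painless" }
}'

Dropping the query block updates every document in the index instead of only the ones whose user is mad.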
