elastic-stack

Combining log entries with logstash

北城以北 submitted on 2019-12-08 10:04:23
Question: I want to collect and process logs from dnsmasq, and I've decided to use ELK. Dnsmasq is used as a DHCP server and as a DNS resolver, and hence it creates log entries for both services. My goal is to send to Elasticsearch all DNS queries together with the requester IP, the requester hostname (if available), and the requester MAC address. That will allow me to group the requests per MAC address regardless of whether the device IP changed, and to display the hostname. What I would like to do is the following: 1) Read
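The excerpt is cut off, but the standard approach to this problem is Logstash's aggregate filter: remember the IP-to-MAC/hostname mapping from dnsmasq's DHCPACK lines, then copy it onto every DNS query event with the same client IP. A minimal sketch, assuming a syslog input that puts the program name in a program field and default dnsmasq log formats (field names are illustrative):

    filter {
      if [program] == "dnsmasq-dhcp" {
        # e.g. "DHCPACK(eth0) 192.168.1.23 aa:bb:cc:dd:ee:ff laptop"
        grok {
          match => { "message" => "DHCPACK\(%{DATA}\) %{IP:client_ip} %{MAC:mac_address}( %{HOSTNAME:client_hostname})?" }
        }
        aggregate {
          task_id    => "%{client_ip}"
          code       => "map['mac'] = event.get('mac_address'); map['host'] = event.get('client_hostname')"
          map_action => "create_or_update"
        }
      }
      if [program] == "dnsmasq" and [message] =~ /query\[/ {
        # e.g. "query[A] example.com from 192.168.1.23"
        grok {
          match => { "message" => "query\[%{WORD:query_type}\] %{DATA:domain} from %{IP:client_ip}" }
        }
        aggregate {
          task_id    => "%{client_ip}"
          code       => "event.set('mac_address', map['mac']); event.set('client_hostname', map['host'])"
          map_action => "update"
        }
      }
    }

Note that the aggregate filter only works with a single pipeline worker (-w 1), since its lookup map is per-worker state.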

using query_string query with bool in elastic search causing parsing exception

喜夏-厌秋 submitted on 2019-12-08 06:38:24
Question: Why is this query giving me a parsing exception? If I remove the bool it does seem to work, but I need the bool there together with the query_string. How can I make this work?

    {
      "query": {
        "filtered": {
          "query": {
            "bool": {
              "must": [
                { "terms": { "status_type": [ "5" ] } }
              ]
            }
          },
          "filter": {
            "query_string": {
              "fields": [ [ "name", "message" ] ],
              "query": "Arnold AND Schwarz"
            }
          }
        }
      },
      "sort": [
        { "total_metrics": { "order": "desc" } }
      ]
    }

Answer 1: You should use the query filter, which wraps any query into
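The answer is cut off, but in the pre-2.0 filter DSL a query_string query cannot appear directly in filter context; it has to be wrapped in a query filter. A sketch of what the completed answer presumably looks like (also flattening the doubly nested fields array, which is likely a transcription artifact):

    {
      "query": {
        "filtered": {
          "query": {
            "bool": {
              "must": [
                { "terms": { "status_type": [ "5" ] } }
              ]
            }
          },
          "filter": {
            "query": {
              "query_string": {
                "fields": [ "name", "message" ],
                "query": "Arnold AND Schwarz"
              }
            }
          }
        }
      },
      "sort": [
        { "total_metrics": { "order": "desc" } }
      ]
    }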

ELK stack for storing metering data

北城以北 submitted on 2019-12-08 04:28:32
Question: In our project we're using an ELK stack for storing logs in a centralized place. However, I've noticed that recent versions of Elasticsearch support various aggregations. In addition, Kibana 4 supports nice graphical ways to build graphs. Even recent versions of Grafana can now work with an Elasticsearch 2 datasource. So, does all this mean that the ELK stack can now be used for storing metering information gathered inside the system, or can it still not be considered a serious competitor to
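For context, the kind of metrics workload the question is weighing maps naturally onto a date_histogram aggregation with nested metric aggregations; a minimal sketch in the 2.x-era DSL, with hypothetical index and field names:

    GET metrics-*/_search
    {
      "size": 0,
      "aggs": {
        "per_minute": {
          "date_histogram": { "field": "@timestamp", "interval": "1m" },
          "aggs": {
            "avg_cpu": { "avg": { "field": "cpu_load" } }
          }
        }
      }
    }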

Convert any Elasticsearch response to simple field value format

假如想象 submitted on 2019-12-08 04:07:19
Question: On Elasticsearch, when doing a simple query like:

    GET miindex-*/mytype/_search
    {
      "query": {
        "query_string": {
          "analyze_wildcard": true,
          "query": "*"
        }
      }
    }

it returns a format like:

    {
      "took": 1,
      "timed_out": false,
      "_shards": { "total": 1, "successful": 1, "failed": 0 },
      "hits": {
        "total": 28,
        "max_score": 1,
        "hits": [ ...

So I parse response.hits.hits to get the actual records. However, if you are doing another type of query, e.g. an aggregation, the response is totally different, like: {
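The excerpt breaks off here, but the contrast it is drawing is that aggregation results live under a separate aggregations key rather than under hits.hits. Roughly, with a hypothetical aggregation named my_agg:

    {
      "took": 2,
      "timed_out": false,
      "_shards": { "total": 1, "successful": 1, "failed": 0 },
      "hits": { "total": 28, "max_score": 0, "hits": [] },
      "aggregations": {
        "my_agg": {
          "buckets": [
            { "key": "some_value", "doc_count": 12 }
          ]
        }
      }
    }

So a generic client has to read response.aggregations.my_agg.buckets instead of response.hits.hits.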

Combine logs and query in ELK

时光怂恿深爱的人放手 submitted on 2019-12-08 04:03:52
Question: With the ELK (Elasticsearch-Logstash-Kibana) stack, I collect syslog logs from *nix boxes into Logstash and send them to Kibana via Elasticsearch. This is the classical scenario. My syslog log includes normal system events, the squid access log, captive-portal login logs, etc. A captive-portal login is logged as:

    1423548430 2582 192.168.1.23 xx:ae:xx:e1:xx:99 mike.brown cc9aeb1210b39571 MTI=

and squid access is logged as:

    1423562965.228 482 192.168.1.23 TCP_MISS/200 1254 POST http://ad4.liverail.com/? - DIRECT
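The excerpt ends here, but grok patterns for the two line formats are the natural starting point, since both carry the client IP that correlates them. A sketch with assumed meanings for the captive-portal columns (note the redacted "xx" octets in the sample would not match %{MAC}; real addresses do):

    filter {
      grok {
        match => {
          "message" => [
            "^%{NUMBER:ts} %{NUMBER} %{IP:client_ip} %{MAC:client_mac} %{USERNAME:user} %{WORD:session_id} %{GREEDYDATA:token}",
            "^%{NUMBER:ts}\s+%{NUMBER:duration} %{IP:client_ip} %{WORD:cache_result}/%{NUMBER:status} %{NUMBER:bytes} %{WORD:method} %{NOTSPACE:url}"
          ]
        }
      }
    }

Once both event types carry client_ip, the same aggregate-filter enrichment sketched for the dnsmasq question above applies here as well.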

Query Elasticsearch JSON

不想你离开。 submitted on 2019-12-08 03:53:00
Question: I am trying to query Elasticsearch in order to find out what products were bought together with a certain product. My data goes into Logstash from a flat file:

    OrderNumber  ProductName
    order1       Chicken
    order2       Banana
    order3       Chicken
    order1       Cucumber
    order2       Chicken
    order3       Apples
    order1       Flour
    order2       Rice
    order3       Nuts

As you can see above, the ProductName Chicken occurs in different OrderNumbers:

    OrderNumber  ProductName
    order1       Chicken
    order3       Chicken
    order2       Chicken

This is what I would like to
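The question is truncated, but the usual two-step approach is: first find the orders that contain the product, then aggregate product names across those orders. A sketch with a hypothetical index name, assuming the fields are indexed as exact values (not_analyzed / keyword):

    Step 1: collect the order numbers that contain Chicken
    GET orders/_search
    {
      "_source": [ "OrderNumber" ],
      "query": { "term": { "ProductName": "Chicken" } }
    }

    Step 2: count what else appears in those orders
    GET orders/_search
    {
      "size": 0,
      "query": { "terms": { "OrderNumber": [ "order1", "order2", "order3" ] } },
      "aggs": {
        "bought_together": { "terms": { "field": "ProductName" } }
      }
    }

The bought_together buckets then rank co-purchased products by doc_count (Chicken itself will top the list and can be ignored).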

Serilog HTTP sink + Logstash: Splitting Serilog message array into individual log events

南笙酒味 submitted on 2019-12-07 08:46:15
Question: We're using the Serilog HTTP sink to send messages to Logstash, but the HTTP message body looks like this:

    {
      "events": [
        {
          "Timestamp": "2016-11-03T00:09:11.4899425+01:00",
          "Level": "Debug",
          "MessageTemplate": "Logging {@Heartbeat} from {Computer}",
          "RenderedMessage": "Logging { UserName: \"Mike\", UserDomainName: \"Home\" } from \"Workstation\"",
          "Properties": {
            "Heartbeat": { "UserName": "Mike", "UserDomainName": "Home" },
            "Computer": "Workstation"
          }
        },
        {
          "Timestamp": "2016-11-03T00:09:12
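The question is cut off, but splitting such a batched array into one event per entry is exactly what Logstash's split filter does. A minimal sketch, assuming the HTTP input receives the body as JSON (the http input decodes application/json bodies by default; the port is an assumption):

    input {
      http { port => 8080 }
    }
    filter {
      # one Logstash event per element of the "events" array
      split { field => "events" }
    }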

Docker container cannot send log to docker ELK stack

泪湿孤枕 submitted on 2019-12-06 15:41:51
I deployed the ELK stack and another, separate Docker container with a Spring Boot app. In the Java app, I use the LogstashSocketAppender to send logs to Logstash. If the Java app runs standalone without Docker, it works fine. But when it runs as a Docker container, Logstash does not receive the logs. Can anyone help me figure this out?

Part of the Logstash configuration:

    input {
      udp {
        port  => 5000
        type  => syslog
        codec => json
      }
    }

Docker ports:

    logstash       5000/udp -> 0.0.0.0:5000
    springboot     8088/tcp -> 0.0.0.0:32981
    elasticsearch  9200/tcp -> 0.0.0.0:9200
                   9300/tcp -> 0.0.0.0:9300
    kibana         5601/tcp -> 0
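The excerpt does not show the appender configuration, but the classic cause is the containerized app pointing at localhost, which inside its own container is not the Logstash container. A sketch of the likely fix in logback.xml, assuming both containers share a Docker network on which the Logstash container is reachable under the name logstash:

    <appender name="stash" class="net.logstash.logback.appender.LogstashSocketAppender">
      <!-- the Logstash container's network alias, not localhost -->
      <host>logstash</host>
      <port>5000</port>
    </appender>

Alternatively, since 5000/udp is published to the host, the appender can target the Docker host's IP instead.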

ElasticSearch Indexing 100K documents with BulkRequest API using Java RestHighLevelClient

≡放荡痞女 submitted on 2019-12-06 14:52:49
Question: I am reading 100k+ file paths from the index documents_qa using the scroll API. The actual files are available on my local D:\ drive. Using each file path, I read the actual file, convert it to base64, and reindex the base64 content (of the file) into another index, document_attachment_qa. My current implementation reads the filePath, converts the file to base64, and indexes the document along with its fileContent one document at a time. So it takes a long time; for example, indexing 4000 documents
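The excerpt stops here, but the title points at the remedy: batch the index operations into BulkRequests instead of sending one request per document. A minimal sketch against a 7.x-era RestHighLevelClient, reusing the index and field names from the question (batch size and connection details are assumptions):

    import org.apache.http.HttpHost;
    import org.elasticsearch.action.bulk.BulkRequest;
    import org.elasticsearch.action.bulk.BulkResponse;
    import org.elasticsearch.action.index.IndexRequest;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.elasticsearch.common.xcontent.XContentType;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Base64;
    import java.util.List;

    public class BulkReindexer {

        // filePaths: the 100k+ paths already fetched from documents_qa via the scroll API
        static void reindexWithBulk(RestHighLevelClient client, List<String> filePaths) throws IOException {
            BulkRequest bulk = new BulkRequest();
            int batchSize = 500; // tune down if the base64 payloads make requests too large

            for (String path : filePaths) {
                String base64 = Base64.getEncoder()
                        .encodeToString(Files.readAllBytes(Paths.get(path)));
                bulk.add(new IndexRequest("document_attachment_qa")
                        .source(XContentType.JSON, "filePath", path, "fileContent", base64));

                if (bulk.numberOfActions() >= batchSize) {
                    BulkResponse response = client.bulk(bulk, RequestOptions.DEFAULT);
                    if (response.hasFailures()) {
                        System.err.println(response.buildFailureMessage());
                    }
                    bulk = new BulkRequest(); // start the next batch
                }
            }
            if (bulk.numberOfActions() > 0) { // flush the final partial batch
                client.bulk(bulk, RequestOptions.DEFAULT);
            }
        }

        public static void main(String[] args) throws IOException {
            try (RestHighLevelClient client = new RestHighLevelClient(
                    RestClient.builder(new HttpHost("localhost", 9200, "http")))) {
                // reindexWithBulk(client, pathsFromScrollQuery(client)); // wire in the scroll results here
            }
        }
    }

Batching a few hundred documents per bulk call typically cuts round-trip overhead by orders of magnitude compared with indexing one document per request.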