logstash

Elasticsearch converting a string to number

Posted by ℡╲_俬逩灬 on 2019-12-21 18:39:28
Question: I am new to Elasticsearch and am just starting out with the ELK stack. I am collecting key-value logs in Logstash and passing them to an index in Elasticsearch, using the kv filter plugin. Because of this, all the fields are string type by default. When I try to perform an aggregation like avg or sum on a numeric field in Elasticsearch, I get an exception: ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch…
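A common fix is to cast the parsed values with a mutate filter right after kv, so numeric fields reach Elasticsearch as numbers. A minimal sketch; the question names no fields, so "response_time" and "bytes" here are hypothetical:

filter {
  kv { }                              # parses key=value pairs; every value comes out as a string
  mutate {
    # casts the existing string values in place
    convert => {
      "response_time" => "float"      # hypothetical numeric fields
      "bytes"         => "integer"
    }
  }
}

Alternatively, the types can be fixed on the Elasticsearch side with an index template, but the mutate approach keeps everything in the Logstash pipeline.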

Getting CloudTrail Logs into Logstash

Posted by 放肆的年华 on 2019-12-21 17:56:03
Question: I am trying to get logs from CloudTrail into Elasticsearch so that we can see better what is going on in our AWS account. I have set up both Logstash and Elasticsearch on my machine (Ubuntu 14.04) and can push text from stdin to Elasticsearch. However, when I try to use the S3 input, nothing is added to Elasticsearch. Here is the conf file I'm using (I have removed my Amazon keys):

input {
  s3 {
    bucket => 'ko-cloudtrail-log-bucket'
    secret_access_key => ''
    access_key_id => ''
    delete => false
…
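For reference, a hedged sketch of the usual shape for this input: the s3 input paired with the cloudtrail codec, which is a separate plugin (logstash-codec-cloudtrail) that must be installed first since CloudTrail writes gzipped JSON. The region value is an assumption; keys are left blank as in the question:

input {
  s3 {
    bucket            => "ko-cloudtrail-log-bucket"
    region            => "us-east-1"     # assumed; set to the bucket's region
    access_key_id     => ""              # redacted, as in the question
    secret_access_key => ""
    delete            => false
    codec             => "cloudtrail"    # unpacks the Records array into events
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}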

Kibana time delta between two fields

Posted by 徘徊边缘 on 2019-12-21 17:14:14
Question: I have two fields as part of a log message saved in our ELK cluster:

"EventTime": "2015-07-28 17:03:20",
"EventReceivedTime": "2015-07-28 17:03:22"

Is there a way to get the time difference between these fields (in this case 2 seconds) for each log message and display it through Kibana 3? If that's not possible, a direct Elasticsearch query would also work. Thanks in advance!

Answer 1: Yes, I just did it with some test data in Kibana using a scripted field. In Kibana, go to Settings, click on your index…
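If the delta can be computed at ingest time instead of query time, a ruby filter in Logstash is one option. A sketch, assuming both fields parse with the format shown and a Logstash version with the event.get/event.set API (5.x+); the target field name is made up:

filter {
  ruby {
    code => "
      require 'time'
      t1 = Time.parse(event.get('EventTime'))
      t2 = Time.parse(event.get('EventReceivedTime'))
      # store the gap in seconds as a numeric field, e.g. 2
      event.set('delivery_delay_sec', (t2 - t1).to_i)
    "
  }
}

Computing it once at ingest makes the value a plain indexed number, so Kibana can chart it without scripted fields.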

Use grok to add the log filename as a field in logstash

Posted by China☆狼群 on 2019-12-21 08:44:53
Question: I'm using grok and Logstash to ship access logs from Nginx to Elasticsearch. I'm feeding Logstash all my access logs (with a wildcard, which works well) and I would like to extract the filename (some part of it, to be exact) and use it as a field. My config is as follows:

input {
  file {
    path => "/var/log/nginx/*.access.log"
    type => "nginx_access"
  }
}
filter {
  if [type] == "nginx_access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
      match => { "path" => "%{GREEDYDATA}/%{GREEDYDATA:app}…
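The second match pattern is cut off above; a plausible completion, sketched here under the assumption that the files follow the /var/log/nginx/<app>.access.log naming from the input block ("app" is the question's own field name):

filter {
  if [type] == "nginx_access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
      # pull the app name out of /var/log/nginx/<app>.access.log
      match => { "path" => "%{GREEDYDATA}/%{GREEDYDATA:app}\.access\.log" }
      break_on_match => false   # apply both patterns, not just the first that matches
    }
  }
}

Without break_on_match => false, grok stops after the first successful match, so only one of the two fields would be populated.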

Java Filter For Logstash

Posted by 一笑奈何 on 2019-12-21 08:18:10
Question: There is a Ruby filter for Logstash that lets me write code in Ruby; it is usually included in the config file as follows:

filter {
  ruby {
    code => "...."
  }
}

Now I have two jar files that I would like to include in my filter, so that my input can be processed according to the operations in those jar files. However, I apparently cannot include a jar file in the Ruby code, and I've been looking for a solution.

Answer 1: So to answer this, I found this wonderful…
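Since Logstash runs on JRuby, a jar can usually be loaded with a plain require inside the ruby filter, then imported with java_import. A sketch; the jar path, class name, and process method are all hypothetical stand-ins for whatever the two jars actually provide:

filter {
  ruby {
    init => "
      require '/opt/jars/my-processor.jar'    # hypothetical jar on disk; init runs once per pipeline
      java_import 'com.example.MyProcessor'   # hypothetical class inside the jar
    "
    code => "
      result = MyProcessor.new.process(event.get('message'))
      event.set('processed', result)
    "
  }
}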

Logstash if statement with regex example

Posted by 陌路散爱 on 2019-12-21 07:15:16
Question: Can anyone show me what an if statement with a regex looks like in Logstash? My attempts:

if [fieldname] =~ /^[0-9]*$/
if [fieldname] =~ "^[0-9]*$"

Neither of these works. What I intend to do is check whether "fieldname" contains an integer.

Answer 1: To combine the other answers into a cohesive answer: your first format looks correct, but your regex is not doing what you want. /^[0-9]*$/ matches: ^ (the beginning of the line), [0-9]* (any digit, 0 or more times), $ (the end of the line). So your…
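The answer is cut off, but the point it is building toward is that * also matches zero digits, i.e. an empty value. A sketch of the condition with * tightened to + so it only fires when the whole value is one or more digits (the tag added inside is a hypothetical action):

filter {
  # true only when fieldname is entirely digits, at least one
  if [fieldname] =~ /^[0-9]+$/ {
    mutate { add_tag => ["is_integer"] }   # hypothetical; put the real handling here
  }
}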

Kibana Logstash ElasticSearch | Unindexed Fields

Posted by 社会主义新天地 on 2019-12-21 07:04:31
Question: I am exploring the ELK stack and have run into an issue. I have generated logs and forwarded them to Logstash; the logs are in JSON format, so they are pushed directly into ES with only a json filter in the Logstash config, and Kibana is connected and pointed at ES. Logstash config:

filter {
  json {
    source => "message"
  }
}

Now I have indexes created for each day's logs, and Kibana happily shows all of the logs from all indexes. My issue is: there are many fields in the logs which are not enabled/indexed for…
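One way to control which fields get indexed is to stop relying on the dynamic default mapping and point the elasticsearch output at a custom index template. A sketch; the template path, template name, and index pattern are placeholders, and the template file itself is ordinary Elasticsearch mapping JSON declaring the fields you want indexed:

output {
  elasticsearch {
    hosts           => ["localhost:9200"]
    index           => "logs-%{+YYYY.MM.dd}"
    template        => "/etc/logstash/my-template.json"   # hypothetical mapping file
    template_name   => "my-logs"
    manage_template => true    # Logstash installs the template on startup
  }
}

For fields that are indexed but still greyed out in Kibana, refreshing the index pattern's field list under Settings is often enough, since Kibana caches the field list per index pattern.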

Setting Elasticsearch Analyzer for new fields in logstash

Posted by 拥有回忆 on 2019-12-21 05:06:43
Question: Using the grok filter, we can add a new field in Logstash. But then, how do I set the analyzer for that particular field? For example, I have a new id field with values like a_b, but the standard analyzer shipped with Elasticsearch will break this into a and b. Because of this I can't apply the terms feature on that field efficiently. For the id field, I want to apply a custom analyzer of my own, which doesn't tokenize the value but…
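Analyzers are set in the Elasticsearch mapping, not in a Logstash filter, so the usual route is again a custom index template supplied through the elasticsearch output. A sketch with a hypothetical template path; the mapping detail lives in the JSON file, summarized in the comments:

output {
  elasticsearch {
    hosts              => ["localhost:9200"]
    # the referenced template (hypothetical path) maps the id field so it is
    # not tokenized: "index": "not_analyzed" on the string type in ES 1.x/2.x,
    # or "type": "keyword" on 5.x+, so terms aggregations see "a_b" unbroken
    template           => "/etc/logstash/id-template.json"
    template_overwrite => true
  }
}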

Docker Notes 02: Building an ELK Log Platform

Posted by 删除回忆录丶 on 2019-12-21 04:05:09
OS: CentOS 7. Prerequisites: install CentOS in a virtual machine and set up a Docker environment. ELK introduction: omitted; documentation is at https://elk-docker.readthedocs.io/. Note that since the Beats suite joined the ELK Stack, the new name is Elastic Stack; this walkthrough uses filebeat + elk. The elk image is large (version 7.0.1 is about 1.8 GB), so before starting it is advisable to switch the registry mirror to a domestic one such as Alibaba's or NetEase's; Alibaba mirror setup is described at https://www.cnblogs.com/anliven/p/6218741.html.

1. Pull the image:

docker pull sebp/elk

2. Run the image:

docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -v /usr/dockerfile:/data -it -d --name elk sebp/elk

Exposed ports: 5601 (Kibana web interface), 9200 (Elasticsearch JSON interface), 5044 (Logstash Beats interface, which receives logs from Beats such as Filebeat; see "Forwarding logs with Filebeat"…
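The container's port 5044 corresponds to a beats input inside the sebp/elk image. A minimal sketch of what that Logstash side looks like; this is illustrative of the shape, not the image's exact shipped pipeline:

input {
  beats {
    port => 5044    # matches the -p 5044:5044 mapping above; Filebeat ships here
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"   # one index per day, the usual Beats convention
  }
}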

[Original] Uncle's Experience Sharing (28): Analyzing nginx Logs with ELK

Posted by 余生颓废 on 2019-12-21 04:03:45
Install ELK (Elasticsearch, Logstash, Kibana) in advance.

Part 1: Start Logstash. $LOGSTASH_HOME defaults to /usr/share/logstash or /opt/logstash.

1. nginx logs use the default format:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

2. Download the GeoIP database:

# cd /etc/logstash
# wget https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
# tar xvf GeoLite2-City.tar.gz

3. Add the Logstash configuration:

# cat /etc/logstash/conf.d/nginx_access.conf
input {
  file {
    path => [ "/path/to/nginx/access.log" ]
    start_position => "beginning"
    ignore_older => 0
  }
}
filter {…
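The configuration is cut off at the filter block; a hedged completion of the usual shape for this setup: grok the combined format, then feed the client address to geoip. The extracted database directory name varies by release date, so the path here is a hypothetical example:

filter {
  grok {
    # the nginx "main" format above is the Apache combined format plus a quoted X-Forwarded-For
    match => { "message" => "%{COMBINEDAPACHELOG} \"%{DATA:http_x_forwarded_for}\"" }
  }
  geoip {
    source   => "clientip"   # field populated by COMBINEDAPACHELOG
    database => "/etc/logstash/GeoLite2-City_20191217/GeoLite2-City.mmdb"   # hypothetical extracted path
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "nginx-access-%{+YYYY.MM.dd}"
  }
}

With the geoip fields indexed, Kibana's tile/coordinate map visualizations can plot requests by the geoip.location field.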