logstash

Logstash: Unable to connect to external Amazon RDS Database

点点圈 submitted on 2020-06-17 02:14:46
Question: I'm relatively new to Logstash & Elasticsearch. I installed Logstash & Elasticsearch with Homebrew on macOS Mojave (10.14.2):

    brew install logstash
    brew install elasticsearch

When I check the versions with brew list --versions, I receive the following output:

    elasticsearch 6.5.4
    logstash 6.5.4

When I open Google Chrome and type localhost:9200 into the URL address field, this is the JSON response that I receive:

    { "name" : "9oJAP16", "cluster_name" : "elasticsearch_local", "cluster_uuid" :
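The usual way to pull rows from an external RDS database into Logstash is the jdbc input plugin. A minimal sketch, assuming a MySQL RDS instance; the endpoint, database name, table, credentials, and driver path below are all placeholders, not values from the question:

```conf
input {
  jdbc {
    # Path to the MySQL JDBC driver jar (must be downloaded separately)
    jdbc_driver_library    => "/path/to/mysql-connector-java-5.1.47.jar"
    jdbc_driver_class      => "com.mysql.jdbc.Driver"
    # Hypothetical RDS endpoint and database
    jdbc_connection_string => "jdbc:mysql://your-instance.abc123.us-east-1.rds.amazonaws.com:3306/yourdb"
    jdbc_user              => "dbuser"
    jdbc_password          => "dbpassword"
    statement              => "SELECT * FROM your_table"
  }
}
```

Note that the RDS security group must also allow inbound traffic on the database port from the machine running Logstash, or the connection will fail regardless of the Logstash configuration.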

Multiline pattern for logstash

拥有回忆 submitted on 2020-06-13 08:42:50
Question: I've searched SO and, of course, the search engine of choice, but found no valid solution. I'm trying to parse a multiline logfile with Logstash, without any success. The logfile looks like:

    appl.log
    2014-02-31 11:06:55,268 - WARN main com.applicationname.commons.shop.OrderDetails
    java.lang.NullPointerException
        at sometexthere sometexthere
        at sometexthere sometexthere
        at sometexthere sometexthere
        at sometexthere sometexthere
        at sometexthere sometexthere
        at sometexthere sometexthere
        at sometexthere
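For stack traces like this, the common approach is the multiline codec on the file input: every line that does not start with a timestamp is folded into the previous event. A sketch, assuming the log path shown above:

```conf
input {
  file {
    path => "/var/log/appl.log"
    codec => multiline {
      # Lines NOT starting with an ISO8601 timestamp belong to the previous event
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate  => true
      what    => "previous"
    }
  }
}
```

With this in place the WARN line and the whole NullPointerException trace arrive in Logstash as a single event, which a grok filter can then pick apart.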

Logstash - specify more than one pipeline

独自空忆成欢 submitted on 2020-05-17 06:35:46
Question: I want different fields to be processed in different ways. I have two pipelines: one processes boolean values, the other converts a string to an array.

    output {
      stdout { codec => rubydebug }
      elasticsearch {
        action => "index"
        hosts => ["127.0.0.1:9200"]
        index => "mini_system"
        document_id => "%{mini_system_key}"
        if [source] == "secure_flag" {
          pipeline => "bool-pipeline"
        } else if "partners" == %{FIELD} {
          pipeline => "partners-pipeline"
        }
      }
    }

This is what I am trying to do, but I am not able to
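A sketch of the usual fix, reusing the field and pipeline names from the question: in Logstash, conditionals cannot appear inside a plugin block; they wrap plugin blocks instead, so each branch needs its own elasticsearch output:

```conf
output {
  stdout { codec => rubydebug }
  if [source] == "secure_flag" {
    elasticsearch {
      hosts       => ["127.0.0.1:9200"]
      index       => "mini_system"
      document_id => "%{mini_system_key}"
      pipeline    => "bool-pipeline"
    }
  } else {
    elasticsearch {
      hosts       => ["127.0.0.1:9200"]
      index       => "mini_system"
      document_id => "%{mini_system_key}"
      pipeline    => "partners-pipeline"
    }
  }
}
```

The duplication is the price of the syntax rule; the condition for the second branch would be whatever field test actually distinguishes the "partners" documents.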

Deployment files: filebeat -> kafka cluster (ZooKeeper cluster) -> logstash -> ES cluster -> kibana

末鹿安然 submitted on 2020-05-09 21:30:50
The archive contains the following files:
1. install_java.txt: Java environment setup, used by logstash
2. es.txt: a three-node ES cluster
3. filebeat.txt: collects logs and outputs them to the kafka cluster
4. install_zookeeper_cluster.txt: ZooKeeper cluster
5. install_kafka_cluster.txt: kafka cluster
6. logstash.txt
7. kibana.txt

File download address: https://files.cnblogs.com/files/sanduzxcvbnm/部署文件.zip

Extension: manually create a kafka message topic:

    /opt/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic apache

filebeat.yml settings:

    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /etc/filebeat/access.log
    output.kafka:
      codec.format:
        string: '%{[@timestamp]} %{[message]}'
      hosts: ["192.168.43.192:9092"]
      topic:

Elasticsearch 7.5.1 cluster deployment

做~自己de王妃 submitted on 2020-05-09 08:39:57
Note: this post only covers a simple cluster configuration; for more depth, refer to the official documentation: Elasticsearch official docs, Kibana official docs.

I. Preparation

The environment is as follows:

    System      IP               Services
    CentOS 7.3  192.168.171.131  ES1, logstash, ES-Head, kibana
    CentOS 7.3  192.168.171.134  ES2
    CentOS 7.3  192.168.171.135  ES3

1. Configure name resolution

    [root@node1 ~]# cat > /etc/hosts << EOF
    > 192.168.171.131 node1
    > 192.168.171.134 node2
    > 192.168.171.135 node3
    > EOF
    [root@node1 ~]# scp /etc/hosts root@192.168.171.134:/etc/hosts
    [root@node1 ~]# scp /etc/hosts root@192.168.171.135:/etc/hosts

2. Configure the Java environment

Note: the following must be done on every node. See the official JDK download page; a login is required to download, so wget is not used here.

    # Remove the bundled Java environment
    [root@node1 ~]# rpm -qa | grep jdk
    copy-jdk-configs-1.2-1.el7
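For reference, a minimal elasticsearch.yml for node1 under the hosts file above might look like this; the cluster name is illustrative, and these are the 7.x discovery settings appropriate for 7.5.1:

```yaml
# Sketch: elasticsearch.yml for node1 (adjust node.name / network.host per node)
cluster.name: my-es-cluster
node.name: node1
network.host: 192.168.171.131
discovery.seed_hosts: ["node1", "node2", "node3"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
```

The other two nodes use the same cluster.name and seed/initial-master lists, changing only node.name and network.host.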

filter-grok / dissect data matching

末鹿安然 submitted on 2020-05-08 20:33:45
Grok (regex capture) and Dissect (tokenization):

grok uses regular-expression matching to extract unstructured log data and parse it into structured, queryable content. dissect uses multiple delimiters (non-alphanumeric symbols; split, by contrast, can only use one delimiter at a time) to extract unstructured log data.

dissect differs from grok in that it does not use regular expressions, and it is faster. It parses well when the data repeats reliably; when the structure of the text varies from line to line, grok is the better choice. When only part of a line repeats reliably, you can use dissect and grok together as a hybrid: the dissect filter deconstructs the repeating portions of the line, and the grok filter then handles the remaining field values with the full flexibility of regular expressions.

Custom pattern format:

    (?<field_name>the pattern here)

Example:

    [root@node2006 logstash]# bin/logstash -e 'input{stdin{}}filter{grok{match => {"message" => "(?<request_time>\d+\.\d+)" }}}output{stdout{codec=>rubydebug}}'
    # Matches a number containing a decimal point; the captured field value is a string.
    # Logstash has only three field types: string, integer, and float; when no type is specified, the default is string.
    123.456
    ...
    {
        "message" => "123.456",
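As a sketch of the dissect side, the same kind of reliably delimited line can be split without regular expressions; the field names in this mapping are illustrative, keyed to a line like "2014-02-31 11:06:55,268 - WARN main com.example.SomeClass":

```conf
filter {
  dissect {
    # Split on the fixed delimiters: space, space, " - ", space, space
    mapping => {
      "message" => "%{date} %{time} - %{level} %{thread} %{class}"
    }
  }
}
```

Because dissect only walks the delimiters, it stays fast; a grok filter can then be applied to any one of the captured fields if a regex is still needed.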

How to convert an IP address to latitude/longitude (Java edition)

梦想的初衷 submitted on 2020-05-08 08:44:04
A common requirement: you have users' IP addresses and want to display visit counts on a map. For that you need latitude and longitude, because map widgets are generally coordinate-based. So the question becomes: how do you convert an IP address into latitude and longitude?

Baidu API

The most "domestic" approach is the Baidu API, which provides two services.

The ordinary IP service:

    http://lbsyun.baidu.com/index.php?title=webapi/ip-api
    https://api.map.baidu.com/location/ip?ak=<your AK>&coor=bd09ll

Return value:

    {
      "address": "CN|吉林|长春|None|CERNET|1|None",
      "content": {
        "address": "吉林省长春市",
        "address_detail": {
          "city": "长春市",
          "city_code": 53,
          "district": "",
          "province": "吉林省",
          "street": "",
          "street_number": ""
        },
        "point": { "x": "125.31364243", "y": "43.89833761" }
      },
      "status": 0
    }

The high-accuracy service:

    http://lbsyun.baidu.com/index.php?title=webapi/high-acc-ip
    https://api.map.baidu.com/highacciploc
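As a minimal Java sketch of consuming that response: the class name and the naive key lookup below are illustrative, the JSON string is the sample response above, and a real program would first fetch it over HTTP with an actual AK. No third-party JSON library is used, so this only works for flat, well-formed values like the point coordinates here:

```java
public class BaiduIpPoint {
    // Naive extraction of a quoted string value by key; fine for this flat
    // sample response, but not a general JSON parser.
    static String extract(String json, String key) {
        int k = json.indexOf("\"" + key + "\"");
        int colon = json.indexOf(":", k);
        int start = json.indexOf("\"", colon + 1) + 1;
        int end = json.indexOf("\"", start);
        return json.substring(start, end);
    }

    public static void main(String[] args) {
        String json = "{ \"content\": { \"point\": "
            + "{ \"x\": \"125.31364243\", \"y\": \"43.89833761\" } }, \"status\": 0 }";
        // Longitude (x) and latitude (y) as returned by the service
        System.out.println(extract(json, "x") + "," + extract(json, "y"));
        // prints 125.31364243,43.89833761
    }
}
```

In production you would use a real JSON library (Jackson, Gson, or similar) and handle a non-zero status field.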

ELK study notes: building ELK on kafka (confluent)

£可爱£侵袭症+ submitted on 2020-05-08 03:52:40
0x00 Overview

A test setup of an ELK environment that uses kafka as the message queue. The data collection and transformation pipeline:

    F5 HSL -> logstash (stream processing) -> kafka -> elasticsearch

The ELK version under test is 6.3; the confluent version is 4.1.1.

The intended effect: logs sent by F5 HSL pass through logstash for stream processing and come out as JSON; that JSON content is saved to kafka exactly as-is, and kafka does no further format processing.

0x01 Testing

192.168.214.138: install logstash and the confluent environment
192.168.214.137: install the ELK suite (logstash disabled; only es and kibana started)

Notes on installing and debugging confluent: as with the ELK environment, install the Java environment first.

First, leaving kafka out of the picture, get F5 HSL -> Logstash -> ES running normally, with a simple kibana display working. When switching to kafka later, the output here just needs to be changed to the kafka plugin configuration.

The logstash configuration at this stage:

    input {
      udp {
        port => 8514
        type => 'f5-dns'
      }
    }
    filter {
      if [type] == 'f5-dns' {
        grok {
          match => { "message" => "%{HOSTNAME:F5hostname}
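When the pipeline is later switched over to kafka, the output section would look roughly like this; the topic name is illustrative, and the broker address assumes kafka runs on the confluent host above:

```conf
output {
  kafka {
    bootstrap_servers => "192.168.214.138:9092"
    topic_id          => "f5-dns"
    # Emit each event as JSON so kafka stores it verbatim
    codec             => json
  }
}
```

Everything upstream (the udp input and the grok filter) stays unchanged; only the destination moves from elasticsearch to kafka.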