logstash

logstash-input-file configuration in detail

萝らか妹 submitted on 2021-01-09 12:37:02
path: required; every file configuration needs at least one path. Supports glob expansion.
discover_interval: how often logstash checks the watched path for new files. Default: 15 seconds.
exclude: files you do not want watched can be excluded here; supports glob expansion just like path.
close_older: if a file already being watched gets no new content within this interval, the file handle watching it is closed. Default: 3600 seconds, i.e. one hour.
ignore_older: on each scan of the file list, any file whose last-modified time is older than this value is ignored. Default: 86400 seconds, i.e. one day.
sincedb_path: if you do not want the default $HOME/.sincedb (on Windows, C:\Windows\System32\config\systemprofile\.sincedb), use this option to put the sincedb file somewhere else.
sincedb_write_interval: how often logstash writes the sincedb file. Default: 15 seconds.
stat_interval: how often logstash checks the watched files for updates. Default: 1 second.
start_position: where logstash starts reading file data. The default is the end of the file, which means the logstash process runs in a manner similar to …
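
Pulling these options together, a minimal file input might look like the sketch below; the path and sincedb location are placeholders, and the interval values simply restate the defaults described above:

input {
  file {
    path => ["/var/log/app/*.log"]                    # placeholder; at least one path is required
    exclude => "*.gz"                                 # glob expansion, just like path
    discover_interval => 15                           # look for new files every 15 s (default)
    stat_interval => 1                                # check watched files for updates every 1 s (default)
    close_older => 3600                               # close idle file handles after one hour (default)
    ignore_older => 86400                             # skip files not modified within one day (default)
    sincedb_path => "/var/lib/logstash/sincedb-app"   # placeholder; overrides $HOME/.sincedb
    sincedb_write_interval => 15                      # persist read positions every 15 s (default)
    start_position => "beginning"                     # read new files from the start rather than the end
  }
}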

Logstash Grok pattern with multiple matches

拈花ヽ惹草 submitted on 2021-01-08 02:44:32
Question: I am attempting to write a grok expression that will result in multiple matches. I'm parsing a line that has 5 repetitions of the same pattern. I've been able to make a simple pattern with a regex that will return multiple matches, but it seems that Grok doesn't work that way. I don't really understand Ruby, so I haven't really inspected the code. Example input: 222444555 Pattern: (?<number>\d{3})* I would have expected output like this: "number" : [ [ "222", "444", "555" ] ] or something like
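
For context, grok's regex engine keeps only the last value captured by a repeated named group, so repetitions like this generally have to be handled outside grok. A minimal sketch of one common workaround, a ruby filter that scans the raw line (assuming the digits arrive in the message field):

filter {
  ruby {
    # scan returns every non-overlapping match as an array element
    code => 'event.set("number", event.get("message").scan(/\d{3}/))'
  }
}

With the example input 222444555 this sets number to ["222", "444", "555"].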

How do I add a custom field to logstash/kibana?

不问归期 submitted on 2021-01-07 08:30:45
Question: I am using python-logstash in order to write to logstash. It offers the option to add extra fields, but the problem is that all the fields end up under the message field. What I want to accomplish is adding a new field at the top level. I found a way to do that from the logstash.config (using the ruby / grok / mutate plugins), but this solution is not a scalable one (I would have to configure it for every machine instance). Something like: logger.info('my message') And in Kibana I will see: { '@timestamp':
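
For reference, python-logstash itself accepts an extra dict on each logging call and merges those keys into the event it ships, which may avoid per-machine logstash.config changes; a minimal sketch, with a hypothetical host, port, and field name:

import logging
import logstash

logger = logging.getLogger('python-logstash-logger')
logger.setLevel(logging.INFO)
# hypothetical logstash host and port
logger.addHandler(logstash.TCPLogstashHandler('localhost', 5959, version=1))

# keys passed via extra are added to the shipped event alongside @timestamp etc.
logger.info('my message', extra={'my_field': 'my_value'})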

How to deploy logstash with persistent volume on kubernetes?

此生再无相见时 submitted on 2021-01-07 06:31:18
Question: I am using GKE to deploy logstash as a StatefulSet with a PVC, and I also need to install an output plugin. When I don't use while true; do sleep 1000; done; in the container's command args, the deployment with the PVC fails and the pod goes into CrashLoopBackOff:
Normal Created 13s (x2 over 14s) kubelet Created container logstash
Normal Started 13s (x2 over 13s) kubelet Started container logstash
Warning BackOff 11s (x2 over 12s) kubelet Back-off restarting failed container
From here I found it can
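
A plausible cause, stated here only as an assumption, is that the plugin-install command replaces the image's entrypoint and then exits, so the container terminates and kubernetes restarts it. Installing the plugin and then exec-ing the normal entrypoint keeps the container alive; the plugin name below is a placeholder, and the entrypoint path assumes the official logstash image:

command: ["/bin/sh", "-c"]
args:
  - >-
    bin/logstash-plugin install logstash-output-example &&
    exec /usr/local/bin/docker-entrypoint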

Collecting log4j logs with logstash

烂漫一生 submitted on 2021-01-07 04:47:19
Using logstash to collect log4j log messages. log4j log-file configuration: the important parameters explained.
mode: the logstash working mode, either "server" or "client"; the default is "server". In server mode logstash is treated as the log server, receiving log messages generated on the log4j host. In client mode logstash is treated as the TCP initiator, requesting log messages from the log4j host.
host: the host address, a string such as "localhost" or "192.168.0.1". In server mode this is the address to listen on; in client mode it is the target address to connect to.
port: the port number, a number such as 4567 or 12345. In server mode this is the port to listen on; in client mode it is the target port to connect to.
data_timeout: the timeout in seconds. If set to -1 it never times out; the default is 5. If a TCP connection stays idle beyond this limit, it is disconnected or closed.
Server mode: server mode treats logstash as the server, with the java program that emits the log messages acting as the client. logstash configuration:
input {
  log4j {
    mode => "server"
    host => "localhost"  # note: this is the address or hostname of the logstash server
    port => 4560
  }
}
output {
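
The matching appender on the application side is not shown in this excerpt; a minimal sketch for log4j 1.x, pointing a SocketAppender at the server-mode listener above:

# log4j.properties on the host that produces the logs (sketch)
log4j.rootLogger=INFO, logstash
log4j.appender.logstash=org.apache.log4j.net.SocketAppender
# RemoteHost is the address of the logstash server
log4j.appender.logstash.RemoteHost=localhost
log4j.appender.logstash.Port=4560
log4j.appender.logstash.ReconnectionDelay=60000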

Monitoring and operating SpringBoot microservices

自闭症网瘾萝莉.ら submitted on 2021-01-06 14:33:29
Like most applications and systems, the development, release, and deployment of a SpringBoot microservice make up only a small part of its life cycle; operating and maintaining the application and the system is the real priority, and within operations, monitoring occupies a key position. One purpose of operations is to keep the system running smoothly so that the company's business can continue to serve its users. To achieve this we need to observe the state of the system continuously, in the hope of detecting the slightest disturbance, and then responding to it; monitoring is the means that exists for exactly this. We monitor a SpringBoot microservice at several layers:
the hardware layer
the network layer
the system layer
the application layer of the SpringBoot microservice
the service-access layer
From all of these layers we collect the relevant state data, then aggregate, store, and analyze it. Once some metric exceeds its configured threshold, an alert is raised; upon receiving the alert we need to act to correct the unhealthy state of the system, generally by adjusting the application through pre-provisioned control switches, for example restarting it or degrading the service. This is the "control" half of monitoring and control, and the whole process is shown in figure 1. For monitoring at the hardware, network, and system layers, existing monitoring systems and solutions already provide good support, such as the open-source Zabbix system or Nagios, whose strength is alerting. This section will not say much about those layers; instead we explore practical approaches to monitoring the application layer of SpringBoot microservices. The internal state of a SpringBoot microservice can be learned through a variety of means and channels; the application log it prints is one reflection of how a SpringBoot microservice is running.

Installing and using ElasticSearch

流过昼夜 submitted on 2021-01-05 03:01:23
I. Traditional installation
1. Download es: https://www.elastic.co/downloads/past-releases/elasticsearch-5-6-8
2. Run it: after downloading, unzip, go into the bin directory, and execute elasticsearch. Open http://localhost:9200/ in a browser to test whether it started.
3. Install the head plugin (for operating es visually):
1) Download the head plugin: https://github.com/mobz/elasticsearch-head
2) Unzip it to any directory, but keep it separate from the elasticsearch installation directory
3) Install Node.js, then install cnpm
4) Install grunt globally. Grunt is a Node.js-based project build tool that can run the tasks you define automatically: npm install -g grunt-cli
5) Install the dependencies: cnpm install
6) Go into the elasticsearch-head directory and start head: grunt server
7) Open http://localhost:9100/ in a browser to test whether the head plugin installed successfully
8) Enter the es address (port 9200) in the text box at the top of the page and click the connect button; a cross-origin error appears
4. Add two lines to config/elasticsearch.yml under the es directory so that es can be accessed cross-origin by head and other plugins, and restart es after changing the config: http.cors.enabled: true
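
The excerpt cuts off after the first of the two lines; the pair commonly added to elasticsearch.yml for this purpose is:

http.cors.enabled: true
http.cors.allow-origin: "*"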