logstash

ELK(1)

Submitted by 拟墨画扇 on 2020-01-26 19:05:33
Contents: ELK; 0. ELK overview (overview, JDK); 1. kibana (install, authentication); 2. elasticsearch (notes); 3. logstash (install)

0. ELK overview

Roles and sockets:

    kibana          192.168.80.20:5601
    elasticsearch   192.168.80.20:9200
    logstash        192.168.80.10:9600

Overview

Before learning ELK we should be clear about what it is for and why it is worth learning; these two basic questions ought to be answered before picking up any skill. ELK is used for log analysis. We could also analyze logs with plain text-viewing tools such as tail -f or less, so why use a dedicated stack? I see two main reasons: first, ELK ships with a graphical interface that analyzes and presents results automatically, so they are clear at a glance; second, regular expressions let you extract exactly the log content you need. Personally I regard ELK as a form of monitoring as well, but monitoring focused on logs rather than the all-around monitoring that a tool like Zabbix provides; ELK can be seen as a complement to Zabbix. There are many log-analysis solutions. For example, a domestic vendor named 建恒信安 sells an appliance called a "log audit" device, which is really just log analysis; my former company sold such devices, and each one is nothing more than an ordinary server with a log-analysis service installed.

Logstash, mongodb and jdbc

Submitted by 萝らか妹 on 2020-01-25 23:53:27
Question: I have a problem configuring Logstash. I want to be able to use the jdbc input for MongoDB. My config:

input {
  jdbc {
    jdbc_driver_library => "mongo-java-driver-3.2.2.jar"
    jdbc_driver_class => "com.mongodb.MongoClient"
    jdbc_connection_string => "jdbc:mongodb://localhost:27017"
    jdbc_user => ""
  }
}
output {
  stdout { }
}

The problem is:

:error=>"Java::JavaSql::SQLException: No suitable driver found for jdbc:mongodb://localhost:27017/"}

Answer 1: More inputs would be good. you must specify the location of
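A likely root cause, beyond what the truncated answer shows, is that mongo-java-driver is not a JDBC driver at all: it never registers itself with java.sql.DriverManager, which is exactly why "No suitable driver" is reported. A sketch of a working input, assuming a third-party MongoDB JDBC wrapper is used instead (the jar name, driver class, database name, and statement below are illustrative placeholders, not from the original post):

```
input {
  jdbc {
    # Absolute path to an actual JDBC driver for MongoDB (hypothetical jar name)
    jdbc_driver_library => "/opt/drivers/mongodb-jdbc-driver.jar"
    # Driver class exposed by that wrapper (illustrative, depends on the wrapper chosen)
    jdbc_driver_class => "com.example.mongodb.jdbc.MongoDriver"
    jdbc_connection_string => "jdbc:mongodb://localhost:27017/mydb"
    jdbc_user => ""
    statement => "SELECT * FROM mycollection"
  }
}
```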

Change date on event created by Elapsed or Aggregate filters

Submitted by 陌路散爱 on 2020-01-25 23:47:29
Question: When new_event_on_match is used with the elapsed filter, a new event is created with a fresh timestamp; the aggregate filter adds a new event with a fresh timestamp as well. I would like to use the timestamp from the original events, which is now available in the field elapsed_timestamp_start. How can I replace @timestamp in the newly created event? Can I use a date filter inside an elapsed filter?

Answer 1: For starters, just note that only the elapsed filter creates a new event, the aggregate filter
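One way to achieve what the question asks, sketched here rather than taken from the original answer, is to run a date filter after the elapsed filter and point it at the copied field; the timestamp format is an assumption:

```
filter {
  # Only events created by the elapsed filter carry this field
  if [elapsed_timestamp_start] {
    date {
      # Assumes the original timestamp is in ISO8601 form
      match  => ["elapsed_timestamp_start", "ISO8601"]
      # date writes to @timestamp by default; stated explicitly for clarity
      target => "@timestamp"
    }
  }
}
```

The date filter cannot be nested inside elapsed, but since it runs on every event, guarding it with the field-existence check restricts it to the newly created events.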

Logstash: Renaming nested fields based on some condition

Submitted by 风流意气都作罢 on 2020-01-25 11:58:05
Question: I am trying to rename nested fields in Elasticsearch while migrating to Amazon Elasticsearch. In the document I want to make the following changes:

1. If the value field holds JSON, change the value field to value-keyword and remove "value-whitespace" and "value-standard" if present.
2. If the value field has a size of more than 15, change the value field to value-standard.

"_source": {
  "applicationid" : "appid",
  "interactionId": "716bf006-7280-44ea-a52f-c79da36af1c5",
  "interactionInfo": [
    {
      "value": """{
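A conditional rename along these lines could be sketched with Logstash's ruby filter. The field names follow the question, but the JSON test, the threshold of 15 characters, and the overall structure are illustrative assumptions, not an answer from the original thread:

```
filter {
  ruby {
    code => '
      infos = event.get("interactionInfo")
      if infos.is_a?(Array)
        infos.each do |info|
          v = info["value"]
          next unless v.is_a?(String)
          if v.strip.start_with?("{")
            # Crude JSON check: rename and drop the variants
            info["value-keyword"] = info.delete("value")
            info.delete("value-whitespace")
            info.delete("value-standard")
          elsif v.length > 15
            info["value-standard"] = info.delete("value")
          end
        end
        event.set("interactionInfo", infos)
      end
    '
  }
}
```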

Using field as input to Logstash Grok filter pattern

Submitted by 别来无恙 on 2020-01-25 04:27:04
Question: I'm wondering whether it is possible to use a field in the Logstash message as the input to the grok pattern. Say I have an entry that looks like:

{
  "message": "10.1.1.1",
  "grok_filter": "%{IP:client}"
}

I want to be able to do something like this:

filter {
  grok {
    match => ["message", ["%{grok_filter}"]]
  }
}

The problem is that this crashes Logstash, because it appears to treat "%{grok_filter}" as the grok pattern itself instead of the value of grok_filter. I get the following after Logstash has crashed: The
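Grok compiles its patterns when the pipeline starts, so a pattern cannot be pulled from an event field at runtime; that is why %{grok_filter} is taken literally. A workaround, sketched here and not taken from the original thread, is to do the matching inside a ruby filter, where the pattern can be read from the event:

```
filter {
  ruby {
    code => '
      pattern = event.get("grok_filter")
      message = event.get("message")
      if pattern && message
        # Assumes grok_filter holds a plain Ruby regex with named groups,
        # e.g. "(?<client>\\d+\\.\\d+\\.\\d+\\.\\d+)", rather than grok syntax;
        # grok syntax would first need expanding into a regex.
        if (m = Regexp.new(pattern).match(message))
          m.names.each { |name| event.set(name, m[name]) }
        end
      end
    '
  }
}
```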

Learning Elasticsearch: the JDBC plugin

Submitted by 时光怂恿深爱的人放手 on 2020-01-24 23:09:56
Sometimes, once a relational database reaches a certain data volume, queries become very slow. In that case we can import the data from the MySQL relational database into Elasticsearch and query it there: Elasticsearch is a full-text index with real-time search support, so queries run much faster than they do in the database. Below we set up such a data-import experiment (the original article shows an architecture diagram here).

Since we are syncing data from MySQL to Elasticsearch, we first need to install Logstash together with the JDBC input plugin and the Elasticsearch output plugin, logstash-input-jdbc and logstash-output-elasticsearch:

[root@node1 ~]# cd /usr/local/logstash/bin/
[root@node1 bin]# ./logstash-plugin install logstash-input-jdbc
Validating logstash-input-jdbc
Installing logstash-input-jdbc
Installation successful
[root@node1 bin]# ./logstash-plugin install logstash-output-elasticsearch
Validating logstash
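With both plugins installed, a minimal pipeline for this kind of MySQL-to-Elasticsearch sync might look like the following. The hostnames, credentials, connector jar, table, and index names are placeholders, not values from the original article:

```
input {
  jdbc {
    jdbc_driver_library => "/usr/local/logstash/mysql-connector-java-5.1.46.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://192.168.80.10:3306/testdb"
    jdbc_user => "root"
    jdbc_password => "password"
    statement => "SELECT * FROM articles"
    schedule => "* * * * *"        # poll MySQL once a minute
  }
}
output {
  elasticsearch {
    hosts => ["192.168.80.20:9200"]
    index => "articles"
  }
}
```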

Logstash conditional statements

Submitted by 你说的曾经没有我的故事 on 2020-01-24 21:51:47
Use conditionals to decide which events a filter or output processes. Logstash conditionals resemble those of a programming language: they support if, else if, and else statements, and they can be nested.

Comparison and logical operators:

    equality:  ==, !=, <, >, <=, >=
    regexp:    =~ (matches), !~ (does not match)
    inclusion: in, not in
    boolean:   and, or, nand, xor
    unary:     ! (negation); () groups a compound expression, and !() negates the result of a compound expression

A statement such as if [foo] in "String" fails when no field named foo exists, because the missing field's value cannot be converted to a string. It is therefore best to add a field-exists check first:

if [foo] {
  mutate {
    add_field => { "bar" => "%{foo}" }
  }
}

Source: CSDN, author h_sn999, link: https://blog.csdn.net/h_sn9999/article/details/103964403
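Putting several of the operators above together in one output section, a hedged illustration (the field names, values, and index names are made up for the example):

```
output {
  if [loglevel] == "ERROR" and [service] =~ /^web/ {
    elasticsearch { hosts => ["192.168.80.20:9200"] index => "errors" }
  } else if [loglevel] in ["WARN", "INFO"] {
    elasticsearch { hosts => ["192.168.80.20:9200"] index => "app" }
  } else {
    stdout { }
  }
}
```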

logstash : Mutate { gsub … } not working

Submitted by 时光怂恿深爱的人放手 on 2020-01-24 12:42:08
Question:

mutate {
  add_field => { "eee" => "2016 uaie" }
  gsub => [ "eee", "2016", "2015" ]
}

This will indeed create a field "eee", but gsub will not update it. Why?

Answer 1: add_field runs only after the underlying filter succeeds. In your case the mutate{} (including gsub) runs first, and only then is add_field applied, so gsub never sees the field. To run gsub after the field is added, use two mutate blocks:

mutate {
  add_field => { "eee" => "2016 uaie" }
}
mutate {
  gsub => [ "eee", "2016", "2015" ]
}

Source: https://stackoverflow.com/questions/34596364

Notes on setting up an ELK log-monitoring platform

Submitted by 前提是你 on 2020-01-23 23:47:39
1. Prerequisite: the server already has a Java 1.8 environment.

2. ELK steps: first set up Elasticsearch (ES) for data storage, then Kibana for data visualization, and finally Logstash for moving the data.

Setting up ES:

1. Create a folder named elk under the server root:
   cd /
   mkdir elk
   cd elk
2. Download the ES package:
   wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.3.tar.gz
3. Unpack it:
   tar -zxvf elasticsearch-6.4.3.tar.gz

(The original article shows a screenshot of the resulting directory layout here.)

Next, adjust the configuration. This server has 1 core and 1 GB of RAM; ES defaults both -Xms and -Xmx to 1 GB, and normally you would raise them to around 4 GB. With only 1 GB available, and XXL and other services also running on the box, I lowered them to 512 MB instead, otherwise ES could not start.

Then raise the system's virtual-memory limit: vim /etc/sysctl.conf and set vm.max_map_count=262144, adding the line if the parameter is not yet defined. After exiting vim, run sysctl -p to make it take effect immediately.

Since ES cannot be run as root, create a new user and grant it ownership of the ES folder: useradd useres; chown -R useres /elk/elasticsearch-6.4.3. Then start ES from the ES folder's bin
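The steps above can be condensed into a short provisioning sketch. Paths, the version, and the user name useres follow the article; the jvm.options edit and the final startup command are my assumptions about how the author applied the heap and launch steps, and the whole sequence assumes it is run as root:

```
cd / && mkdir -p elk && cd elk
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.3.tar.gz
tar -zxvf elasticsearch-6.4.3.tar.gz

# Shrink the JVM heap to 512 MB for a 1 GB machine (config/jvm.options)
sed -i 's/-Xms1g/-Xms512m/; s/-Xmx1g/-Xmx512m/' elasticsearch-6.4.3/config/jvm.options

# Raise vm.max_map_count as ES requires
grep -q vm.max_map_count /etc/sysctl.conf || echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p

# ES refuses to run as root: create a dedicated user and start ES as that user
useradd useres
chown -R useres /elk/elasticsearch-6.4.3
su - useres -c '/elk/elasticsearch-6.4.3/bin/elasticsearch -d'
```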

Logstash not reading file input

Submitted by 本小妞迷上赌 on 2020-01-23 01:58:07
Question: I have a strange problem with Logstash. I am providing a log file as input to Logstash. The configuration is as follows:

input {
  file {
    type => "apache-access"
    path => ["C:\Users\spanguluri\Downloads\logstash\bin\test.log"]
  }
}
output {
  elasticsearch {
    protocol => "http"
    host => "10.35.143.93"
    port => "9200"
    index => "latestindex"
  }
}

I already have the Elasticsearch server running, and I am verifying with curl queries whether data arrives. The problem is that no data is being received when the
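Two common reasons a file input appears silent are worth checking here: by default the file input tails files from the end (start_position defaults to "end"), so existing lines are never read, and its sincedb bookkeeping can make it skip a file it believes it has already processed. A sketch of an input that rereads the file from the top (the sincedb_path value is an illustrative Windows-specific trick, not from the original thread):

```
input {
  file {
    type => "apache-access"
    # Forward slashes also work for Windows paths in Logstash
    path => ["C:/Users/spanguluri/Downloads/logstash/bin/test.log"]
    start_position => "beginning"   # read existing content, not just new lines
    sincedb_path => "NUL"           # discard the read-position bookmark on Windows (illustrative)
  }
}
```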