logstash

How to monitor if LogStash is fully loaded?

笑着哭i submitted on 2020-01-14 03:43:08
Question: I am using Logstash to collect the usage logs of my service. How can I tell whether a Logstash instance is fully loaded, so that I know when to add more servers to handle the logs? I don't want to miss any log events. Any suggestion would be helpful; thanks in advance :) Answer 1: First, Logstash only parses the logs and sends the log events somewhere to be stored, e.g. Elasticsearch. If you are using Elasticsearch as your log storage, you can try to install Marvel. It is a plugin for Elasticsearch; after
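The answer above is cut off, but on newer Logstash versions a complementary approach is the built-in monitoring API on port 9600. The sketch below parses a sample `_node/stats/pipelines` payload; the numbers are invented, and using the in/out backlog as a saturation signal is an illustrative assumption, not an official recommendation.

```python
import json

# Sample response shape from Logstash's monitoring API
# (GET http://localhost:9600/_node/stats/pipelines).
# The numbers here are made up for illustration.
sample = json.loads("""
{
  "pipelines": {
    "main": {
      "events": {
        "in": 120000,
        "out": 119500,
        "duration_in_millis": 360000
      },
      "queue": {
        "type": "persisted",
        "events": 500,
        "queue_size_in_bytes": 52428800,
        "max_queue_size_in_bytes": 1073741824
      }
    }
  }
}
""")

def pipeline_pressure(stats):
    """Return (event backlog, average ms spent per emitted event)
    for the main pipeline."""
    events = stats["pipelines"]["main"]["events"]
    backlog = events["in"] - events["out"]
    avg_ms = events["duration_in_millis"] / max(events["out"], 1)
    return backlog, avg_ms

backlog, avg_ms = pipeline_pressure(sample)
# A backlog that keeps growing between polls suggests Logstash is saturated.
print(backlog, round(avg_ms, 2))
```

Polling this endpoint periodically and watching whether the backlog and the persisted-queue size keep growing gives a rough load signal without any extra plugins.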

Logstash - import nested JSON into Elasticsearch

我是研究僧i submitted on 2020-01-14 03:09:08
Question: I have a large number (~40,000) of nested JSON objects that I want to insert into an Elasticsearch index. The JSON objects are structured like this:

```json
{
  "customerid": "10932",
  "date": "16.08.2006",
  "bez": "xyz",
  "birthdate": "21.05.1990",
  "clientid": "2",
  "address": [
    {
      "addressid": "1",
      "title": "Mr",
      "street": "main str",
      "valid_to": "21.05.1990",
      "valid_from": "21.05.1990"
    },
    {
      "addressid": "2",
      "title": "Mr",
      "street": "melrose place",
      "valid_to": "21.05.1990",
      "valid_from": "21.05.1990"
    }
  ]
}
```

So
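The excerpt stops before any answer, but one common approach for inserting many documents is Elasticsearch's `_bulk` endpoint, whose NDJSON body alternates an action line with a document line. The helper name `to_bulk_body`, the index name `customers`, and the sample document below are made up for illustration:

```python
import json

def to_bulk_body(docs, index="customers"):
    """Build an NDJSON body for Elasticsearch's _bulk endpoint:
    one action line followed by one document line per record."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # _bulk requires a trailing newline

docs = [
    {"customerid": "10932", "clientid": "2",
     "address": [{"addressid": "1", "title": "Mr", "street": "main str"}]},
]
body = to_bulk_body(docs)
print(body)
```

The resulting string can be POSTed to `http://<es-host>:9200/_bulk` with the `Content-Type: application/x-ndjson` header, batching a few thousand documents per request.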

Syncing MySQL to ES with Logstash

◇◆丶佛笑我妖孽 submitted on 2020-01-13 14:33:00
1. Download ES from https://www.elastic.co/downloads/elasticsearch and edit the IP, port, and other settings in elasticsearch.yml under config.
2. Download Kibana from https://www.elastic.co/downloads/kibana and edit the IP, port, ES address, and other settings in kibana.yml under config.
3. Download Logstash from https://www.elastic.co/downloads/logstash, download the JDBC driver jar mysql-connector-java-5.1.47.jar, and add a file jdbc.conf under config:

```
input {
  stdin { }
  jdbc {
    # mysql jdbc connection string to our backup database
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testguize"
    # the user we wish to execute our statement as
    jdbc_user => "chai"
    jdbc_password => "chai"
    # the path to our downloaded jdbc driver
    jdbc_driver_library => "E:
```
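The jdbc.conf above is cut off, so for reference here is a sketch of the remaining options such a sync job usually needs. The schedule, SQL statement, driver path, ES address, and index name below are placeholder assumptions, not part of the original post:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testguize"
    jdbc_user => "chai"
    jdbc_password => "chai"
    jdbc_driver_library => "/path/to/mysql-connector-java-5.1.47.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "* * * * *"          # run every minute (assumed)
    statement => "SELECT * FROM t"   # replace with your sync query
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]      # assumed ES address
    index => "testguize"
  }
}
```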

Logstash Shipper configuration for Jira

时间秒杀一切 submitted on 2020-01-13 12:12:21
Question: I am running Jira and Confluence within my company, and I would like their log files to be shipped to Kibana. This is very easy to do, but I do not want to rewrite the Grok filters; I cannot imagine that nobody has done this already. Does anybody have an example of a Logstash shipper configuration? Most of the logging, like catalina.log, is standard. Please help me with examples. Answer 1: One would think that Java application logs only come in one form, but my experience is that there often are subtle
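As the truncated answer hints, Java log layouts vary between applications and versions. Jira's atlassian-jira.log typically uses a log4j-style layout roughly like the sample below, so any shared Grok pattern needs checking against your actual files. This plain-Python regex illustrates the field structure; the sample line and group names are invented, and the pattern is a starting point rather than a ready-made Grok filter:

```python
import re

# Approximate layout of an atlassian-jira.log line; the exact
# format depends on the Jira version and log4j configuration.
LOG_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\s+"
    r"(?P<thread>\S+)\s+"
    r"(?P<level>[A-Z]+)\s+"
    r"(?P<msg>.*)"
)

line = "2020-01-13 12:12:21,456 http-nio-8080-exec-7 WARN  something went wrong"
m = LOG_RE.match(line)
print(m.group("level"), "|", m.group("msg"))
```

The same four fields (timestamp, thread, level, message) are what a corresponding Grok pattern would extract with `TIMESTAMP_ISO8601`, `NOTSPACE`, `LOGLEVEL`, and `GREEDYDATA`.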

Can logstash read directly from remote log?

走远了吗. submitted on 2020-01-12 10:47:30
Question: I am new to Logstash and have been reading about it for a couple of days. Like most people, I am trying to build a centralized logging system, store the data in Elasticsearch, and later use Kibana to visualize it. My application is deployed on many servers, so I need to fetch logs from all of them. Installing logstash-forwarder on all those machines and configuring them seems like a very tedious task (I will do it if it is the only way). Is there a way for logstash to
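The question is cut off, but the usual alternatives to a per-host forwarder are to have Logstash listen for logs pushed over the network, e.g. via syslog or a lightweight Beats agent. The sketch below uses conventional default ports; whether either input fits depends on how the remote hosts can ship their logs:

```
input {
  # remote hosts push logs via rsyslog/syslog-ng
  syslog {
    port => 5514
  }
  # or a Filebeat agent on each host ships files here
  beats {
    port => 5044
  }
}
```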

Logstash multiline matching

孤者浪人 submitted on 2020-01-11 23:02:49
A Java stack trace, where continuation lines start with whitespace:

```
Exception in thread "main" java.lang.NullPointerException
    at com.example.myproject.Book.getTitle(Book.java:16)
    at com.example.myproject.Author.getBookTitles(Author.java:25)
    at com.example.myproject.Bootstrap.main(Bootstrap.java:14)
```

```
input {
  stdin {
    codec => multiline {
      pattern => "^\s"
      what => "previous"
    }
  }
}
```

A statement continued onto the next line with a trailing backslash:

```
printf ("%10.10ld \t %10.10ld \t %s\
 %f", w, x, y, z );
```

```
input {
  stdin {
    codec => multiline {
      pattern => "\\$"
      what => "next"
    }
  }
}
```

An Elasticsearch log line:

```
[2015-08-24 11:49:14,389][INFO ][env ] [Letha] using [1] data paths, mounts [[/ (/dev/disk1)]], net usable_space [34.5gb], net total_space [118.9gb], types
```
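To see what the first codec config does, here is a plain-Python sketch of the `what => "previous"` behavior: any line matching the pattern (here, starting with whitespace) is folded into the event before it. This mimics the codec's effect for illustration; it is not Logstash code, and the function name is invented:

```python
import re

def join_multiline(lines, pattern=r"^\s"):
    """Mimic the multiline codec with what => "previous":
    lines matching the pattern are appended to the preceding event."""
    events = []
    for line in lines:
        if events and re.match(pattern, line):
            events[-1] += "\n" + line
        else:
            events.append(line)
    return events

raw = [
    'Exception in thread "main" java.lang.NullPointerException',
    "    at com.example.myproject.Book.getTitle(Book.java:16)",
    "    at com.example.myproject.Author.getBookTitles(Author.java:25)",
]
# The whole stack trace collapses into a single event.
print(len(join_multiline(raw)))
```

With `what => "next"`, the logic flips: a matching line (e.g. one ending in a backslash) is held and joined to the line that follows it.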

Changing the input data in logstash using a filter

只谈情不闲聊 submitted on 2020-01-11 12:54:28
Question: My input data comes from a table. Each row looks like <Customer_id> <Item_id> <Item name>, with a separate row for every item a customer buys. For example, if c1 buys i1, i2, i3, i4, i5, there will be 5 rows in the table. The data I want to insert into Elasticsearch should look something like this:

```json
{
  "c1": [
    { "item_id": "i1", "item_name": "ABC" },
    { "item_id": "i2", "item_name": "XYZ" },
    ...
  ],
  "c2": [
    { "item_id": 4, "item_name": "PQR" }
  ]
}
```

How can I modify the input
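The excerpt ends before any answer, but the reshaping being asked for is a group-by on customer id. Here it is in plain Python with invented sample rows; inside Logstash itself this would more likely be done with an aggregate filter, which is beyond this excerpt:

```python
from collections import defaultdict

rows = [
    ("c1", "i1", "ABC"),
    ("c1", "i2", "XYZ"),
    ("c2", "i4", "PQR"),
]

def group_by_customer(rows):
    """Collapse (customer_id, item_id, item_name) rows into
    one list of item dicts per customer."""
    out = defaultdict(list)
    for customer_id, item_id, item_name in rows:
        out[customer_id].append({"item_id": item_id, "item_name": item_name})
    return dict(out)

print(group_by_customer(rows))
```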

Notes_20190712

一世执手 submitted on 2020-01-11 02:45:10
Changing the MySQL root password

1. Enter MySQL: mysql -uroot -p
2. Change the password (two ways):
   - Option 1: change root's localhost (local access) password: set password for root@localhost = '2019qwe';
   - Option 2:
     1) use mysql; (select the database first, otherwise the next step errors out)
     2) update user set authentication_string = password('2019qwe') where host = '%' and user = 'root'; (changes root's remote-host access password)
3. Flush privileges: flush privileges;

Problems encountered while changing the password:
1) error 1054 (42s22): unknown column 'password' in 'field list'. The password column was renamed to authentication_string in version 5.6 and later.
2) On Linux the change took effect (logging in requires the new password), but a client connection tool could still log in with the old one. The reason: MySQL keeps two host entries for root, localhost (local) and % (remote), each with its own password.

MyBatis dynamic SQL
1) Checking list length in MyBatis: for a list, use size or size() =
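The note is cut off, but as a generic illustration of the list-size check it mentions, a MyBatis mapper typically tests a collection parameter in OGNL like this. The mapper id, table, and column names below are invented:

```xml
<select id="findByIds" resultType="map">
  SELECT * FROM user
  <if test="ids != null and ids.size() > 0">
    WHERE id IN
    <foreach collection="ids" item="id" open="(" separator="," close=")">
      #{id}
    </foreach>
  </if>
</select>
```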

ELK Deployment

霸气de小男生 submitted on 2020-01-10 19:13:18
ELK Deployment

When ELK is useful: How much traffic does the company website get? When are the peak hours? What hot data is accessed most often? We could extract all of this ourselves with shell tools, but with many sites and many servers that becomes inconvenient and hard to read. Hence ELK: it can not only identify peak access periods but also produce charts, so your boss can see everything at a glance. ELK has become a must-deploy project at major internet companies, so let's deploy an ELK stack.

Lab environment:
- 192.168.254.13: ES, Kibana
- 192.168.254.11: Logstash
- ELK version: 7.5.1
- OS: CentOS Linux release 7.6.1810 (Core)

Note: make sure firewalld and SELinux are disabled, and preferably use machines with at least 2 CPUs and at least 2 GB of RAM.

How it works: Logstash collects log data from the clients and sends it to the ES server, and Kibana presents it in a web UI.
Note: the ES and Logstash servers require Java 8 or later.

Deploying Kibana
1. Unpack the ES and Kibana packages and move them to /usr/local:

```
tar -zxvf kibana-7.5.1-linux-x86_64.tar.gz
tar -zxvf elasticsearch-7.5.1-linux-x86_64.tar.gz
mv elasticsearch
```
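The walkthrough is cut off, but after unpacking, the kibana.yml mentioned earlier typically needs at least the settings below. The IP addresses match the lab environment above; the port is Kibana's default, and the exact install path is an assumption:

```yaml
# /usr/local/kibana-7.5.1-linux-x86_64/config/kibana.yml (assumed path)
server.port: 5601
server.host: "192.168.254.13"
elasticsearch.hosts: ["http://192.168.254.13:9200"]
```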

How ELK Works: An Introduction

风格不统一 submitted on 2020-01-10 11:13:42
Why use ELK: For ordinary log analysis, grep and awk directly on the log files will get you the information you want. But at larger scale this approach breaks down: the log volume is too big to archive, text search is too slow, and multi-dimensional queries are hard. You need centralized log management, collecting and aggregating the logs from all servers. The common solution is a centralized log collection system that gathers, manages, and provides access to the logs from every node.

Large systems are usually distributed deployments, with different service modules on different servers. When a problem occurs, you typically have to locate the specific server and module from whatever key information the problem exposes; a centralized logging system makes that much more efficient.

A complete centralized logging system needs these key capabilities:
- Collection: ingest log data from multiple sources
- Transport: reliably ship the log data to the central system
- Storage: store the log data
- Analysis: support analysis in a UI
- Alerting: provide error reporting and monitoring

ELK provides a complete solution built from open-source software whose components work together seamlessly, efficiently covering many use cases; it is currently one of the mainstream logging stacks.

What ELK is: ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. A fourth tool, Filebeat, has since been added: a lightweight log collection agent that uses few resources and is well suited to collecting logs on each server and shipping them to Logstash