logstash

Deploying an ELK log collection system with Docker (Kafka approach)

你说的曾经没有我的故事 Submitted on 2020-01-01 03:11:23
1. logback + ELK, shipping over TCP. For the environment setup, see the previous post: https://www.cnblogs.com/alan6/p/11667758.html. The problem with the TCP approach: when log volume and concurrency are high, sending over TCP can lose log messages. Consider using Kafka to buffer the log messages instead and smooth out traffic peaks.
2. logback + Kafka + ELK
1) Install ZooKeeper + Kafka with Docker.
Pull the images:
docker pull wurstmeister/zookeeper
docker pull wurstmeister/kafka
Run ZooKeeper:
docker run -d --name zookeeper --restart always --publish 2181:2181 --volume /etc/localtime:/etc/localtime wurstmeister/zookeeper:latest
Run Kafka:
docker run -d --name kafka --restart always --publish 9092:9092 --link zookeeper --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
--env KAFKA_ADVERTISED_HOST_NAME
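On the Logstash side, a minimal pipeline that reads these messages from Kafka and writes them to Elasticsearch could look like this sketch; the topic name, broker address, and index pattern are assumptions for illustration, not values from the original post:

input {
  kafka {
    bootstrap_servers => "localhost:9092"     # assumed Kafka broker address
    topics => ["app-log"]                     # assumed topic; must match the logback Kafka appender
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]               # assumed Elasticsearch address
    index => "app-log-%{+YYYY.MM.dd}"         # assumed daily index pattern
  }
}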

Spring Cloud log collection with ELK

半城伤御伤魂 Submitted on 2020-01-01 03:10:33
Recently, for microservice development, I set up the ELK log collection stack and hit a few pitfalls along the way, so I'm writing this post to record the process.
First, download and install Elasticsearch from the official site: https://www.elastic.co/downloads/elasticsearch. Download the version that matches your environment, unpack it, find the startup script in the location shown and double-click it. When the page responds as shown, startup has succeeded. You can also configure Elasticsearch's shard naming to your liking.
Likewise, download Kibana from https://www.elastic.co/downloads/kibana. Note that the Elasticsearch path configured in the file below must match the installation path from the previous step; then double-click to start it.
Finally, install Logstash from https://www.elastic.co/downloads/logstash. Check whether the extraction path contains any spaces; if it does, move it to a different path, otherwise Logstash may fail to start. Then, in that directory, create a logstash.conf (the name is arbitrary) with the following content (using the officially recommended Redis approach):
input {
  redis {
    data_type => "list"        # storage type
    type => "redis-input"
    key => "logstash:redis"    # key; must match the key used on the Spring Boot side
    host => "127.0.0.1"
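For reference, a complete Redis-to-Elasticsearch pipeline along the same lines could look like the sketch below; the Redis port, Elasticsearch hosts, and index name are assumptions for illustration rather than values from the original post:

input {
  redis {
    data_type => "list"
    type      => "redis-input"
    key       => "logstash:redis"          # must match the key used by the Spring Boot appender
    host      => "127.0.0.1"
    port      => 6379                      # assumed default Redis port
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]            # assumed Elasticsearch address
    index => "springcloud-%{+YYYY.MM.dd}"  # assumed index name
  }
}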

Writing logs from Spring Boot to ELK

只谈情不闲聊 Submitted on 2020-01-01 03:09:28
In Spring Boot, connect to the Logstash instance in the ELK stack and write logs to a specified index; Kibana is then used to query and analyze the logs, and Elasticsearch stores them.
Add the dependency:
implementation 'net.logstash.logback:logstash-logback-encoder:5.3'
Add the configuration:
<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false">
    <!-- Location of the log files; do not use relative paths in the Logback configuration -->
    <property name="LOG_HOME" value="./logs" />
    <!-- Console output -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <!-- Output format: %d date, %thread thread name, %-5level level padded to 5 characters, %msg message, %n newline -->
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level
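To actually ship these logs to Logstash, logstash-logback-encoder is typically wired in as a TCP socket appender; the sketch below shows one way to do it, where the destination host/port and the custom index field are assumptions for illustration, not values from the original post:

<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- assumed address of the Logstash TCP input -->
    <destination>127.0.0.1:4560</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <!-- assumed custom field, e.g. used on the Logstash side to pick the index -->
        <customFields>{"index":"my-app"}</customFields>
    </encoder>
</appender>
<root level="INFO">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="LOGSTASH" />
</root>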

Where do .raw fields come from when using Logstash with Elasticsearch output?

泪湿孤枕 Submitted on 2020-01-01 02:42:20
Question: When using Logstash and Elasticsearch together, fields with .raw are appended for analyzed fields, so that when querying Elasticsearch with tools like Kibana, it's possible to use the field's value as-is, without per-word splitting and whatnot. I built a new installation of the ELK stack with the latest versions of everything, and noticed my .raw fields are no longer being created as they were on older versions of the stack. There are a lot of folks posting solutions of creating
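For background: the .raw sub-fields are not created by Elasticsearch itself but by the default index template that Logstash's elasticsearch output installs for indices named logstash-*; from the 5.x releases onward the default template creates .keyword sub-fields instead of .raw. A minimal sketch of the output settings involved (hosts are illustrative, and the index must match the template's pattern for the multi-fields to appear):

output {
  elasticsearch {
    hosts => ["localhost:9200"]            # illustrative
    index => "logstash-%{+YYYY.MM.dd}"     # the default template only applies to logstash-* indices
    manage_template => true                # let Logstash install its default template
  }
}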

Setting up an ELK server on Ubuntu

北城余情 Submitted on 2020-01-01 00:34:20
Reposted from: http://blog.topspeedsnail.com/archives/4825. If this is only for experimentation, you can skip installing SSH.
Building an ELK log analysis platform on Ubuntu 16.04. The ELK Stack I want to build is shown in the diagram. Recommended ELK server configuration: at least 4 GB of RAM, 2 CPUs, Ubuntu 16.04.
#1 Install the Java JDK. Elasticsearch and Logstash are both written in Java, so we need to install Java first. Oracle Java 8 is recommended for Elasticsearch (OpenJDK should also work): see "Install Java JDK on Ubuntu 16.04".
#2 Install Elasticsearch. Import the Elasticsearch GPG public key:
$ wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Add the Elasticsearch repository:
$ echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x
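From here, the usual next steps are to refresh apt, install the package, and start the service; a sketch, assuming the 2.x repository added above (service management commands can differ slightly by Elasticsearch version):

$ sudo apt-get update
$ sudo apt-get install elasticsearch
$ sudo systemctl enable elasticsearch
$ sudo systemctl start elasticsearch
$ curl http://localhost:9200        # verify the node responds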

Syncing MySQL data to Elasticsearch with Logstash

放肆的年华 Submitted on 2019-12-31 20:49:34
Install Logstash (keep its version in line with the installed Elasticsearch version):
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.0.tar.gz
Unpack it:
tar -zxvf logstash-6.5.0.tar.gz
Run java -version (if there is no JDK, one needs to be installed). If the command above reports an error, install the JDK:
wget https://download.oracle.com/otn/java/jdk/8u231-b11/5b13a193868b4bf28bcb45c792fce896/jdk-8u231-linux-x64.tar.gz
Configure the environment variables:
vi /etc/profile
JAVA_HOME=/data/jdk1.8.0_161
CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
PATH=$JAVA_HOME/bin:$HOME/bin:
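The MySQL-to-Elasticsearch sync itself is normally done with Logstash's jdbc input plugin; below is a minimal sketch, in which the driver path, database, credentials, table, and index names are all assumptions for illustration, not values from the original post:

input {
  jdbc {
    jdbc_driver_library    => "/data/mysql-connector-java-5.1.47.jar"    # assumed driver path
    jdbc_driver_class      => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/mydb"         # assumed database
    jdbc_user              => "root"
    jdbc_password          => "changeme"
    schedule               => "* * * * *"                                # poll once a minute
    statement              => "SELECT * FROM my_table WHERE updated_at > :sql_last_value"
  }
}
output {
  elasticsearch {
    hosts       => ["127.0.0.1:9200"]
    index       => "my_table"
    document_id => "%{id}"    # reuse the primary key so repeated runs update instead of duplicating
  }
}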

Parse Apache2 Error logs with Grok for Logstash

北城以北 Submitted on 2019-12-31 14:56:13
Question: I'm trying to parse my Apache2 error log and I'm having a bit of trouble. It doesn't seem to be matching the filter. I'm pretty sure the timestamp piece is wrong, but I'm not sure, and I can't really find any documentation to figure it out. Also, is there a way to get what is in fields.errmsg into my @message?
Log:
[Wed Jun 26 22:13:22 2013] [error] [client 10.10.10.100] PHP Fatal error: Uncaught exception '\Foo\Bar'
Shipper Config:
input { file { 'path' => '/var/log/apache2/*-error.log' 'type' =>
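One way to match the sample line above is to spell out the Apache error-log timestamp with the built-in grok patterns and then copy the captured errmsg over message with mutate; this is a sketch of the general idea under those assumptions, not the poster's exact configuration:

filter {
  grok {
    match => { "message" => "\[%{DAY} %{MONTH:month} %{MONTHDAY:day} %{TIME:time} %{YEAR:year}\] \[%{LOGLEVEL:severity}\] \[client %{IPORHOST:clientip}\] %{GREEDYDATA:errmsg}" }
  }
  mutate {
    # overwrite the original message with the extracted error text
    replace => { "message" => "%{errmsg}" }
  }
}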

Elasticsearch index much larger than the actual size of the logs it indexed?

有些话、适合烂在心里 Submitted on 2019-12-31 10:04:30
Question: I noticed that Elasticsearch consumed over 30 GB of disk space overnight. By comparison, the total size of all the logs I wanted to index is only 5 GB... well, not even that really, probably more like 2.5-3 GB. Is there any reason for this, and is there a way to re-configure it? I'm running the ELK stack.
Answer 1: There are a number of reasons why the data inside of Elasticsearch would be much larger than the source data. Generally speaking, Logstash and Lucene are both working to add structure to
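Two quick checks that usually help here, sketched with assumed index and template names: compare the on-disk size per index with the _cat API, and, for pre-5.x mappings like the ones discussed in this question, consider disabling the catch-all _all field in the Logstash template:

curl 'localhost:9200/_cat/indices?v'

curl -XPUT 'localhost:9200/_template/logstash_noall' -d '{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "_all": { "enabled": false }
    }
  }
}'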

Expected one of #, input, filter, output in logstash

佐手、 Submitted on 2019-12-31 00:50:10
Question: I am trying to make my Logstash installation work by simply executing the command given in the documentation to echo back whatever is typed, but that gives me the following error.
My command:
C:\logstash-1.4.0\bin>logstash.bat agent -e 'input{stdin{}}output{stdout{}}'
And the error:
Error: Expected one of #, input, filter, output at line 1, column 1 (byte 1) after
You may be interested in the '--configtest' flag which you can use to validate logstash's configuration before you choose to restart a
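A common cause of this particular error on Windows is the quoting: cmd.exe does not strip single quotes, so Logstash sees the literal quote character as the first byte of the -e config string. A sketch of the same command with double quotes, using the path from the question:

C:\logstash-1.4.0\bin>logstash.bat agent -e "input { stdin { } } output { stdout { } }"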

Parse multiline JSON with grok in logstash

余生长醉 Submitted on 2019-12-30 06:47:12
Question: I've got a JSON document of the format:
{
  "SOURCE": "Source A",
  "Model": "ModelABC",
  "Qty": "3"
}
I'm trying to parse this JSON using Logstash. Basically I want the Logstash output to be a list of key:value pairs that I can analyze using Kibana. I thought this could be done out of the box. From a lot of reading, I understand I must use the grok plugin (I am still not sure what the json plugin is for). But I am unable to get an event with all the fields. I get multiple events (one event for each attribute
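For what it's worth, pretty-printed JSON like this is usually handled without grok at all: a multiline codec stitches the physical lines back into one event, and the json filter then expands the fields. A minimal sketch, with the file path assumed for illustration:

input {
  file {
    path  => "/tmp/input.json"       # assumed path
    codec => multiline {
      pattern => "^\{"               # a line starting with '{' begins a new event
      negate  => true
      what    => "previous"          # all other lines are appended to the previous one
    }
  }
}
filter {
  json {
    source => "message"              # parse the reassembled JSON into top-level fields
  }
}

One caveat with this approach: the last event is only flushed when the next line matching the pattern arrives, so a trailing document may sit in the codec until more input comes in.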