logstash

Logstash XML Parse Failed

Submitted by 假装没事ソ on 2020-12-30 04:01:12
Question: I'm running the latest ELK stack 6.6 on the deviantony/docker-elk image. I have the following XML file, which I am trying to parse into an ES JSON object:

<?xml version="1.0" encoding="UTF-8"?>
<root>
  <ChainId>7290027600007</ChainId>
  <SubChainId>001</SubChainId>
  <StoreId>001</StoreId>
  <BikoretNo>9</BikoretNo>
  <DllVerNo>8.0.1.3</DllVerNo>
</root>

My conf file is:

input {
  file {
    path => "/usr/share/logstash/logs/example1.xml"
    type => "xml"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec =>
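The excerpt cuts off at the codec setting. A common pattern for this kind of whole-file XML ingest is a multiline codec that gathers all lines into one event, followed by the xml filter. The sketch below is an assumption about how such a pipeline is typically completed, not the asker's actual configuration; the `doc` target and index name are placeholders:

```
input {
  file {
    path => "/usr/share/logstash/logs/example1.xml"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    # join every line that is not the opening <root> tag onto the previous event,
    # so the whole document arrives as a single message
    codec => multiline {
      pattern => "<root>"
      negate => true
      what => "previous"
      auto_flush_interval => 1
    }
  }
}
filter {
  xml {
    source => "message"
    target => "doc"          # parsed elements land under [doc]
    force_array => false
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "xml-demo"
  }
}
```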

微博众筹的架构设计

Submitted by 做~自己de王妃 on 2020-12-17 02:41:18
The architecture of Weibo Crowdfunding. Introduction: every day we can feel internet finance growing and advancing. On June 19, Weibo's commercial products division, together with Tianhong Asset Management (Yu'e Bao), Xiaomi Pay, and fintech startups such as Fuqianla, organized the first Internet Finance Systems salon, sharing and exchanging views with the industry on the core technical architecture, system security, data consistency, and business development models encountered in internet finance. This article is Chen Jie's talk at that salon, published first by High Availability Architecture with his authorization.

Chen Jie is a senior system architect at Sina Weibo. He graduated from the chemistry department of Tsinghua University. Starting in 2004 he worked in testing, GIS secondary development, and game development, tried his hand at a startup, and at the end of 2011 joined Sina, where he first got into internet work; he likes doing the things programmers like to do.

Internet finance already touches every aspect of life: holding just a phone, with no bank card and no cash, we can experience a new era of food, clothing, housing, and transport. Internet finance has become a field that the internet giants are racing to enter; BAT, Weibo, and Xiaomi are all pushing into finance. At the start of this year JD Finance raised 6.65 billion RMB, and in April Weibo Finance released the first Weibo Crowdfunding product. Just this month, Xiaomi publicly announced its entry into private banking, having obtained approval from the China Banking Regulatory Commission.

The topic of my talk today is the Weibo Crowdfunding architecture. My name is Chen Jie. I graduated from Tsinghua's chemistry department in 2004 and then stumbled into the IT industry. I started out in testing, gradually picked up some VB, spent two years in game development, later went off to start a company with friends, and in 2011 came to Weibo to begin working on internet-related things.

I have two personal labels. First, I like cycling; before the startup days, when I was younger, I would ride at Xiangshan basically two days a week. The other label is being a dad.

Unable to ingest XML file into Elastic Search using Logstash XML filter

Submitted by 情到浓时终转凉″ on 2020-12-16 02:20:40
Question: I have this XML file, which I stored in D:\ on Windows 10:

<?xml version="1.0" encoding="UTF-8"?>
<root>
  <ChainId>7290027600007</ChainId>
  <SubChainId>001</SubChainId>
  <StoreId>001</StoreId>
  <BikoretNo>9</BikoretNo>
  <DllVerNo>8.0.1.3</DllVerNo>
</root>

I have installed Elasticsearch and am able to access it at http://localhost:9200/. I have installed Logstash and created logstash-xml.conf to ingest the above XML file. The configuration in logstash-xml.conf is:

input {
  file {
    path => "D:
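The excerpt truncates mid-path, but one detail that commonly trips up Windows users of the file input is worth noting here. The full path below is a placeholder, not the asker's actual value; the point is the slash direction:

```
input {
  file {
    # On Windows, the Logstash file input expects forward slashes;
    # backslashes are treated as glob escape characters and typically
    # cause the pattern to match no files at all.
    path => "D:/data/example.xml"
    start_position => "beginning"
    sincedb_path => "NUL"   # Windows counterpart of /dev/null
  }
}
```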

ELK平台搭建

Submitted by 耗尽温柔 on 2020-12-12 17:32:29
Most tutorials I've seen build this on Linux, so I'll set it up on Windows instead. First, download E, L, and K; all three packages (Elasticsearch + Logstash + Kibana) are available at https://www.elastic.co/downloads

1. Install Logstash (version 6.3.1 here). After unpacking, go into the bin folder, create a new configuration file logstash.conf, and enter the following:

input {
  kafka {
    bootstrap_servers => ["192.168.1.1:9092"]
    client_id => "test"
    group_id => "test"
    topics => ["test"]
    auto_offset_reset => "latest"
    codec => json { charset => "UTF-8" }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.1:9200"]
    codec => json { charset => "UTF-8" }
    index => "%{source}"
  }
}

Here stdin would mean reading from the console (other inputs can be used as well); elasticsearch means outputting to Elasticsearch, and index maps the index value passed in via the JSON onto the ES index. Note that if the payload contains Chinese characters

Send spring boot logs directly to logstash with no file

Submitted by 女生的网名这么多〃 on 2020-12-08 06:52:09
Question: So, I'm building a full cloud solution using Kubernetes and Spring Boot. My Spring Boot application is deployed to a container and logs directly to the console. As containers are ephemeral, I'd like to also send the logs to a remote Logstash server, so that they can be processed and forwarded to Elastic. Normally I would install a Filebeat on the server hosting my application, and I could, but isn't there any built-in method that lets me avoid writing my log to a file before sending it? Currently I'm
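A common file-less approach to this question is the logstash-logback-encoder library, which ships a Logback appender that sends JSON events straight to a Logstash TCP input over the network. The sketch below is illustrative, not necessarily what the asker chose; the destination host and port are placeholders:

```
<!-- logback-spring.xml; requires the net.logstash.logback:logstash-logback-encoder dependency -->
<configuration>
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- hostname and port of the Logstash tcp input are assumptions -->
    <destination>logstash.example.com:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>
```

On the Logstash side this pairs with a matching input such as `input { tcp { port => 5000 codec => json_lines } }`, so no file ever touches the container's filesystem.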

Logstash-安装logstash-filter-multiline

Submitted by 南笙酒味 on 2020-12-06 03:02:27
When ELK/Logstash ships logs, multi-line log entries occur; naive shipping stores each line in ES as a separate document, which is ugly and hard to read. logstash-filter-multiline solves this problem.

GitHub: https://github.com/logstash-plugins/logstash-filter-multiline
Other plugins: https://github.com/logstash-plugins
Official site: https://www.elastic.co/cn/products/logstash

Let's demonstrate the problem. A typical log entry looks like this:

2018-08-31 15:04:41.375 [http-nio-18081-exec-1] ERROR c.h.h.control.**-custom msg
java.lang.ArithmeticException: / by zero
    at com.hikvision.hikserviceassign.control.ServiceMonitorManageController.reAssign(ServiceMonitorManageController.java:170)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect
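A minimal filter configuration that folds a stack trace back onto its leading timestamped line can be sketched as follows. This is an illustration rather than the article's own config; the timestamp regex is an assumption chosen to match the sample log above:

```
filter {
  multiline {
    # any line that does NOT start with "yyyy-MM-dd HH:mm:ss.SSS"
    # is treated as a continuation of the previous event
    pattern => "^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}"
    negate => true
    what => "previous"
  }
}
```

With this in place, the ERROR line and its entire stack trace arrive in Elasticsearch as a single document instead of one document per line.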

Generating filebeat custom fields

Submitted by 女生的网名这么多〃 on 2020-12-05 08:17:49
Question: I have an Elasticsearch cluster (ELK) and some nodes sending logs to Logstash using Filebeat. All the servers in my environment run CentOS 6.5. The filebeat.yml file on each server is enforced by a Puppet module (both my production and test servers get the same configuration). I want each document to have a field telling whether it came from a production or a test server. I wanted to generate a dynamic custom field in every document which indicates the environment (production/test) using
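One common way to achieve this is to template a custom field into filebeat.yml and let Puppet substitute the environment per host. The fragment below is a sketch under assumptions: the `@app_environment` ERB variable, the log path, and the field name are all placeholders, not the asker's actual manifest, and it uses the modern `filebeat.inputs` syntax (older Filebeat versions use `filebeat.prospectors`):

```
# filebeat.yml fragment, rendered from a Puppet ERB template
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log
    # `fields` attaches static key/value pairs to every event shipped
    # from this input; fields_under_root lifts them to the top level
    # of the document instead of nesting them under "fields".
    fields:
      environment: "<%= @app_environment %>"   # e.g. "production" or "test"
    fields_under_root: true
```

Documents indexed from this node then carry an `environment` field that can be filtered on directly in Kibana.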