logstash

Changing the default analyzer in ElasticSearch or LogStash

Submitted by 房东的猫 on 2019-12-23 09:15:02
Question: I've got data coming in from Logstash that's being analyzed too aggressively. Essentially, the field "OS X 10.8" is broken into "OS", "X", and "10.8". I know I could just change the mapping and re-index the existing data, but how do I change the default analyzer (either in Elasticsearch or Logstash) to avoid this problem in future data? Concrete solution: I created a mapping for the type before I sent data to the new cluster for the first time. Solution from IRC: Create an
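The "concrete solution" above, creating the mapping up front, was typically done in that era of Elasticsearch (pre-5.x) with an index template whose dynamic template maps string fields as not_analyzed. A minimal sketch; the index pattern and template-rule name below are illustrative, not from the post:

```json
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "strings_not_analyzed": {
            "match_mapping_type": "string",
            "mapping": { "type": "string", "index": "not_analyzed" }
          }
        }
      ]
    }
  }
}
```

PUT this body to _template/<some-name> before indexing any data; fields like "OS X 10.8" then stay a single token. On Elasticsearch 5.x and later the equivalent is mapping the field as type keyword.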

Logstash Web UI doesn't start

Submitted by 混江龙づ霸主 on 2019-12-23 08:54:40
Question: I'm getting exceptions when I try to start Logstash with the web front-end, despite following all the steps in the official tutorial. First I tried logstash-1.1.10-flatjar.jar, which didn't even start. Then I found this issue, so I downloaded logstash-1.1.11.dev-flatjar.jar as advised. Now it doesn't give any errors when I initialize it, but when I point my browser at myserver:9292 I see errors on both the console and the web UI like these: Errno::ENOENT: No such file or directory - file:/home
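For the flat-jar releases of that era, the agent and the web UI were started from a single java invocation, with the web options after a bare "--" separator. A sketch of that historical command line, from memory of the 1.1.x CLI (the jar and config file names are taken from the question; the port flag is illustrative):

```sh
java -jar logstash-1.1.11.dev-flatjar.jar agent -f logstash.conf -- web --port 9292
```

If the web UI raises Errno::ENOENT for paths inside the jar, it is worth checking that the jar was downloaded completely and that it is started from a directory the Logstash user can read.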

Data loss when syncing MySQL data to Elasticsearch through Logstash

Submitted by 六眼飞鱼酱① on 2019-12-23 08:28:23
QA reported that a certain product could not be found by our query even though the row existed in the database. Product data is served from ES, and Logstash syncs it there from MySQL. Checking ES confirmed the document was indeed missing. After rebuilding the index and running a full resync, the document count in ES was still lower than the row count in the MySQL source. Cause: the sync SQL was flawed. The query specified no ORDER BY and relied on the default scan order, and the tables use the InnoDB engine, so when rows are deleted or updated that order can change between paged fetches, causing some rows to be skipped. After adding an explicit ORDER BY on the primary key, the sync worked correctly and the document count in ES matched MySQL. Source: CSDN. Author: 【随风飘流】. Link: https://blog.csdn.net/LG772EF/article/details/103635898
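The failure mode above can be reproduced in a few lines: paging through a source whose scan order changes between pages silently drops rows, while paging in primary-key order covers everything. A self-contained simulation (the row count and page size are arbitrary):

```python
# Simulate a paged sync from a table whose default scan order changes
# between page fetches (as can happen with InnoDB under concurrent writes).
import random

rows = list(range(100))   # primary keys 0..99
page_size = 10

def fetch_page(order, offset):
    """One LIMIT/OFFSET page over the current scan order."""
    return order[offset:offset + page_size]

# Unstable order: the scan order is reshuffled between pages.
random.seed(42)
seen_unstable = set()
for offset in range(0, len(rows), page_size):
    order = rows[:]
    random.shuffle(order)   # order changed by deletes/updates mid-sync
    seen_unstable.update(fetch_page(order, offset))

# Stable order: ORDER BY primary key.
seen_stable = set()
for offset in range(0, len(rows), page_size):
    seen_stable.update(fetch_page(sorted(rows), offset))

print(len(seen_stable), len(seen_unstable))
```

The stable scan always sees all 100 rows; the reshuffled scan misses a substantial fraction of them, which is exactly the ES-versus-MySQL count mismatch described above.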

How can I parse custom Metricbeat dictionary for Kibana?

Submitted by 元气小坏坏 on 2019-12-23 04:47:07
Question: I have Logstash set up sending to Kibana, and it tags each log file with two custom fields, Cluster and Node. I would like to add the same two fields to my Metricbeat configuration using the "fields" option, but when I do this they come through as a dictionary in Kibana. Here is the Metricbeat config file I'm using: metricbeat.modules: - module: system metricsets: # CPU stats - cpu # System Load stats - load # Per CPU core stats #- core # IO stats #- diskio # Per filesystem stats -
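In the Beats family, anything declared under fields: is nested beneath a fields object by default, which is why it appears as a dictionary in Kibana; setting fields_under_root: true promotes the keys to the top level of the event so they line up with the Logstash-added fields. A minimal sketch (the field names Cluster and Node come from the question; the values are illustrative):

```yaml
metricbeat.modules:
  - module: system
    metricsets: [cpu, load]

fields:
  Cluster: my-cluster   # illustrative value
  Node: node-01         # illustrative value
fields_under_root: true # put Cluster/Node at the event root, not under "fields"
```

With fields_under_root enabled, the events carry top-level Cluster and Node keys instead of fields.Cluster and fields.Node.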

Building an ELK cluster

Submitted by 可紊 on 2019-12-23 04:25:01
Environment preparation. Base environment:
OS | Deployed applications | Version | IP address | Hostname
CentOS 7.4 | Elasticsearch/Logstash | 6.4.3 | 192.168.1.1 | elk1
CentOS 7.4 | Elasticsearch/Logstash/Redis | 6.4.3 | 192.168.1.2 | elk2
CentOS 7.4 | Elasticsearch/Kibana | 6.4.3 | 192.168.1.3 | elk3
Base configuration. Install the basic packages and configure hosts:
yum -y install vim net-tools epel-release wget
cat /etc/hosts
192.168.1.1 elk1
192.168.1.2 elk2
192.168.1.3 elk3
Adjust the file descriptor limits and kernel parameters:
vim /etc/sysctl.conf
vm.max_map_count = 655360
sysctl -p /etc/sysctl.conf
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
No reboot is needed; log out and reconnect for the new limits to take effect, and verify with ulimit -n. Install and configure the Java environment: yum -y
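The excerpt stops before the Elasticsearch configuration itself. For a 6.4.x three-node cluster like the one above, the per-node elasticsearch.yml would typically look something like the following sketch; only the IPs and hostnames come from the article, while the cluster name and quorum choice are assumptions:

```yaml
# /etc/elasticsearch/elasticsearch.yml on elk1 (adjust node.name and network.host per node)
cluster.name: elk-cluster          # illustrative name; must match on all three nodes
node.name: elk1
network.host: 192.168.1.1
discovery.zen.ping.unicast.hosts: ["elk1", "elk2", "elk3"]
discovery.zen.minimum_master_nodes: 2   # master quorum for a 3-node cluster
```

The vm.max_map_count and nofile settings above are prerequisites for Elasticsearch's bootstrap checks; without them the nodes refuse to bind to a non-loopback address.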

Using an id of a table for sql_last_value in logstash?

Submitted by 无人久伴 on 2019-12-23 03:47:08
Question: I have the following MySQL statement in the jdbc input plugin of my Logstash config: statement => "SELECT * from TEST where id > :sql_last_value" My table doesn't have any date or datetime field, so I'm trying to update the index by checking minute by minute, using a scheduler, whether any new rows have been added to the table. I should only pick up the new records, rather than re-indexing value changes in existing records. So to do this I'm having this kind of a
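To make :sql_last_value track a numeric id instead of the last run time, the jdbc input needs use_column_value together with tracking_column. A minimal sketch; the statement comes from the question, while the connection settings are illustrative:

```conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"   # illustrative
    jdbc_user => "user"                                            # illustrative
    jdbc_password => "secret"                                      # illustrative
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * from TEST where id > :sql_last_value ORDER BY id"
    use_column_value => true            # track a column value, not the run timestamp
    tracking_column => "id"
    tracking_column_type => "numeric"
    schedule => "* * * * *"             # poll every minute
  }
}
```

With this setup, :sql_last_value is the highest id seen so far (persisted in last_run_metadata_path between runs), so only newly inserted rows are fetched; updates to existing rows are deliberately not picked up, matching the requirement in the question.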

Issue with conditionals in logstash with fields from Kafka ----> FileBeat prospectors

Submitted by 风格不统一 on 2019-12-23 02:38:08
Question: I have the following scenario: FileBeat ----> Kafka -----> Logstash -----> Elastic ----> Kibana. In Filebeat I have 2 prospectors in the YML file, and I add some fields to identify the log data. But the issue is that in Logstash I haven't been able to validate these fields. The configuration files are: 1. filebeat.yml filebeat.prospectors: - input_type: log paths: - /opt/jboss/server.log* tags: ["log_server"] fields: environment: integracion log_type: log_server document_type: log_server fields
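Since Filebeat nests custom fields under a fields object (unless fields_under_root is set), the Logstash conditional has to use the nested path, and the Kafka input must decode the Filebeat event as JSON before those fields are addressable at all. A sketch assuming that setup (the broker, topic, and filter action are illustrative; the field names come from the question):

```conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # illustrative
    topics => ["logs"]                      # illustrative
    codec => json     # decode the Filebeat event; without this it is one opaque string
  }
}
filter {
  if [fields][log_type] == "log_server" {   # nested path, not [log_type]
    mutate { add_tag => ["matched_log_server"] }
  }
}
```

A conditional written as if [log_type] == "log_server" silently never matches, which is the usual cause of the symptom described.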

How to add uuid to log4j for logging into file?

Submitted by 我与影子孤独终老i on 2019-12-23 02:22:22
Question: I have this log4j2.xml file: <?xml version="1.0" encoding="UTF-8"?> <Configuration xmlns="http://logging.apache.org/log4j/2.0/config"> <Appenders> <File name="FILE" fileName="logfile.log" append="true"> <PatternLayout pattern="%p | [%t] %l | message : %m%n"/> </File> <Console name="STDOUT" target="SYSTEM_OUT"> <PatternLayout pattern="%p | [%t] %l | message : %m%n"/> </Console> </Appenders> </Configuration> And my goal is to add a unique id (a UUID) in the RestEndpoint, but I don't know how to
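One common approach (the post is truncated, so this is not necessarily the asker's eventual solution) is Log4j 2's ThreadContext map: at the start of each request in the endpoint, call ThreadContext.put("uuid", UUID.randomUUID().toString()), clear it when the request finishes, and reference the key in the layout with %X. A sketch of the adjusted PatternLayout, with "uuid" as an assumed key name:

```xml
<PatternLayout pattern="%p | [%t] %l | uuid : %X{uuid} | message : %m%n"/>
```

Every log line emitted on that thread then carries the request's UUID, which also gives Logstash a stable field to group the request's log lines by.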

logstash cannot find log4j2.properties file

Submitted by 断了今生、忘了曾经 on 2019-12-23 01:11:29
Question: I have been trying to set up a proof-of-concept elasticsearch/kibana/logstash environment, but it is not working at the moment. The LOGSTASH_HOME is: c:\_work\issues\log4j_socketappender\logstash-5.0.1\ In the console log of logstash I found the following line: Could not find log4j2 configuration at path /_work/issues/log4j_socketappender/logstash-5.0.1/config/log4j2.properties. Using default config which logs to console You can see logstash is trying to look for log4j2.properties in the right
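Note that the reported path has lost the c: drive prefix, which suggests the settings directory is not being resolved correctly on Windows. Logstash 5.x accepts an explicit settings directory via the --path.settings flag, so one thing worth trying (a sketch under that assumption, not a confirmed fix for this post; the pipeline file name is illustrative) is:

```
bin\logstash.bat --path.settings c:\_work\issues\log4j_socketappender\logstash-5.0.1\config -f logstash.conf
```

With the settings path given explicitly, Logstash should pick up config\log4j2.properties instead of falling back to the console-only default.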
