kibana

Cannot add node to cluster (elasticsearch)

淺唱寂寞╮ submitted on 2021-01-29 06:57:47
Question: I'm trying to get my cluster's health to green. According to the Elasticsearch documentation: "When you add more nodes to a cluster, it automatically allocates replica shards. When all primary and replica shards are active, the cluster state changes to green." Source: https://www.elastic.co/guide/en/elasticsearch/reference/current/add-elasticsearch-nodes.html So I created 2 Elasticsearch instances with the following configuration files: # Config File 1 cluster.name : PL node.name :
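A minimal sketch of what two local node configurations of this shape could look like, assuming both nodes run on the same machine and join the cluster PL from the question; the node names, addresses, and ports are illustrative, not taken from the question:

# elasticsearch.yml - node 1
cluster.name: PL
node.name: node-1
network.host: 127.0.0.1
http.port: 9200
transport.port: 9300
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301"]
cluster.initial_master_nodes: ["node-1"]

# elasticsearch.yml - node 2
cluster.name: PL
node.name: node-2
network.host: 127.0.0.1
http.port: 9201
transport.port: 9301
discovery.seed_hosts: ["127.0.0.1:9300", "127.0.0.1:9301"]
cluster.initial_master_nodes: ["node-1"]

Once both nodes can discover each other over their transport ports, replica shards can be allocated to the second node and the cluster health can reach green (assuming each index asks for no more replicas than there are extra nodes).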

How to send data from HTTP input to Elasticsearch using Logstash and the jdbc_streaming filter?

耗尽温柔 submitted on 2021-01-29 00:56:23
Question: I want to send data from an HTTP input to Elasticsearch using Logstash, and I want to enrich the data with the jdbc_streaming filter plugin. This is my Logstash config: input { http { id => "sensor_data_http_input" user => "sensor_data" password => "sensor_data" } } filter { jdbc_streaming { jdbc_driver_library => "E:\ElasticStack\mysql-connector-java-8.0.18\mysql-connector-java-8.0.18.jar" jdbc_driver_class => "com.mysql.jdbc.Driver" jdbc_connection_string => "jdbc:mysql://localhost:3306/sensor_metadata"
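A hedged sketch of what a complete pipeline of this kind could look like; the SQL statement, table and column names, credentials, target field, and index name are illustrative assumptions, not the asker's actual values:

input {
  http {
    id       => "sensor_data_http_input"
    user     => "sensor_data"
    password => "sensor_data"
  }
}
filter {
  jdbc_streaming {
    jdbc_driver_library    => "E:\ElasticStack\mysql-connector-java-8.0.18\mysql-connector-java-8.0.18.jar"
    jdbc_driver_class      => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/sensor_metadata"
    jdbc_user              => "user"        # assumed credentials
    jdbc_password          => "password"
    statement              => "SELECT * FROM sensors WHERE id = :id"   # assumed lookup query
    parameters             => { "id" => "sensor_id" }                  # maps an event field to :id
    target                 => "sensor_metadata"                        # enrichment lands in this field
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "sensor_data-%{+YYYY.MM.dd}"
  }
}

The jdbc_streaming filter runs the lookup statement once per event and stores the result set in the target field, which is what provides the enrichment before the event is indexed.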

How to apply background colors to Kibana tables in the same dashboard (or not)

那年仲夏 submitted on 2021-01-28 19:09:58
Question: I have been challenged to implement colorized tables in a Kibana dashboard and tried to find the best approach on the internet, but have found no clue so far. Since I would rather not reinvent the wheel and build this from scratch, I would like to hear from you what the current state of such an implementation is. By the way, I know that we can define a cell color based on its value, but it cannot be only the cell color; it must be all table rows or, at least, one full row. The challenge is to draw two simple

Kubernetes logs split in Kibana

两盒软妹~` submitted on 2021-01-28 10:32:49
Question: I have a Kubernetes cluster in Azure and used the following instructions to install Fluentd, Elasticsearch, and Kibana: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch I am able to see my pods' logs in Kibana, but when I send a log longer than 16k characters it gets split; if I send 35k characters, it is split into 3 log entries. How can I raise the limit so that one log stays in one entry? I want to be able to see all 35k characters in a single log entry. Answer 1: https://github.com/fluent-plugins-nursery
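The linked answer points at the fluent-plugins-nursery organization; the plugin usually used for this is fluent-plugin-concat, which can reassemble log lines that the container runtime split at the 16 KB boundary. A hedged sketch, assuming Docker-style partial-log metadata is present; option names should be checked against the README of the installed plugin version:

<filter kubernetes.**>
  @type concat
  key log                      # the field holding the (possibly split) log line
  use_partial_metadata true    # join chunks using the runtime's partial-log markers
  separator ""                 # concatenate without inserting anything between chunks
</filter>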

Elasticsearch (Kibana) - intersection of boolean results

微笑、不失礼 submitted on 2021-01-28 09:43:35
Question: I am facing a problem in Kibana: how do I correctly filter data? Basically, my aim is to filter PASSED or FAILED tests from the following data structure. { "_index":"qa-reporting-2020-04", "_type":"qa-reporting", "_id":"456.0", "_version":1, "_score":null, "_source":{ "TestId":"CXXX01", "TestStatus":0, "Issues":[ ], "MetaData":{ "TestName":"Test1", "LastException":null, "DurationMs":1980.5899000000002, "Platform":{ "BrowserName":"chrome", "OS":"windows", "OsVersion":"10" }, "Categories":[ "Cat1
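A hedged example of filtering that structure directly in Elasticsearch; it assumes that TestStatus encodes the pass/fail outcome (e.g. 0 for one of the two states), which the excerpt does not confirm:

GET qa-reporting-2020-04/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "TestStatus": 0 } }
      ]
    }
  }
}

In Kibana's search bar the equivalent filter would simply be TestStatus: 0, or a filter pill on that field.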

python-logstash not working

有些话、适合烂在心里 submitted on 2021-01-27 12:46:02
Question: I have an Elasticsearch cluster (ELK), and some nodes send logs to Logstash using Filebeat. Lately I added a new application server that sends logs to my Logstash using python-logstash. My Logstash input configuration looks something like this: input { beats { type => beats port => 5044 } udp { port => 5044 } } My application server sends the logs successfully to Logstash. On my Logstash machine I tried to run the following command: tcpdump -nn | grep x.x.x.x x.x.x.x is the
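For reference, a minimal python-logstash sender of the kind the question describes, assuming the default UDP handler and placeholder host/port values:

# pip install python-logstash
import logging
import logstash

logger = logging.getLogger('python-logstash-logger')
logger.setLevel(logging.INFO)
# LogstashHandler sends JSON events over UDP; the port must match the udp input in Logstash
logger.addHandler(logstash.LogstashHandler('x.x.x.x', 5044, version=1))
logger.info('test message from python-logstash')

Note that the input block above binds the beats input (TCP) and the udp input to the same port number; whether UDP datagrams actually arrive can be checked with a narrower capture such as tcpdump -nn udp port 5044.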

ELK Series (2): Using ELK from .NET Core

天涯浪子 submitted on 2021-01-26 08:45:19
Now that ELK is installed, let's use it from .NET Core. The overall idea is to combine the NLog logging component with ELK and write log data into it; other languages work the same way. Installing ELK is still somewhat involved, so we can also run it in Docker: docker run -it --rm -p 9200:9200 -p 5601:5601 --name esk nshou/elasticsearch-kibana Once this command finishes, Elasticsearch and Kibana are running locally, and if there are no errors we can open the Kibana UI directly at localhost:5601. Note that this runs several programs inside one container, which saves resources but also makes management more complex, so it is not recommended for production use. Likewise, we can reach Elasticsearch at localhost:9200, which returns data like the following: With Elasticsearch and Kibana in place we still need Logstash. Here I use a Logstash instance installed on Alibaba Cloud as an example. First, go into its directory and add a new configuration file named nlog.conf with the following content: I am using the simplest possible configuration here (honestly, I haven't fully figured out the more complex options yet...). We listen on port 8001 and send the output to Elasticsearch, whose IP and port are given below. After adding the configuration file, run the current configuration from the Logstash folder with: bin/logstash -f nlog.conf
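A hedged sketch of what the nlog.conf described above could contain (the file's actual contents are not included in this excerpt); the input type, codec, index name, and the Elasticsearch address placeholder are assumptions:

input {
  tcp {
    port  => 8001      # the port the post says Logstash listens on
    codec => json      # assumes NLog ships events as JSON lines
  }
}
output {
  elasticsearch {
    hosts => ["http://<elasticsearch-ip>:9200"]   # replace with the real IP and port
    index => "nlog-%{+YYYY.MM.dd}"
  }
}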

Is it possible for users to set their own timezone in Kibana?

99封情书 submitted on 2021-01-25 07:26:21
Question: I'm familiar with the timezone option under the Advanced Settings area of Kibana, but was wondering if anyone has found a way for individual users to override this with their own setting? For reference: https://www.elastic.co/guide/en/kibana/current/advanced-options.html. The global setting for Kibana is below and not what I want to change. Ideally I'd like each user to be able to set this to whatever they'd like. Answer 1: Just leave it as Browser; this way Kibana will use the same timezone
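The global option being referenced is Kibana's dateFormat:tz advanced setting. A hedged sketch of setting it to Browser over HTTP, as the answer suggests, assuming a Kibana version that exposes the advanced-settings endpoint and a security setup that permits it:

curl -X POST "http://localhost:5601/api/kibana/settings" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"changes": {"dateFormat:tz": "Browser"}}'

With the value left at Browser, timestamps are rendered in each user's own browser timezone, which is effectively a per-user setting.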
