kibana

Logstash output from JSON parser not being sent to Elasticsearch

一世执手 submitted on 2020-01-05 09:07:56

Question: This is a follow-up to another one of my questions: JSON parser in logstash ignoring data? But this time I feel the problem is clearer than last time and might be easier for someone to answer. I'm using the JSON parser like this: json #Parse all the JSON { source => "MFD_JSON" target => "PARSED" add_field => { "%{FAMILY_ID}" => "%{[PARSED][platform][family_id][1]}_%{[PARSED][platform][family_id][0]}" } } The part of the output for one of the logs in logstash.stdout looks like
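For reference, a minimal sketch of that filter block with the punctuation normalized; the field names are taken from the question and everything around them is assumed:

filter {
  json {
    source => "MFD_JSON"
    target => "PARSED"
    # add_field is only applied if the parse succeeds, and the key is a
    # sprintf template, so %{FAMILY_ID} must already exist on the event.
    add_field => {
      "%{FAMILY_ID}" => "%{[PARSED][platform][family_id][1]}_%{[PARSED][platform][family_id][0]}"
    }
  }
}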

Vega-lite heat map text properties

◇◆丶佛笑我妖孽 submitted on 2020-01-05 07:08:24

Question: Good time of day! Full text: https://github.com/vega/vega-lite/issues/5697 When building the data in a block, I would like to change the font size and position of the text in the block. I used the documentation (https://vega.github.io/vega-lite/docs/title.html), but it does not work. block: { "mark": "text" "encoding": { "text": {"field": "z", "type": "quantitative"} "color": {"value": "black"} "fontSize": 40 } Changing the position will allow for the addition of a second text. Full code: { "$schema
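A hedged sketch of how the font size and offset are usually set on a Vega-Lite text mark, as mark properties rather than inside "encoding"; the field names follow the question and the values are only illustrative:

{
  "mark": {"type": "text", "fontSize": 40, "dy": -5, "baseline": "middle"},
  "encoding": {
    "text": {"field": "z", "type": "quantitative"},
    "color": {"value": "black"}
  }
}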

Multiple config files causing duplicate messages

两盒软妹~` submitted on 2020-01-05 06:36:07

Question: I have a Logstash machine running in AWS. In Logstash I have 3 config files, each with 1 input defined in it. These inputs read logs from the following sources: from S3, from an http input, and from Filebeat. The problem is that I am getting duplicate messages in Kibana. So for 1 message generated by Filebeat I am seeing 3 messages in Kibana. I tried removing 1 config file and the count went down to 2. So I am pretty sure this is caused by these config files. What is confusing me is why
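One common explanation, sketched below with hypothetical tag and index names: Logstash concatenates every file in the config directory into a single pipeline, so events from all three inputs pass through every output unless the outputs are guarded by conditionals (or the inputs are split into separate pipelines in pipelines.yml).

# in the input file (e.g. the Filebeat one)
input { beats { port => 5044 tags => ["beats"] } }

# in the output file
output {
  if "beats" in [tags] {
    elasticsearch { hosts => ["localhost:9200"] index => "filebeat-%{+YYYY.MM.dd}" }
  }
}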

Unable to connect Kibana to Elasticsearch

纵饮孤独 submitted on 2020-01-05 05:22:45

Question: I've installed ES 7.5 and Kibana 7.5 on RHEL7, but after starting Kibana and checking the UI, I'm seeing the error "Kibana server is not ready yet." Checking the Kibana log, I see that it is not properly connecting to ES. Any help is appreciated! Here is the output of journalctl --unit kibana: Dec 11 10:03:05 mcjca033031 systemd[1]: kibana.service holdoff time over, scheduling restart. Dec 11 10:03:05 mcjca033031 systemd[1]: Started Kibana. Dec 11 10:03:05 mcjca033031 systemd[1]: Starting
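For context, "Kibana server is not ready yet" usually means Kibana cannot reach Elasticsearch at the URL it is configured with. A minimal sketch of the relevant /etc/kibana/kibana.yml settings, assuming a single local node on the default port:

server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]

A quick sanity check that Elasticsearch itself is answering: curl http://localhost:9200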

Kibana doesn't show any results in “Discover” tab

戏子无情 submitted on 2020-01-04 05:24:07

Question: I have set up Elasticsearch (version 1.7.3) and Kibana (version 4.1.2) for indexing our application's Elmah XML error files. I am using .NET to parse the XML files and the NEST Elasticsearch client to insert the documents into Elasticsearch. The issue is that Kibana doesn't display any data in the "Discover" tab. When I run the curl -XGET localhost:9200/.kibana/index-pattern/eol? command, I get the following response: {"_index":".kibana","_type":"index-pattern","_id":"eol","_version":2,"found":true,"
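Two quick checks that often narrow this down, sketched with curl; "eol" is assumed here to be the index (pattern) name from the question. Confirm that documents actually exist in the index, and remember that Discover only shows documents whose time field falls inside the time range selected in the top-right corner of Kibana.

curl -XGET 'localhost:9200/eol/_count?pretty'
curl -XGET 'localhost:9200/eol/_search?pretty&size=1'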

[Docker Series] Installing Kibana 7.4.2 with Docker

試著忘記壹切 submitted on 2020-01-03 21:22:29

1. Pull the Kibana Docker image

[root@hadoop-keda config]# docker pull kibana:7.4.2
7.4.2: Pulling from library/kibana

2. Start a basic container first

docker run -tid \
  --net docker-hadoop-net \
  --ip 172.170.0.17 \
  --restart=always \
  --hostname=hadoop_kibana \
  --name=hadoop-kibana \
  -p 15601:5601 \
  -v /usr/docker/software/kibana/config/:/usr/share/kibana/config/ \
  -v /usr/docker/software/kibana/data/:/usr/share/kibana/data/ \
  -v /usr/docker/software/kibana/plugins/:/usr/share/kibana/plugins/ \
  -v /etc/localtime:/etc/localtime \
  -e TZ='Asia/Shanghai' \
  -e LANG="en_US.UTF-8" \
  kibana:7.4.2

3. Copy files from the container to the host [root
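The truncated step 3 presumably copies the container's default files out to the host directories mounted above; a sketch with docker cp, using the same container name and paths as the post:

docker cp hadoop-kibana:/usr/share/kibana/config/ /usr/docker/software/kibana/
docker cp hadoop-kibana:/usr/share/kibana/data/ /usr/docker/software/kibana/
docker cp hadoop-kibana:/usr/share/kibana/plugins/ /usr/docker/software/kibana/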

Is it Possible to Use Histogram Facet or Its Curl Response in Kibana

无人久伴 submitted on 2020-01-03 18:57:33

Question: Is it possible to use a manually created histogram facet (or the results of its curl request) like this one in a Kibana dashboard: { "query" : { "match_all" : {} }, "facets" : { "histo1" : { "histogram" : { "key_script" : "doc['date'].date.minuteOfHour * factor1", "value_script" : "doc['num1'].value + factor2", "params" : { "factor1" : 2, "factor2" : 3 } } } } } Thanks Answer 1: It looks like it will be supported in Kibana 4, but there doesn't seem to be much more information out there than that. For
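As background, facets were superseded by aggregations, which is what Kibana 4 visualizations are built on. A rough, unverified aggregation sketch of the same idea (a scripted key bucketed by a histogram, with a scripted metric for the value), assuming ES 1.x with dynamic scripting enabled and the two factors inlined:

{
  "size": 0,
  "aggs": {
    "histo1": {
      "histogram": { "script": "doc['date'].date.minuteOfHour * 2", "interval": 1 },
      "aggs": {
        "value": { "avg": { "script": "doc['num1'].value + 3" } }
      }
    }
  }
}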

Not seeing any Fields for a non-Count Y-Axis aggregation

空扰寡人 submitted on 2020-01-03 14:21:07

Question: I'm trying to graph average response time from HTTP logs. When I go to Visualize and try either a bar or line graph, any time I select a different aggregation type besides Count (i.e. Average, Sum, Max, etc.), I never get any values in the Field drop-down. I believe that the X-Axis should/could just be a Date Histogram. My query looks like this: "host:'hostname' AND file:'access.log'", which generates a ton of results as a Count, but again, I can't seem to figure out how to graph out that
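A likely-relevant detail, offered as an assumption: the Field drop-down for Average/Sum/Max only lists fields mapped as numbers, so a response time indexed as a string will never appear there. A sketch of converting it in Logstash, with a hypothetical field name (the data would then need to be reindexed and the index pattern's field list refreshed):

filter {
  mutate {
    # "response_time" is a placeholder for the parsed response-time field
    convert => { "response_time" => "float" }
  }
}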

Delete data older than 10 days in Elasticsearch

一世执手 submitted on 2020-01-03 01:28:10

Question: I am new to Elasticsearch and I want to delete documents in my Elasticsearch index which are older than 10 days. I want to keep only the last 10 days of data, so is there a way to delete the 11th day's data automatically? What I have tried: DELETE logstash-*/_query { "query": { "range": { "@timestamp": { "lte": "now-10d" } } } } The error I'm getting when running this in the Kibana Dev Tools console: { "error": "Incorrect HTTP method for uri [/logstash-*/_query?pretty] and method [DELETE], allowed: [POST]",
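For reference, a hedged sketch of the same deletion using the _delete_by_query API (sent with POST); the index pattern and the range query are taken from the question:

POST logstash-*/_delete_by_query
{
  "query": {
    "range": {
      "@timestamp": { "lte": "now-10d" }
    }
  }
}

For daily time-based indices, dropping whole indices (for example with ILM or Curator) is generally cheaper than deleting documents one by one.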