metrics

Spring Boot Actuator/Micrometer Metrics Disable Some

一世执手 submitted on 2019-12-06 02:02:07
Is there a way to turn off some of the metric values returned by Actuator/Micrometer? Looking at them now I'm seeing around 1000, and I would like to whittle them down to a select few, say 100, to actually be sent to our registry. Meter filters can help in three ways, as discussed in the Micrometer Slack channel: disabling metrics, combining dimensions, and capping high-cardinality tags. Micrometer comes with the first type of meter filter built in. It also supports hierarchical enabling/disabling, similar to how logging works: if you have meters like my.request.total and my.request.latency, you can disable both by targeting the my.request prefix.
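A minimal sketch of the built-in disabling filter registered as a Spring bean (the my.request prefix and the configuration class name are illustrative, not from the question):

import io.micrometer.core.instrument.config.MeterFilter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MetricsFilterConfig {

    // Deny every meter whose name starts with "my.request"; MeterFilter beans
    // are applied to the auto-configured MeterRegistry by Spring Boot.
    @Bean
    public MeterFilter denyMyRequestMetrics() {
        return MeterFilter.denyNameStartsWith("my.request");
    }
}

The same effect is available declaratively with the property management.metrics.enable.my.request=false.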

Cannot include Prometheus metrics in spring boot 2 (version 2.0.0.M7)

心不动则不痛 submitted on 2019-12-05 18:51:10
Cannot include Prometheus metrics in a Spring Boot 2 (version 2.0.0.M7) project. Following the Micrometer docs I added the spring-boot-starter-actuator dependency and, in application.yaml, added management.endpoints.web.expose: prometheus, but when calling /actuator/prometheus I get
{
  "timestamp": 1518159066052,
  "path": "/actuator/prometheus",
  "message": "Response status 404 with reason \"No matching handler\"",
  "status": 404,
  "error": "Not Found"
}
Can you please tell me why I'm not getting Prometheus metrics? Did you add micrometer-registry-prometheus to your dependencies? Micrometer has a pluggable architecture
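For reference, a minimal sketch of the missing dependency (Maven coordinates as published by Micrometer; the version is left to the Spring Boot BOM):

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

With this registry on the classpath, the actuator can wire up the /actuator/prometheus endpoint that the exposure setting refers to.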

Real-time monitoring in Spring Boot: the spring-boot-starter-actuator package

时光怂恿深爱的人放手 submitted on 2019-12-05 17:14:38
There are many ways to monitor a Java project in real time; this article focuses on monitoring within the Spring Boot framework. Spring Boot ships with Actuator monitoring built in; simply add the jar in your pom, as follows.
1. Add the dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
A lot changed after version 2.0; I am using 2.0.2 here.
2. After starting the project you can access http://localhost:8081/webproject/actuator/health and http://localhost:8081/webproject/actuator/info. If you want to personalize the URL, add management.endpoints.web.base-path=/jiankong to the application.properties configuration file, and then you can access http://localhost:8081/webproject/jiankong/health.
3. Actuator provides many APIs (called endpoints); by default only the health and info endpoints are exposed. To expose all of them, extend the management.endpoints.web configuration, as sketched below.
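A minimal application.properties sketch of the two settings discussed above (the /jiankong base path comes from the article; exposing all endpoints uses the standard Spring Boot 2 exposure property):

management.endpoints.web.base-path=/jiankong
management.endpoints.web.exposure.include=*

The exposure.include property accepts either * or a comma-separated list such as health,info,metrics.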

Prometheus Auto-Discovery

有些话、适合烂在心里 submitted on 2019-12-05 17:03:34
Contents: Introduction; Environment; Static configuration; Reloading the configuration file; File-based discovery configuration; Reloading the configuration file; Adding a test host; DNS A-record discovery; Modifying the configuration file; Reloading the configuration file; DNS SRV-record auto-discovery; Modifying the configuration file; Reloading the configuration file; Testing dynamically added records.
Introduction: In practice you frequently need to add or remove monitored hosts, and editing the prometheus.yml configuration file every single time is far too cumbersome. That is where auto-discovery comes in, and static files are only one of the many discovery mechanisms Prometheus supports: public-cloud based, private-cloud based, file-based discovery, DNS-based discovery (split into SRV-record and A-record resolution), and so on. This article covers static-file and DNS based auto-discovery.
Environment: one additional node_exporter host at 10.0.20.12; DNS is served by bind9 (see the bind9 documentation if needed). All the demonstrations below are examples; adapt them to your own situation.
Static configuration: a brief walkthrough of static configuration edited directly in the main prometheus.yml file. The modified configuration:
[root@es01 config]# cat prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
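A minimal sketch of the file-based and DNS-based discovery blocks the article goes on to cover (job names, the file path, and the domain are illustrative; the keys are standard Prometheus scrape_configs options):

scrape_configs:
  - job_name: 'node-file-sd'
    file_sd_configs:
      - files:
          - /usr/local/prometheus/config/targets/*.json   # target lists maintained outside prometheus.yml
        refresh_interval: 30s
  - job_name: 'node-dns-sd'
    dns_sd_configs:
      - names: ['_node._tcp.example.com']   # SRV lookup; for A records use type: A plus an explicit port
        type: SRV

After editing, the configuration can be reloaded by sending SIGHUP to the prometheus process, or via the /-/reload endpoint when --web.enable-lifecycle is enabled.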

Hystrix source code: metrics

人走茶凉 submitted on 2019-12-05 15:27:31
Hystrix uses the RxJava message/stream model to obtain and subscribe to a command's metrics. The metrics structure consists of the following parts: Hystrix metrics are split into three kinds, namely command-execution metrics, thread-pool metrics, and collapser (request-batching) metrics, which respectively record what happens while a command executes, while its thread pool executes, and while collapsed commands execute. Inside each metrics component there are two kinds of streams: when an event occurs, a message is first published to an event-stream component; the various analysis-stream components subscribe to that event stream and aggregate the data they receive. Other components then obtain the aggregated results by subscribing to the analysis streams.
Singleton pattern: all three metrics types use the singleton pattern. Taking HystrixCommandMetrics as an example, the key is the command key:
// String is HystrixCommandKey.name() (we can't use HystrixCommandKey directly as we can't guarantee it implements hashcode/equals correctly)
private static final ConcurrentHashMap<String, HystrixCommandMetrics> metrics = new ConcurrentHashMap<String, HystrixCommandMetrics>();
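A self-contained sketch of the per-key singleton idiom described above (class and method names are illustrative; Hystrix's actual getInstance also wires in properties and event streams):

import java.util.concurrent.ConcurrentHashMap;

// One metrics instance per command key, cached in a ConcurrentHashMap keyed
// by the key's name so that hashCode/equals behave predictably.
public final class CommandMetricsRegistry {

    private static final ConcurrentHashMap<String, CommandMetrics> METRICS =
            new ConcurrentHashMap<>();

    // computeIfAbsent guarantees a single winner per key under concurrency.
    public static CommandMetrics getInstance(String commandKeyName) {
        return METRICS.computeIfAbsent(commandKeyName, CommandMetrics::new);
    }

    private CommandMetricsRegistry() {
    }

    public static final class CommandMetrics {
        private final String key;

        CommandMetrics(String key) {
            this.key = key;
        }

        public String key() {
            return key;
        }
    }
}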

Springboot with Spring-cloud-aws and cloudwatch metrics

孤街浪徒 submitted on 2019-12-05 08:03:17
I would like to start using metrics in my Spring Boot app, and I would also like to publish them to Amazon CloudWatch. I know that with Spring Boot we can activate spring-actuator, which provides in-memory metrics and publishes them at the /metrics endpoint. I stumbled across Spring Cloud, which seems to have a library for periodically publishing these metrics to CloudWatch, but I have no clue how to set it up; there are absolutely zero examples of how to use it. Could anyone explain the steps for enabling metrics to be sent to CloudWatch? You can check my article here: https://dkublik.github.io
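A hedged sketch of one possible setup, assuming Micrometer's micrometer-registry-cloudwatch together with Spring Cloud AWS on the classpath (property names follow Spring Boot's metrics-export convention; the namespace and region values are illustrative):

# export Micrometer metrics to CloudWatch under a custom namespace
management.metrics.export.cloudwatch.namespace=my-service
management.metrics.export.cloudwatch.batch-size=20
# Spring Cloud AWS region/stack settings
cloud.aws.region.static=us-east-1
cloud.aws.stack.auto=false

With these in application.properties, the auto-configured CloudWatch registry pushes the same meters that Actuator exposes locally.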

yammer @Timed leaving values at zero

心已入冬 submitted on 2019-12-05 05:45:45
This is a follow-up to my struggle using the Yammer timing annotations as described here. My Spring context file simply has: <metrics:annotation-driven /> and I have the following class:
import com.yammer.metrics.annotation.ExceptionMetered;
import com.yammer.metrics.annotation.Metered;
import com.yammer.metrics.annotation.Timed;
...
@Component
public class GetSessionServlet extends HttpServlet {
    private final static Logger log = LoggerFactory.getLogger(GetSessionServlet.class);

    @Override
    public void init(final ServletConfig config) throws ServletException {
        super.init(config);

Spring Boot Actuator 'http.server.requests' metric MAX time

萝らか妹 submitted on 2019-12-05 05:35:10
I have a Spring Boot application and I am using Spring Boot Actuator and Micrometer in order to track metrics about my application. I am specifically concerned about the 'http.server.requests' metric and the MAX statistic:
{
  "name": "http.server.requests",
  "measurements": [
    { "statistic": "COUNT", "value": 2 },
    { "statistic": "TOTAL_TIME", "value": 0.079653001 },
    { "statistic": "MAX", "value": 0.032696019 }
  ],
  "availableTags": [
    { "tag": "exception", "values": [ "None" ] },
    { "tag": "method", "values": [ "GET" ] },
    { "tag": "status", "values": [ "200", "400" ] }
  ]
}
I suppose the MAX statistic

How can I visualize changes in a large code base quality?

廉价感情. submitted on 2019-12-05 04:59:47
One of the things I've been thinking about a lot, off and on, is how we can use metrics of some kind to measure change: are we going backwards or not? This is in the context of a large legacy code base which we are improving. Most of the code is C++ with a C heritage; some new functions and the GUI are written in C#. To start with, we could at least check whether the simple complexity level is changing over time in the code. The difficulty is in finding a representation: we could maybe do a 3D surface where a 2D map represents the code and a heat map of color represents complexity

Apache Flink Advanced (Part 8): Metrics Principles and Practice in Detail

99封情书 submitted on 2019-12-05 04:42:14
By Liu Biao; compiled by Mao He. This article, shared by Apache Flink Contributor Liu Biao, answers two big questions in detail, namely what Metrics are and how to use them, and then walks through Metrics-based monitoring in practice.
What are Metrics? The Metrics that Flink provides collect indicators inside Flink so that developers can better understand the state of a job or the cluster. Once a cluster is running, it is hard to see what is actually going on internally: is it running slow or fast, is anything abnormal? Developers cannot watch all Task logs in real time, for instance when a job is very large or there are many jobs, so what do you do then? This is where Metrics help developers understand a job's current state.
Metric Types. The Metrics types are as follows. First, the commonly used Counter; anyone who has written a MapReduce job will be familiar with Counters, and the meaning is exactly the same: a counter that keeps accumulating, adding up record counts or byte counts. Second, Gauge, the simplest metric, which reflects a single value; for example, to see how much Java heap memory is currently used, you can expose a Gauge whose current value is the amount of heap in use. Third, Meter, which measures throughput, the number of "events" per unit of time; it is essentially a rate, the event count divided by the elapsed time. Fourth, Histogram
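A minimal sketch of registering the Counter type described above from inside a Flink user function (the metric name myCounter and the class name are illustrative):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Registers a counter in open() and bumps it once per record processed.
public class CountingMapper extends RichMapFunction<String, String> {

    private transient Counter counter;

    @Override
    public void open(Configuration parameters) {
        this.counter = getRuntimeContext()
                .getMetricGroup()
                .counter("myCounter");
    }

    @Override
    public String map(String value) {
        counter.inc();
        return value;
    }
}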