statsd

Graphite: sum all stats that match a pattern?

丶灬走出姿态 submitted on 2019-12-04 03:48:01
I'm sending stats to a Graphite server via statsd. My stats are fairly fine-grained, and can be easily added by developers. I'd like to roll up all statistics matching a certain pattern (stats.timers.api.*.200.count, for example). Is that possible within Graphite? If not, are there other systems I should be looking at that can generate that type of roll-up data from statsd? Or is this the sort of thing I should do within my statsd configuration directly? If you're after a blanket sum of all the data that matches, then you can use sumSeries. An example: sumSeries(stats.timers.api.*.200.count)
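
In practice the sumSeries target is just a query parameter to Graphite's render API. A minimal sketch of building such a request, assuming a hypothetical Graphite server at localhost:8080 (adjust host and port for your install):

```python
from urllib.parse import urlencode

# Hypothetical Graphite location -- an assumption, not from the question.
GRAPHITE_HOST = "http://localhost:8080"

def render_url(pattern, frm="-1h", fmt="json"):
    """Return a render-API URL whose target sums all series matching pattern."""
    query = urlencode({
        "target": "sumSeries(%s)" % pattern,  # blanket sum of every match
        "from": frm,
        "format": fmt,
    })
    return "%s/render?%s" % (GRAPHITE_HOST, query)

url = render_url("stats.timers.api.*.200.count")
```

Fetching that URL returns one summed series instead of one series per API endpoint.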

Can writing to a UDP socket ever block?

大兔子大兔子 submitted on 2019-12-03 23:57:10
And if so, under what conditions? Or, phrased alternately, is it safe to run this code inside of twisted:

```python
import socket


class StatsdClient(AbstractStatsdClient):
    def __init__(self, host, port):
        super(StatsdClient, self).__init__()
        self.addr = (host, port)
        self.server_hostname = socket.gethostname()
        self.udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def incr(self, stat, amount=1):
        data = {"%s|c" % stat: amount}
        self._send(data)

    def _send(self, data):
        for stat, value in data.iteritems():
            self.udp_sock.sendto(
                "servers.%s.%s:%s" % (self.server_hostname, stat, value),
                self.addr,
            )
```

Yes, oddly, a UDP socket can block.
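
Since a blocking send is the concern, one defensive option (a sketch, not part of the original question) is to mark the socket non-blocking and treat a full send buffer as a dropped metric, which is usually acceptable for fire-and-forget stats:

```python
import socket

def make_nonblocking_statsd_socket():
    """Create a UDP socket that raises instead of blocking on send."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setblocking(False)  # sendto() now raises BlockingIOError rather than stalling
    return sock

def try_send(sock, payload, addr):
    """Best-effort send: drop the metric if the buffer is momentarily full."""
    try:
        return sock.sendto(payload, addr)
    except BlockingIOError:
        return 0  # metric dropped; tolerable for counters

sock = make_nonblocking_statsd_socket()
sent = try_send(sock, b"servers.host1.api.calls:1|c", ("127.0.0.1", 8125))
```

The 8125 port is statsd's conventional default; the metric name here is illustrative.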

Deleted/Empty Graphite Whisper Files Automatically Re-Generating

筅森魡賤 submitted on 2019-12-03 13:07:43
Question: I am trying to delete some old graphite test whisper metrics without any success. I can delete the metrics by removing the files. (See: How to cleanup the graphite whisper's data?) But within a few seconds of blowing away the files, they regenerate (they are empty of metrics and stay that way, since nothing is creating new metrics in those files). I've tried stopping carbon (carbon-cache.py stop) before deleting the files, but when I restart carbon (carbon-cache.py --debug start &) they just come back.
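
The regeneration described above typically happens because carbon-cache still holds buffered datapoints in memory, or a client is still sending the metric, so the file is recreated on the next flush. A minimal sketch of removing one metric subtree once carbon is fully stopped (the storage root is the Graphite default install path, an assumption):

```python
import os

# Default Graphite install location -- adjust for your deployment.
WHISPER_ROOT = "/opt/graphite/storage/whisper"

def delete_metric_tree(root, metric_prefix):
    """Remove the .wsp files (and emptied dirs) under one metric prefix.

    Run this only after carbon-cache is stopped and nothing is still
    sending these metrics; otherwise carbon's in-memory cache (or the
    client) will simply recreate the files on its next flush.
    """
    target = os.path.join(root, *metric_prefix.split("."))
    removed = []
    for dirpath, dirnames, filenames in os.walk(target, topdown=False):
        for name in filenames:
            if name.endswith(".wsp"):
                os.remove(os.path.join(dirpath, name))
                removed.append(name)
        if not os.listdir(dirpath):
            os.rmdir(dirpath)
    return removed
```

Called as `delete_metric_tree(WHISPER_ROOT, "stats.test")`, this maps the dotted metric path to the on-disk directory layout whisper uses.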

Getting accurate graphite stats_counts

心已入冬 submitted on 2019-12-03 07:03:34
Question: We have an etsy/statsd node application running that flushes stats to carbon/whisper every 10 seconds. If you send 100 increments (counts) in the first 10 seconds, graphite displays them properly, like: localhost:3000/render?from=-20min&target=stats_counts.test.count&format=json [{"target": "stats_counts.test.count", "datapoints": [ [0.0, 1372951380], [0.0, 1372951440], ... [0.0, 1372952460], [100.0, 1372952520]]}] However, 10 seconds later, this number falls to 0, null, or 33.3.
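
The eventual 1/6th value is the classic symptom of whisper downsampling with the wrong aggregation method: when six 10-second datapoints are rolled into one 60-second datapoint using the default average, a single interval holding 100 becomes 100/6 ≈ 16.6. Counts need to be summed, not averaged, which is configured in Graphite's storage-aggregation.conf; a sketch of such an entry:

```ini
# /opt/graphite/conf/storage-aggregation.conf
# Without an entry like this, whisper averages six 10s datapoints into each
# 60s datapoint, turning 100 counted increments into 100/6 ~= 16.6.
[stats_counts]
pattern = ^stats_counts\.
xFilesFactor = 0
aggregationMethod = sum
```

Note that this only affects newly created .wsp files; existing files keep the aggregation method they were created with until they are recreated or resized.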

Deleted/Empty Graphite Whisper Files Automatically Re-Generating

╄→尐↘猪︶ㄣ submitted on 2019-12-03 03:25:22
I am trying to delete some old graphite test whisper metrics without any success. I can delete the metrics by removing the files. (See: How to cleanup the graphite whisper's data?) But within a few seconds of blowing away the files, they regenerate (they are empty of metrics and stay that way, since nothing is creating new metrics in those files). I've tried stopping carbon (carbon-cache.py stop) before deleting the files, but when I restart carbon (carbon-cache.py --debug start &) they just come back. How do I permanently delete these files/metrics so they never come back? Are you running

Exporting Spring Boot Actuator Metrics (& Dropwizard Metrics) to Statsd

Anonymous (unverified) submitted on 2019-12-03 01:23:02
Question: I'm trying to export all of the metrics which are visible at the endpoint /metrics to a StatsdMetricWriter. I've got the following configuration class so far:

```java
package com.tonyghita.metricsdriven.service.config;

import com.codahale.metrics.MetricRegistry;
import com.ryantenney.metrics.spring.config.annotation.EnableMetrics;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.
```

Why use statsd when graphite's Carbon aggregator can do the same job?

可紊 submitted on 2019-12-03 01:02:45
Question: I have been exploring the Graphite graphing tool for showing metrics from multiple servers, and it seems that the 'recommended' way is to send all metrics data to StatsD first. StatsD aggregates the data and sends it to graphite (or rather, Carbon). In my case, I want to do simple aggregations like sum and average on metrics across servers and plot that in graphite. Graphite comes with a Carbon aggregator which can do this. StatsD does not even provide aggregation of the kind I am talking about.
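
What StatsD adds for counters is not cross-server aggregation but cheap in-memory buffering of many fire-and-forget UDP increments, emitting both a per-second rate and a raw count once per flush interval. A toy model of that behavior (an illustration only, not the actual etsy/statsd implementation):

```python
from collections import defaultdict

class MiniStatsd:
    """Toy model of statsd's counter handling (illustration only)."""

    def __init__(self, flush_interval_s=10):
        self.flush_interval_s = flush_interval_s
        self.counters = defaultdict(float)

    def incr(self, stat, amount=1):
        self.counters[stat] += amount  # one cheap in-memory add per UDP packet

    def flush(self):
        """Emit (metric, value) pairs, as statsd does every flush interval."""
        out = []
        for stat, count in self.counters.items():
            out.append(("stats.%s" % stat, count / self.flush_interval_s))  # per-second rate
            out.append(("stats_counts.%s" % stat, count))                   # raw count
        self.counters.clear()
        return out

s = MiniStatsd()
for _ in range(100):
    s.incr("api.requests")
flushed = dict(s.flush())  # {"stats.api.requests": 10.0, "stats_counts.api.requests": 100.0}
```

Carbon's aggregator, by contrast, operates on already-formed metric lines, which is why the two tools end up solving different problems even though both "aggregate."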

Having trouble getting accurate numbers from graphite

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-03 00:33:20
I have an application that publishes a number of stats to graphite via statsd. One of the stats simply sends a stat increment to statsd every time a message is received by the service. I need to display a graph that shows the relative traffic over time for this stat. Generally speaking, I should be able to display a graph that refreshes every, say, 10 seconds, and displays how many messages were received in those 10 seconds as well as the history for a given period of time. However, no matter how I format my API query I cannot seem to get accurate data. I've read a number of articles
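
One common source of the inaccuracy above is that statsd normalizes counters to per-second rates at flush time (the stats.<name> series), while the raw count lives under stats_counts.<name>. To recover messages-per-flush-window from the rate series, multiply each datapoint by the flush interval; a sketch, assuming the default 10-second flush:

```python
FLUSH_INTERVAL_S = 10  # statsd flushInterval in seconds (assumption: the default-style 10s)

def rate_to_counts(datapoints, interval=FLUSH_INTERVAL_S):
    """Convert statsd per-second rate datapoints back to per-flush counts.

    `datapoints` uses the Graphite JSON shape: [[value, timestamp], ...].
    None values (gaps) are passed through unchanged.
    """
    return [[None if v is None else v * interval, ts] for v, ts in datapoints]

# 10 msgs/sec sustained over one 10s flush window -> 100 messages received
counts = rate_to_counts([[10.0, 1372952520], [None, 1372952530]])
```

The same correction can be done server-side with Graphite's scale() function, e.g. a target of scale(stats.myapp.messages, 10) for a 10-second flush interval (metric name hypothetical).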

Getting accurate graphite stats_counts

我只是一个虾纸丫 submitted on 2019-12-02 20:42:17
We have an etsy/statsd node application running that flushes stats to carbon/whisper every 10 seconds. If you send 100 increments (counts) in the first 10 seconds, graphite displays them properly, like: localhost:3000/render?from=-20min&target=stats_counts.test.count&format=json [{"target": "stats_counts.test.count", "datapoints": [ [0.0, 1372951380], [0.0, 1372951440], ... [0.0, 1372952460], [100.0, 1372952520]]}] However, 10 seconds later, this number falls to 0, null, or 33.3. Eventually it settles at a value 1/6th of the initial number of increments, in this case 16.6. /opt/graphite

Why use statsd when graphite's Carbon aggregator can do the same job?

南楼画角 submitted on 2019-12-02 16:20:48
I have been exploring the Graphite graphing tool for showing metrics from multiple servers, and it seems that the 'recommended' way is to send all metrics data to StatsD first. StatsD aggregates the data and sends it to graphite (or rather, Carbon). In my case, I want to do simple aggregations like sum and average on metrics across servers and plot that in graphite. Graphite comes with a Carbon aggregator which can do this. StatsD does not even provide aggregation of the kind I am talking about. My question is - should I use statsd at all for my use case? Anything I am missing here? StatsD