stackdriver

How can we visualize the Dataproc job status in Google Cloud Platform?

随声附和 submitted on 2021-02-20 04:17:05
Question: How can we visualize (via dashboards) the Dataproc job status in Google Cloud Platform? We want to check whether jobs are running or not, in addition to their status, such as running, delayed, or blocked. On top of that we want to set up alerting (Stackdriver Alerting) as well.

Answer 1: This page lists all the metrics available in Stackdriver: https://cloud.google.com/monitoring/api/metrics_gcp#gcp-dataproc You could use cluster/job/submitted_count, cluster/job/failed_count and cluster/job/running_count to …
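To make the answer concrete, here is a minimal sketch of reading one of those metrics through the Cloud Monitoring API. The google-cloud-monitoring Python client is assumed, the project ID is a placeholder, and the metric type comes from the page linked above; treat this as a starting point rather than a definitive implementation.

    # Sketch: read the Dataproc running-job count for the last hour.
    # Assumes the google-cloud-monitoring client library; "my-project-id"
    # is a placeholder.
    import time
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        end_time={"seconds": now},
        start_time={"seconds": now - 3600},
    )
    results = client.list_time_series(
        request={
            "name": "projects/my-project-id",
            "filter": 'metric.type = "dataproc.googleapis.com/cluster/job/running_count"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for series in results:
        for point in series.points:
            print(series.resource.labels, point.value.int64_value)

The same metric filter can back a dashboard chart or an alerting policy condition in Cloud Monitoring.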

Adding custom JMX metrics to Google Cloud Monitoring collectd configuration

∥☆過路亽.° submitted on 2021-02-11 15:19:20
Question: I've added the JVM Monitoring plugin as described here. That's all working great, but now I'd like to add more JMX metrics, e.g. MemoryPool-specific counters. So I've added this config to /opt/stackdriver/collectd/etc/collectd.d/jvm-sun-hotspot.conf:

    <MBean "jvm_localhost_MemoryPool">
      ObjectName "java.lang:type=MemoryPool,name=*"
      InstanceFrom "name"
      <Value>
        Type "gauge"
        InstancePrefix "memorypool-usage_used"
        Table false
        Attribute "Usage.used"
      </Value>
    </MBean>

and Collect "jvm …
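For context, the truncated Collect line above references the MBean by the name defined in the block, and in the GenericJMX-style config it lives inside the plugin's Connection block. A sketch of that wiring, with a placeholder JMX endpoint:

    <Connection>
      # Placeholder JMX endpoint; match it to how your JVM exposes JMX.
      ServiceURL "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi"
      Collect "jvm_localhost_MemoryPool"
    </Connection>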

Can I get incidents of a Stackdriver policy using the API?

可紊 submitted on 2021-02-10 07:55:31
Question: I was looking at the Stackdriver dashboard and found the following HTTP request: https://app.google.stackdriver.com/api/alerting/violation?project={project-id}&page=0&pageSize=8&policyId={policy-id} But I didn't find any docs about it.

Answer 1: The alerting methods for Stackdriver Monitoring appear in the Google Cloud Platform documentation. There does not seem to be an endpoint to list triggered alerts at the moment. The best option for now would be to add a webhook as a notification …
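Since the answer points at a webhook notification channel, here is a minimal sketch of a receiver. Flask is an assumption, as are the exact payload fields; verify the incident format against what your notification channel actually sends.

    # Sketch of a webhook receiver for Stackdriver alerting notifications.
    # Flask is an assumption, as are the payload fields (policy_name, state,
    # summary); check them against a real delivery before relying on them.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/stackdriver-hook", methods=["POST"])
    def incident():
        payload = request.get_json(force=True)
        inc = payload.get("incident", {})
        # Persist or forward the incident here; printing keeps the sketch small.
        print(inc.get("policy_name"), inc.get("state"), inc.get("summary"))
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)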

Why is my export sink from Stackdriver only loading the latest audit logs into BigQuery and not historical ones?

我与影子孤独终老i submitted on 2021-02-10 05:51:31
Question: I created an export sink in Stackdriver to load audit logs into BigQuery. I want to be able to see audit logs from the past 3 months. However, when I queried the tables in BigQuery, I only saw logs from today and nothing earlier. I applied the following filters to my export sink. I also tried removing the timestamp filter, but I still only see logs from today and none prior.

    resource.type="bigquery_dataset"
    timestamp > "2019-05-01T23:59:09.739Z"

Answer 1: Exports only work for new entries. Per the …
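Because a sink only forwards entries written after it was created, backfilling the previous months means reading them through the Logging API and loading them into BigQuery yourself. A rough sketch, assuming the google-cloud-logging client library and that the entries are still within the log retention window:

    # Sketch: read historical audit log entries through the Logging API,
    # since a sink only exports entries written after its creation.
    # Assumes the google-cloud-logging client library; entries older than
    # the retention window cannot be recovered this way.
    from google.cloud import logging

    client = logging.Client(project="my-project-id")  # placeholder project
    log_filter = (
        'resource.type="bigquery_dataset" '
        'AND timestamp >= "2019-05-01T00:00:00Z"'
    )
    for entry in client.list_entries(filter_=log_filter):
        # Load each entry into BigQuery (e.g. with the BigQuery client).
        print(entry.timestamp, entry.log_name)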

How do I manually set the severity of a Google App Engine request log?

情到浓时终转凉″ submitted on 2021-02-07 10:19:54
Question: I have an app in the Google App Engine Python 3 Standard Environment. I have it set up to group log entries by their request, as described in Writing Application Logs (in the "Viewing related request log entries" section). That page notes: "The highest severity from the 'child' log entries does not automatically apply to the top-level entry. If that behavior is desired, manually set the highest severity in the top-level entry." The top-level entry in question is the request log that App Engine …
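For context, the grouping described above works by writing child entries that carry the request's trace ID along with a severity. A minimal sketch, assuming the google-cloud-logging library; the log name and the X-Cloud-Trace-Context parsing are illustrative:

    # Sketch: write a child entry tied to the request's trace so it is
    # grouped under the App Engine request log. The log name and header
    # parsing are illustrative; google-cloud-logging is assumed.
    from google.cloud import logging as cloud_logging

    client = cloud_logging.Client()
    logger = client.logger("app")  # hypothetical log name

    def log_for_request(headers, project_id, message, severity="ERROR"):
        trace_id = headers.get("X-Cloud-Trace-Context", "").split("/")[0]
        logger.log_text(
            message,
            severity=severity,
            trace=f"projects/{project_id}/traces/{trace_id}",
        )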

How to log Stackdriver log messages correlated by trace ID using stdout in Go 1.11

邮差的信 submitted on 2021-02-06 11:57:10
Question: I'm using the Google App Engine Standard Environment with the Go 1.11 runtime. The documentation for Go 1.11 says "Write your application logs using stdout for output and stderr for errors". The migration guide from Go 1.9 also suggests not calling the Google Cloud Logging library directly but instead logging via stdout: https://cloud.google.com/appengine/docs/standard/go111/writing-application-logs With this in mind, I've written a small HTTP service (code below) to experiment with logging to …
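For context, trace correlation over stdout comes down to emitting one JSON object per line using the special fields Stackdriver recognizes ("severity", "message", and "logging.googleapis.com/trace"). A sketch of such a line, where the project and trace ID are placeholders:

    {"message": "handling request", "severity": "INFO", "logging.googleapis.com/trace": "projects/my-project/traces/0123456789abcdef0123456789abcdef"}

The trace ID is the first segment of the request's X-Cloud-Trace-Context header, so printing a line like this from the handler lets Stackdriver nest it under the request log.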