amazon-cloudwatch

AWS Lambda - CloudWatch Event type

Submitted by 帅比萌擦擦* on 2019-12-05 09:18:58
Question: When writing an AWS Java Lambda function that's triggered by CloudWatch scheduled events, which event object gets passed to the Lambda handler function? For example, for a Lambda function triggered by an S3 event, AWS invokes the function and passes an S3Event object. Similarly, it would pass an SNSEvent object to a function triggered by an SNS message.

    public class LambdaHandler {
        public void eventHandler(S3Event event, Context context) {
        }

OR

    public class LambdaHandler {
        public void

Springboot with Spring-cloud-aws and cloudwatch metrics

Submitted by 孤街浪徒 on 2019-12-05 08:03:17
I would like to start using metrics in my Spring Boot app, and I would also like to publish them to Amazon CloudWatch. I know that with Spring Boot we can activate spring-actuator, which provides in-memory metrics and publishes them to the /metrics endpoint. I stumbled across Spring Cloud, which seems to have a library to periodically publish these metrics to CloudWatch, however I have no clue how to set it up. There are absolutely zero examples of how to use it. Could anyone explain what the steps are to enable the metrics to be sent to CloudWatch?

You can check my article here: https://dkublik.github.io

Can I use AWS LightSail with AWS CloudWatch?

Submitted by 末鹿安然 on 2019-12-05 06:46:55
I've recently started testing out LightSail, but I would like to keep my logging centralized in CloudWatch, and I cannot seem to find anything that would enable this. Interestingly, LightSail instances do not appear in the EC2 Dashboard. I thought they were just EC2 instances beneath the surface.

"I thought they were just EC2 instances beneath the surface." Yes... but. Conceptually speaking, you are the customer of Lightsail, and Lightsail is the customer of EC2. It's as though there were an intermediary between you and AWS. The Lightsail resources are in EC2, but they're not in your EC2. They
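
The excerpt cuts off there, but since CloudWatch accepts data pushed from anywhere with valid credentials, one common workaround is to publish metrics (or run the CloudWatch Logs agent) from inside the Lightsail instance yourself using an IAM user's keys. A minimal sketch with boto3, assuming credentials are already configured on the instance; the namespace, metric and instance names below are illustrative only, not anything Lightsail provides:

    # Sketch: push a custom metric from inside a Lightsail instance.
    # Assumes IAM user credentials are configured on the box (e.g. via
    # `aws configure`); namespace, metric and dimension values are made up.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

    cloudwatch.put_metric_data(
        Namespace="Lightsail/Custom",          # hypothetical namespace
        MetricData=[{
            "MetricName": "DiskUsedPercent",   # hypothetical metric
            "Dimensions": [{"Name": "InstanceName", "Value": "my-lightsail-box"}],
            "Unit": "Percent",
            "Value": 42.0,
        }],
    )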

AWS Cloudwatch logs with Docker Container - NoCredentialProviders: no valid providers in chain

Submitted by 痴心易碎 on 2019-12-05 03:58:51
My docker-compose file:

    version: '2'
    services:
      scraper:
        build: ./Scraper/
        logging:
          driver: "awslogs"
          options:
            awslogs-region: "eu-west-1"
            awslogs-group: "doctors-logs"
            awslogs-stream: "scrapers-stream"
        volumes:
          - ./Scraper/spiders:/spiders

I have added my AWS credentials to my Mac using the aws configure command, and the credentials are stored correctly in ~/.aws/credentials. When I run docker-compose up I get the following error:

    ERROR: for scraper Cannot start service scraper: Failed to initialize logging driver: NoCredentialProviders: no valid providers in chain. Deprecated. For verbose

How to pass and retrieve constant json data to lambda function

Submitted by 左心房为你撑大大i on 2019-12-05 01:33:18
I have a lambda function defined something like:

    def lambda_handler(event, context):
        # get constant json argument passed from cloudwatch event rule
        ...

What is the way to get the values defined under Target / Configure input / Constant (JSON text)?

Dominic Nguyen: As I read in the AWS documents, the JSON is passed to Python as a dict, and then I simply read the values like this. Passed JSON:

    {"type": "daily", "retention": 7}

Then in your handler:

    def lambda_handler(event, context):
        type = event["type"]
        retentionDay = event["retention"]
        ...

Using this I was able to make an automated snapshot for all EBS volumes. Hope
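
For completeness, the constant JSON is attached on the event rule's target (Configure input / Constant (JSON text) in the console). If the rule is wired up programmatically, a hedged boto3 sketch of the same thing; the rule name, target id and Lambda ARN below are placeholders:

    # Sketch: attach constant JSON input to a CloudWatch Events rule target.
    # Rule name, target id and Lambda ARN are placeholders.
    import json
    import boto3

    events = boto3.client("events")

    events.put_targets(
        Rule="daily-snapshot-rule",
        Targets=[{
            "Id": "snapshot-lambda",
            "Arn": "arn:aws:lambda:eu-west-1:123456789012:function:snapshot",
            # This JSON string arrives in the handler as the `event` dict.
            "Input": json.dumps({"type": "daily", "retention": 7}),
        }],
    )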

How to parse mixed text and JSON log entries in AWS CloudWatch for Log Metric Filter

Submitted by 孤者浪人 on 2019-12-05 01:27:38
I am trying to parse log entries which are a mix of text and JSON. The first line is a text representation and the next lines are the JSON payload of the event. One possible example is:

    2016-07-24T21:08:07.888Z [INFO] Command completed lessonrecords-create
    {
      "key": "lessonrecords-create",
      "correlationId": "c1c07081-3f67-4ab3-a5e2-1b3a16c87961",
      "result": {
        "id": "9457ce88-4e6f-4084-bbea-14fff78ce5b6",
        "status": "NA",
        "private": false,
        "note": "Test note",
        "time": "2016-02-01T01:24:00.000Z",
        "updatedAt": "2016-07-24T21:08:07.879Z",
        "createdAt": "2016-07-24T21:08:07.879Z",
        "authorId": null,
        "lessonId":
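
The excerpt is truncated, but one approach that can work for mixed text/JSON events is to key the metric filter on the plain-text first line with a quoted term pattern, since JSON-style selectors generally only match events that are entirely valid JSON. A hedged boto3 sketch; the log group, filter name, namespace and metric name are assumptions:

    # Sketch: metric filter keyed on the text portion of a mixed text/JSON event.
    # Log group, filter name, namespace and metric name are assumptions.
    import boto3

    logs = boto3.client("logs")

    logs.put_metric_filter(
        logGroupName="/my-app/production",
        filterName="lessonrecords-create-count",
        # Quoted term pattern: matches any event containing this phrase.
        filterPattern='"Command completed lessonrecords-create"',
        metricTransformations=[{
            "metricName": "LessonRecordsCreated",
            "metricNamespace": "MyApp",
            "metricValue": "1",
        }],
    )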

Amazon EC2 AutoScaling CPUUtilization Alarm- INSUFFICIENT DATA

Submitted by 本小妞迷上赌 on 2019-12-05 01:06:47
So I've been using Boto in Python to try and configure autoscaling based on CPUUtilization, more or less exactly as specified in this example: http://boto.readthedocs.org/en/latest/autoscale_tut.html

However both alarms in CloudWatch just report:

    State Details: State changed to 'INSUFFICIENT_DATA' at 2012/11/12 16:30 UTC. Reason: Unchecked: Initial alarm creation

Auto scaling is working fine but the alarms aren't picking up any CPUUtilization data at all. Any ideas for things I can try?

Edit: The instance itself reports CPU utilisation data, just not when I try and create an alarm in
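
The excerpt cuts off before the resolution, but two usual culprits for INSUFFICIENT_DATA here are a dimension that doesn't match how the metric is published, and a 60-second alarm period when the instances only have basic monitoring (which publishes at 5-minute resolution). A hedged boto3 sketch of an alarm defined against the Auto Scaling group's aggregate CPU; the group name, threshold and action ARN are placeholders, not taken from the question:

    # Sketch: scale-up alarm on an Auto Scaling group's aggregate CPU.
    # Group name, threshold and policy ARN are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="asg-cpu-high",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        # The dimension must match how the metric is actually published,
        # otherwise the alarm watches an empty series and stays INSUFFICIENT_DATA.
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
        Statistic="Average",
        Period=300,              # basic monitoring is 5-minute resolution
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:autoscaling:eu-west-1:123456789012:scalingPolicy:example"],  # placeholder
    )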

Amazon Cloudwatch alarm not triggered

Submitted by 浪尽此生 on 2019-12-05 00:48:13
I have a CloudWatch alarm configured:

    Threshold: "GreaterThan 0" for 1 consecutive period
    Period: 1 minute
    Statistic: Sum

The alarm is configured on top of the AWS SQS NumberOfMessagesSent metric. The queue was empty and no messages were being published to it. I sent a message manually. I could see the spike in the metric, but the state of the alarm was still OK. I am a bit confused why this alarm is not changing its state even though all the conditions to trigger it are met.

I just overcame this problem with the help of AWS support. You need to set the period on your alarm to ~15 minutes. It's got to do with
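
Following the advice quoted above (widen the period to roughly 15 minutes so the sparse SQS data points land inside each evaluation window), a boto3 sketch of the adjusted alarm; the queue name and SNS topic ARN are placeholders:

    # Sketch of the alarm adjusted per the answer above (~15-minute period).
    # Queue name and SNS topic ARN are placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="sqs-messages-sent",
        Namespace="AWS/SQS",
        MetricName="NumberOfMessagesSent",
        Dimensions=[{"Name": "QueueName", "Value": "my-queue"}],
        Statistic="Sum",
        Period=900,               # ~15 minutes, as suggested by AWS support
        EvaluationPeriods=1,
        Threshold=0.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:eu-west-1:123456789012:alerts"],
    )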

AWS Cloudwatch Heartbeat Alarm

Submitted by 丶灬走出姿态 on 2019-12-05 00:00:01
I have an app that pushes a custom CloudWatch metric to AWS every minute. This is supposed to act as a heartbeat so I know the app is alive. Now I want to put an alarm on this metric to notify me if the heartbeat stops. I have tried to accomplish this using different CloudWatch alarm statistics, including "average" and "data samples", and setting an alarm threshold of less than 1 over a given period. However, in all cases, if my app dies and stops reporting the heartbeat, the alarm will only go into an "Insufficient Data" state and never into an "Alarm" state. I understand I can put a notification on
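
The excerpt cuts off, but the usual fix for a heartbeat-style alarm nowadays is to tell CloudWatch to treat missing data as breaching, so the absence of data points drives the alarm into ALARM instead of INSUFFICIENT_DATA. A minimal boto3 sketch; the namespace, metric name and SNS topic ARN are assumptions:

    # Sketch: heartbeat alarm that fires when the metric stops arriving.
    # Namespace, metric name and SNS topic ARN are assumptions.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="app-heartbeat-missing",
        Namespace="MyApp",
        MetricName="Heartbeat",
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=5,               # five minutes with no heartbeat
        Threshold=1.0,
        ComparisonOperator="LessThanThreshold",
        # The key setting: missing data counts as breaching, so the alarm
        # goes to ALARM instead of INSUFFICIENT_DATA when the app dies.
        TreatMissingData="breaching",
        AlarmActions=["arn:aws:sns:eu-west-1:123456789012:alerts"],
    )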

Strange CloudWatch alarm behaviour

Submitted by 喜夏-厌秋 on 2019-12-04 22:07:04
Question: I have a backup script that runs every 2 hours. I want to use CloudWatch to track the successful executions of this script and CloudWatch's Alarms to get notified whenever the script runs into problems. The script puts a data point on a CloudWatch metric after every successful backup:

    mon-put-data --namespace Backup --metric-name $metric --unit Count --value 1

I have an alarm that goes to ALARM state whenever the statistic "Sum" on the metric is less than 2 in a 6-hour period. In order to
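
The mon-put-data call above uses the legacy CloudWatch command line tools; for reference, a boto3 sketch that publishes the same success data point from the backup script itself. The namespace, unit and value follow the excerpt; the function name and metric argument are illustrative:

    # Sketch: boto3 equivalent of the mon-put-data call above.
    # Publishes one success data point after each backup run.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def report_backup_success(metric_name):
        # metric_name corresponds to $metric in the original script.
        cloudwatch.put_metric_data(
            Namespace="Backup",
            MetricData=[{
                "MetricName": metric_name,
                "Unit": "Count",
                "Value": 1,
            }],
        )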