amazon-cloudwatch

How to keep a desired number of AWS Lambda function containers warm

有些话、适合烂在心里 submitted on 2019-12-02 23:45:12
On my project there is a REST API implemented with AWS API Gateway and AWS Lambda. Since AWS Lambda functions are serverless and stateless, when we make a call AWS starts a container with the Lambda function's code to process it. According to the AWS documentation, after the function finishes executing AWS does not stop the container, and subsequent calls can be processed in the same container. This approach improves the performance of the service: only the first call spends time starting the container (the cold start of the Lambda function), and all subsequent calls execute faster because the container is already warm.
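
The usual workaround is to invoke the function on a schedule so that a container stays warm. Below is a minimal boto3 sketch, not from the original post: the function name keep-me-warm, the region, the rule name, the 5-minute rate, and the warmup payload are all illustrative assumptions.

    # Sketch: schedule a "warm-up" ping every 5 minutes with CloudWatch Events.
    # Function name, region, rule name, and payload are hypothetical.
    import json
    import boto3

    region = "us-east-1"
    function_name = "keep-me-warm"

    events = boto3.client("events", region_name=region)
    lambda_client = boto3.client("lambda", region_name=region)

    # 1. A scheduled rule that fires every 5 minutes.
    rule = events.put_rule(
        Name="keep-lambda-warm",
        ScheduleExpression="rate(5 minutes)",
        State="ENABLED",
    )

    # 2. Allow CloudWatch Events to invoke the function.
    fn_arn = lambda_client.get_function(FunctionName=function_name)["Configuration"]["FunctionArn"]
    lambda_client.add_permission(
        FunctionName=function_name,
        StatementId="allow-cloudwatch-warmup",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule["RuleArn"],
    )

    # 3. Point the rule at the function; the handler can recognize the
    #    {"warmup": true} payload and return immediately without doing real work.
    events.put_targets(
        Rule="keep-lambda-warm",
        Targets=[{"Id": "warmup", "Arn": fn_arn, "Input": json.dumps({"warmup": True})}],
    )

One scheduled ping keeps only one container warm; keeping N containers warm requires N concurrent invocations, and Lambda's provisioned concurrency feature now covers this use case natively.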

How to view AWS logs in real time (like tail -f)

心已入冬 submitted on 2019-12-02 20:01:30
I can view the log using the following command: aws logs get-log-events --log-group-name groupName --log-stream-name streamName --limit 100. What is the command to get tail -f-like behavior so that I can see the logs in real time?

Have a look at awslogs. If you happen to be working with Lambda/API Gateway specifically, have a look at apilogs.

I was really disappointed with awslogs and cwtail, so I made my own tool called Saw, which efficiently streams CloudWatch logs to the console (and colorizes the JSON output). You can install it on macOS with:
brew tap TylerBrock/saw
brew install saw
It has a…
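
As a scripted alternative, CloudWatch Logs can be polled with filter_log_events to approximate tail -f. A minimal sketch, assuming boto3 and a hypothetical log group name; pagination via nextToken is omitted for brevity.

    # Sketch: a minimal "tail -f" for a CloudWatch Logs group using boto3.
    # The log group name and the 5-second poll interval are illustrative assumptions.
    import time
    import boto3

    logs = boto3.client("logs")
    group = "/aws/lambda/my-function"   # hypothetical log group

    start = int(time.time() * 1000)     # only show events from "now" onward (ms epoch)
    seen = set()                        # event IDs already printed

    while True:
        resp = logs.filter_log_events(logGroupName=group, startTime=start)
        for event in resp.get("events", []):
            if event["eventId"] not in seen:
                seen.add(event["eventId"])
                print(event["message"], end="")
            start = max(start, event["timestamp"])
        time.sleep(5)

Recent versions of the AWS CLI (v2) also provide aws logs tail <group-name> --follow, which covers the same need without extra tooling.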

Can't get AWS Lambda function to log (text output) to CloudWatch

随声附和 submitted on 2019-12-02 19:55:39
I'm trying to set up a Lambda function that will process a file when it's uploaded to an S3 bucket. I need a way to see the output of console.log when I upload a file, but I can't figure out how to link my Lambda function to CloudWatch. I figured out by looking at the context object that my log group is /aws/lambda/wavToMp3 and the log stream is 2016/05/23/[$LATEST]hex_code_redacted. So I created that group and stream in CloudWatch, yet nothing is being logged to it.

For the Lambda function to create a log stream and publish logs to CloudWatch, the Lambda execution role needs to have the necessary CloudWatch Logs permissions.
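
In practice that means logs:CreateLogGroup, logs:CreateLogStream and logs:PutLogEvents. A minimal sketch, assuming boto3 and a hypothetical role name, attaches the AWS-managed policy that grants exactly those actions:

    # Sketch: grant the Lambda execution role permission to write to CloudWatch Logs
    # by attaching the AWS-managed basic execution policy.
    import boto3

    iam = boto3.client("iam")
    iam.attach_role_policy(
        RoleName="wavToMp3-execution-role",  # hypothetical execution role name
        PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
    )

Once the role has these permissions, Lambda creates the log group and log streams on its own; there is no need to create them by hand in CloudWatch.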

Subsequent CloudWatch Alarm notifications to SNS

一笑奈何 submitted on 2019-12-02 03:45:45
When I use a CloudWatch alarm to trigger an Auto Scaling action, it repeatedly triggers the ASG action. In other words, a subsequent set of N alarm periods in the ALARM state will trigger N actions on the ASG. This behavior is not observed for an SNS action; instead it triggers only on the first event, when the alarm changes from OK to ALARM. Is it possible to achieve the same action behavior on SNS as on the ASG?

An Amazon CloudWatch alarm will only trigger an Amazon SNS notification when the alarm enters the ALARM state. That is, it triggers only once, and only when moving from a state that isn't ALARM (OK or INSUFFICIENT_DATA) into ALARM.
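
For reference, a minimal boto3 sketch of wiring an alarm's ALARM action to an SNS topic; the metric, dimensions, threshold, and ARNs are illustrative assumptions.

    # Sketch: create an alarm whose ALARM action notifies an SNS topic.
    # The SNS notification fires once, on the transition into ALARM.
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],  # hypothetical ASG
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],    # hypothetical topic
    )

If repeated notifications are needed while the alarm stays in ALARM, one workaround is a scheduled job that calls describe_alarms(StateValue='ALARM') and re-publishes to the topic itself.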

How do I create an Alarm to detect when DynamoDB limits have reached a certain percentage and then increase them

送分小仙女□ submitted on 2019-12-01 18:22:31
I'm writing a web application that has steadily increasing traffic through the day. I'd like to create an alarm that can detect when my read/write limits have reached a certain percentage (like 80%) and then increase that limit. I will then decrease it again at midnight. I've tried creating an alarm: "Average" seems a bit useless and is always 1.0. "Sum" is more useful, so I assume I should use that. I also assume I should use Consumed Write/Read Capacity as the metric name. Problems: Sum seems to use an absolute "Count" value for its threshold. If my DynamoDB table is set to 100 writes, and I…
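
One common pattern, stated here as an assumption rather than something from the excerpt above, is to compare the Sum of ConsumedWriteCapacityUnits over the alarm period against provisioned capacity × period × target percentage, since consumed capacity is reported per request while the provisioned limit is per second. A boto3 sketch with a hypothetical table name, topic ARN, and an 80% target:

    # Sketch: alarm when consumed write capacity exceeds 80% of provisioned
    # throughput over a 5-minute window. Names, ARN, and percentage are hypothetical.
    import boto3

    table_name = "my-table"
    provisioned_wcu = 100                         # writes per second currently provisioned
    period = 300                                  # seconds
    threshold = provisioned_wcu * period * 0.8    # Sum of consumed units in the window

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName=f"{table_name}-write-capacity-80pct",
        Namespace="AWS/DynamoDB",
        MetricName="ConsumedWriteCapacityUnits",
        Dimensions=[{"Name": "TableName", "Value": table_name}],
        Statistic="Sum",
        Period=period,
        EvaluationPeriods=1,
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:scale-up-topic"],  # hypothetical
    )

The SNS topic could then trigger a Lambda function that raises the provisioned throughput via dynamodb.update_table, and a scheduled rule could lower it again at midnight.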

Email notification through SNS and Lambda

风格不统一 submitted on 2019-12-01 14:26:39
I am facing an issue. My goal is to send an email whenever a state change happens to EC2 instances. I tried CloudWatch Events directly with SNS and it works, but the email I receive does not contain enough information to be understood: I was expecting the server name and its IP in the email, and SNS does not give me the option to modify the template. So what I am thinking is to involve Lambda, so that CloudWatch Events monitors EC2 instance state changes and passes the event to Lambda, which builds a customized email and then invokes SNS to send it.
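
A minimal sketch of such a handler, assuming Python, boto3, the EC2 "instance state-change" event delivered by CloudWatch Events, and a hypothetical SNS topic ARN supplied via an environment variable:

    # Sketch: Lambda handler that receives the EC2 state-change event, looks up
    # the instance's Name tag and IP, and publishes a readable message to SNS.
    import os
    import boto3

    ec2 = boto3.client("ec2")
    sns = boto3.client("sns")
    TOPIC_ARN = os.environ.get("TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:ec2-state")  # hypothetical

    def handler(event, context):
        instance_id = event["detail"]["instance-id"]
        state = event["detail"]["state"]

        reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
        instance = reservations[0]["Instances"][0]
        name = next((t["Value"] for t in instance.get("Tags", []) if t["Key"] == "Name"), instance_id)
        ip = instance.get("PrivateIpAddress", "n/a")

        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject=f"EC2 {name} is now {state}",
            Message=f"Instance: {name} ({instance_id})\nIP: {ip}\nState: {state}",
        )

Subscribing the recipients to the topic by email then delivers the customized message.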

Can AWS CloudWatch alarms be paused/disabled during specific hours?

一世执手 submitted on 2019-11-30 13:14:26
Question: I want to automatically toggle alarms on/off during specific periods of time so that they do not fire during maintenance windows. I doubt that an easy or direct method exists, since I could not find such a thing in the documentation. Does anyone know of a different approach to achieve this while still using CloudWatch alarms, or did I miss an obvious solution?

Answer 1: It's not automatic, but it can be done: http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_EnableAlarmActions
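
The same APIs are exposed in boto3 as enable_alarm_actions and disable_alarm_actions. A minimal sketch, with hypothetical alarm names, that could be run from two scheduled Lambda functions or a cron job at the start and end of the maintenance window:

    # Sketch: suppress alarm actions during a maintenance window, then restore them.
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    maintenance_alarms = ["web-high-cpu", "web-5xx-errors"]  # hypothetical alarm names

    def start_maintenance():
        # Alarms keep evaluating, but their actions (SNS, Auto Scaling, ...) won't fire.
        cloudwatch.disable_alarm_actions(AlarmNames=maintenance_alarms)

    def end_maintenance():
        cloudwatch.enable_alarm_actions(AlarmNames=maintenance_alarms)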

My AWS CloudWatch bill is huge. How do I work out which log stream is causing it?

蹲街弑〆低调 submitted on 2019-11-30 12:00:27
I got a $1,200 invoice from Amazon for CloudWatch services last month (specifically for 2 TB of log data ingestion under "AmazonCloudWatch PutLogEvents"), when I was expecting a few tens of dollars. I've logged into the CloudWatch section of the AWS Console and can see that one of my log groups used about 2 TB of data, but there are thousands of different log streams in that log group; how can I tell which one used that amount of data?

On the CloudWatch console, use the IncomingBytes metric to find the amount of data ingested by each log group for a particular time period, in uncompressed bytes.
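
The same check can be scripted. A minimal boto3 sketch that ranks log groups by IncomingBytes over the last 30 days; the 30-day window and the top-10 cut-off are illustrative choices.

    # Sketch: rank log groups by data ingested (AWS/Logs IncomingBytes) over 30 days.
    from datetime import datetime, timedelta
    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.utcnow()
    start = end - timedelta(days=30)

    totals = {}
    paginator = logs.get_paginator("describe_log_groups")
    for page in paginator.paginate():
        for group in page["logGroups"]:
            name = group["logGroupName"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/Logs",
                MetricName="IncomingBytes",
                Dimensions=[{"Name": "LogGroupName", "Value": name}],
                StartTime=start,
                EndTime=end,
                Period=86400,          # one datapoint per day
                Statistics=["Sum"],
            )
            totals[name] = sum(p["Sum"] for p in stats["Datapoints"])

    for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]:
        print(f"{total / 1e9:10.2f} GB  {name}")

IncomingBytes is reported per log group (its only dimension is LogGroupName), so per-stream ingestion cannot be broken out this way; the per-group ranking is usually enough to find the culprit.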