amazon-cloudwatch

No logs appear in CloudWatch log group for Elastic Beanstalk environment

醉酒当歌 · submitted 2019-12-04 19:22:52
I have an Elastic Beanstalk environment running a Docker container with a Node.js API. In the AWS Console, if I select my environment and go to Configuration / Software, I see the following: Log groups: /aws/elasticbeanstalk/my-environment; Log streaming: Enabled; Retention: 3 days; Lifecycle: Keep after termination. However, if I click on that log group in the CloudWatch console, the Last Event Time is from some weeks ago (which I believe corresponds to when the environment was created) and the logs have no content. Since this is a Dockerized application, logs for the server …
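Not part of the question, but one way to confirm the symptom programmatically is to check the newest lastEventTimestamp across the group's streams, which is what the console's "Last Event Time" reflects. A minimal sketch of that staleness check (the stream dicts mirror what logs:DescribeLogStreams returns; the thresholds are arbitrary):

```python
from datetime import datetime, timedelta, timezone

def newest_event(streams):
    """Return the most recent lastEventTimestamp (ms since epoch) across streams, or None."""
    stamps = [s["lastEventTimestamp"] for s in streams if "lastEventTimestamp" in s]
    return max(stamps) if stamps else None

def is_stale(streams, max_age=timedelta(days=1), now=None):
    """True if no stream in the group has received an event within max_age."""
    now = now or datetime.now(timezone.utc)
    newest = newest_event(streams)
    if newest is None:
        return True
    age = now - datetime.fromtimestamp(newest / 1000, tz=timezone.utc)
    return age > max_age
```

In practice the stream dicts would come from boto3's logs.describe_log_streams(logGroupName=...). For a Docker platform it is also worth verifying that the container actually writes its logs to a file the instance-level log streaming picks up, rather than only to container stdout.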

How to debug failed Fargate task initialization

梦想与她 · submitted 2019-12-04 09:19:41
I have a Fargate task scheduled to run via a CloudWatch Events rule; on a successful run it writes a timestamp to a database, and it also writes a log file to CloudWatch every time it runs. However, on one occasion the log file was not created and the database was not updated. I suspect the task was never even started, or failed to start. In CloudWatch, the event rule shows a trigger and an invocation at the time I expected the task to run, so I assume the task at least attempted to start. My question is: is there any way I can debug or log information about the cluster failing …
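One approach (an assumption, not from the post): ECS keeps recently stopped tasks queryable for a short while, so shortly after the scheduled time you can list tasks with desired status STOPPED, describe them, and read the stoppedReason and per-container reason fields. A sketch of the extraction step, operating on dicts shaped like ecs:DescribeTasks output:

```python
def failure_reasons(tasks):
    """Collect human-readable stop reasons from ECS DescribeTasks-style dicts."""
    reasons = []
    for t in tasks:
        entry = {
            "taskArn": t.get("taskArn"),
            "stoppedReason": t.get("stoppedReason"),
            # Per-container reasons often carry the real cause,
            # e.g. image-pull or resource errors.
            "containerReasons": [
                c.get("reason") for c in t.get("containers", []) if c.get("reason")
            ],
        }
        reasons.append(entry)
    return reasons
```

The input would come from boto3's ecs.list_tasks(cluster=..., desiredStatus="STOPPED") followed by ecs.describe_tasks(...); if the task never reached ECS at all, the rule's FailedInvocations metric and the rule target's role permissions are the next things to check.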

Use CloudWatch to determine if a Linux service is running

試著忘記壹切 · submitted 2019-12-04 07:59:47
Suppose I have an EC2 instance with a service defined in /etc/init/my_service.conf, whose contents are: script exec my_exec end script. How can I monitor that EC2 instance so that if my_service stops running, I can act on it? BestPractices: You can publish a custom metric to CloudWatch in the form of a "heartbeat". Have a small script run via cron on your server, checking the process list to see whether my_service is running; if it is, make a put-metric-data call to CloudWatch. The metric can be as simple as pushing the number 1 to your custom metric. Set up a CloudWatch alarm that …
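A sketch of the heartbeat idea above, split so the process check is testable without AWS. The metric name and namespace are made up, and the aws CLI is assumed to be installed and configured on the instance:

```python
import subprocess

def is_running(name, ps_output=None):
    """Check the process list for an exact command-name match.

    ps_output may be injected for testing; otherwise `ps -eo comm=` is run.
    """
    if ps_output is None:
        ps_output = subprocess.run(["ps", "-eo", "comm="],
                                   capture_output=True, text=True).stdout
    return name in (line.strip() for line in ps_output.splitlines())

def heartbeat_args(metric_name, namespace="Custom/Heartbeat"):
    """Build the aws CLI put-metric-data invocation that pushes the value 1."""
    return ["aws", "cloudwatch", "put-metric-data",
            "--namespace", namespace,
            "--metric-name", metric_name,
            "--value", "1"]

# Cron would run something like:
#   if is_running("my_exec"): subprocess.run(heartbeat_args("my_exec-alive"), check=True)
```

The truncated answer presumably continues with the alarm side: alarm on the custom metric and treat missing data (no heartbeat) as the failure condition, so the alarm fires when the cron job stops reporting.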

Spark Streaming throughput monitoring

半城伤御伤魂 · submitted 2019-12-04 06:55:51
Is there a way to monitor the input and output throughput of a Spark cluster, to make sure the cluster is not flooded and overwhelmed by incoming data? In my case, I set up the Spark cluster on AWS EC2, so I am thinking of using AWS CloudWatch to monitor NetworkIn and NetworkOut for each node in the cluster. But that approach seems inaccurate: network traffic does not represent incoming data for Spark alone, since other data would be counted as well. Is there a tool or way to monitor the streaming-data status of a Spark cluster specifically? Or is there already a built-in tool in Spark that I …
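The question is left open, but Spark itself exposes per-batch statistics (processing time, scheduling delay, record counts) through the streaming UI and the StreamingListener API; a stream is keeping up when processing time stays below the batch interval and scheduling delay is not growing. A plain-Python sketch of that check over listener-style stats (the field names are illustrative, not Spark's exact API):

```python
def falling_behind(batches, batch_interval_ms):
    """Heuristic over per-batch stats like those a StreamingListener's
    onBatchCompleted callback reports: flag the stream as falling behind if
    any batch took longer than the batch interval, or if scheduling delay
    is trending upward across the window."""
    if not batches:
        return False
    over_budget = any(b["processingTimeMs"] > batch_interval_ms for b in batches)
    delays = [b["schedulingDelayMs"] for b in batches]
    delay_growing = len(delays) >= 2 and delays[-1] > delays[0]
    return over_budget or delay_growing
```

If CloudWatch is still the preferred dashboard, these same numbers could be forwarded as custom metrics from a listener, rather than relying on node-level NetworkIn/NetworkOut.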

Trigger an AWS Lambda function after an ECR event

自古美人都是妖i · submitted 2019-12-04 05:54:22
I am trying to get an AWS Lambda function to run whenever a new image is pushed to an AWS container registry. I have created and tested the function, which works fine. I have then created a simple CloudWatch event rule with the pattern { "source": [ "aws.ecr" ] }, which I believe will trigger on any event from ECR. The rule has the Lambda function as its target. The problem is that the function is not called when a new image is pushed to the registry (or deleted, etc.), and nothing appears in the CloudWatch logs for the function. Is there something missing from the event rule, or a way to diagnose what could …
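Two things worth checking (assumptions, not from the post): the Lambda needs a resource policy allowing events.amazonaws.com to invoke it, and a narrower pattern is easier to validate against a sample event. Below is a sketch of a push-specific pattern plus a naive matcher for local testing; the fields follow AWS's documented "ECR Image Action" events but should be verified against a real event from your account:

```python
# Push-specific pattern; "ECR Image Action" is the detail-type ECR emits
# for image pushes and deletes.
PATTERN = {
    "source": ["aws.ecr"],
    "detail-type": ["ECR Image Action"],
    "detail": {"action-type": ["PUSH"], "result": ["SUCCESS"]},
}

def matches(pattern, event):
    """Naive re-implementation of event-pattern matching: every key in the
    pattern must exist in the event, with the event's value among the
    pattern's allowed values; nested dicts recurse."""
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            if not matches(allowed, event[key]):
                return False
        elif event[key] not in allowed:
            return False
    return True
```

For diagnosis, the rule's Invocations vs. FailedInvocations CloudWatch metrics help distinguish "the rule never matched" from "the target invocation failed" (the latter usually being the missing Lambda permission).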

CloudWatch alarm to SNS in different region

扶醉桌前 · submitted 2019-12-04 05:32:41
I'm trying to notify an SNS topic from a CloudWatch alarm that's in a different region. The reason is that I want SMS alerting, which isn't available in the region where my services are. If I enter the ARN of the subscription and save the changes in the console, I get "There was an error saving the alarm. Please try again." Trying again does not help. Using a topic in the local region does work, but that's not what I need. Is there a way to notify a topic in a different region? If not, is there another easy way I can achieve my goal? I didn't find any docs that explicitly say this can't be done …
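A common workaround (an assumption, not confirmed by the post): have the alarm notify a topic in its own region, and subscribe a small relay Lambda that republishes each message to the remote, SMS-capable region. A sketch; REMOTE_TOPIC_ARN and all names are hypothetical:

```python
# Hypothetical relay Lambda, deployed in the alarm's region and subscribed
# to a same-region SNS topic.
REMOTE_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:sms-alerts"

def extract_alarm(event):
    """Pull subject and message out of the SNS envelope Lambda receives."""
    record = event["Records"][0]["Sns"]
    return record.get("Subject") or "CloudWatch alarm", record["Message"]

def handler(event, context):
    subject, message = extract_alarm(event)
    # boto3 is available inside Lambda; imported here so extract_alarm()
    # above stays testable without AWS.
    import boto3
    region = REMOTE_TOPIC_ARN.split(":")[3]  # region is the 4th ARN field
    sns = boto3.client("sns", region_name=region)
    sns.publish(TopicArn=REMOTE_TOPIC_ARN, Subject=subject, Message=message)
```

SNS publishes are cross-region-capable from code even though the alarm-to-topic link is not, which is what makes this relay work.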

How to choose a different Lambda function with Start Streaming to Amazon Elasticsearch Service

回眸只為那壹抹淺笑 · submitted 2019-12-04 05:01:55
Question: Following Streaming CloudWatch Logs Data to Amazon Elasticsearch Service, it works fine to stream CloudWatch logs to ELK with one log group and one Lambda function. But now I want to change the target Lambda function for my other log group, and I am not able to do that because there is no option for it in the AWS console. Any help will be appreciated. Thanks. Answer 1: I was streaming to ELK using the AWS console option Start Streaming to Amazon Elasticsearch Service, but I failed to change or …
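Although the console offers no option for this, the underlying mechanism is a log-group subscription filter, which can be replaced through the API (aws logs put-subscription-filter, or boto3's logs.put_subscription_filter). A sketch that only builds the call's parameters; the filter-naming convention here is made up:

```python
def subscription_filter_params(log_group, lambda_arn, pattern=""):
    """Arguments for logs.put_subscription_filter (boto3) or the equivalent
    aws logs put-subscription-filter CLI call. An empty filterPattern
    forwards every log event to the destination."""
    return {
        "logGroupName": log_group,
        "filterName": f"{log_group.strip('/').replace('/', '-')}-to-lambda",
        "filterPattern": pattern,
        "destinationArn": lambda_arn,
    }
```

Replacing an existing filter (or pointing a second log group at a different Lambda) is then one API call per log group; the destination Lambda also needs permission for logs.amazonaws.com to invoke it.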

AWS Lambda - CloudWatch Event type

你说的曾经没有我的故事 · submitted 2019-12-03 22:20:51
When writing an AWS Java Lambda function that is triggered by CloudWatch scheduled events, which event object gets passed to the Lambda handler function? For example, for a Lambda function triggered by an S3 event, AWS invokes the function and passes an S3Event object. Similarly, it would pass an SNSEvent object to a function triggered by an SNS message: public class LambdaHandler { public void eventHandler(S3Event event, Context context) { } } or public class LambdaHandler { public void eventHandler(SNSEvent event, Context context) { } }. For a function driven by a CloudWatch scheduled event, what would …
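For reference, a scheduled rule delivers a small generic JSON document rather than a service-specific event, so in Java the handler can accept a Map<String, Object> (or the ScheduledEvent type from the aws-lambda-java-events library). A sketch of the payload shape, following AWS's documented sample with placeholder account/ARN values:

```python
# Shape of the JSON a CloudWatch scheduled event delivers to the handler
# (values below are placeholders from AWS's documented sample).
SAMPLE_SCHEDULED_EVENT = {
    "version": "0",
    "id": "cdc73f9d-aea9-11e3-9d5a-835b769c0d9c",
    "detail-type": "Scheduled Event",
    "source": "aws.events",
    "account": "123456789012",
    "time": "1970-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": ["arn:aws:events:us-east-1:123456789012:rule/my-rule"],
    "detail": {},
}

def is_scheduled_event(event):
    """Distinguish a scheduler tick from other trigger payloads."""
    return (event.get("source") == "aws.events"
            and event.get("detail-type") == "Scheduled Event")
```

Since detail is empty for plain schedules, handlers usually need nothing from the event beyond confirming it is a scheduler tick; any job parameters are better supplied via the rule's configured input or environment variables.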

My AWS CloudWatch bill is huge. How do I work out which log stream is causing it?

蹲街弑〆低调 · submitted 2019-12-03 19:52:30
Question: I got a $1,200 invoice from Amazon for CloudWatch services last month (specifically for 2 TB of log data ingestion under "AmazonCloudWatch PutLogEvents"), when I was expecting a few tens of dollars. I've logged into the CloudWatch section of the AWS Console and can see that one of my log groups ingested about 2 TB of data, but there are thousands of different log streams in that log group. How can I tell which one used that amount of data? Answer 1: On the CloudWatch console, use the IncomingBytes …
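IncomingBytes is reported per log group, so it confirms which group ingested the data but not which stream. One approximation (an assumption, not from the truncated answer) is to rank the group's streams by the storedBytes field that logs:DescribeLogStreams returns per stream. A sketch of the ranking step over such dicts:

```python
def top_streams(streams, n=5):
    """Rank log streams by storedBytes, largest first; streams missing the
    field are treated as empty. Input dicts mirror DescribeLogStreams output."""
    ranked = sorted(streams, key=lambda s: s.get("storedBytes", 0), reverse=True)
    return [(s["logStreamName"], s.get("storedBytes", 0)) for s in ranked[:n]]
```

With thousands of streams this would be driven by a paginated boto3 logs.describe_log_streams loop; note storedBytes reflects retained data, so with short retention it only approximates what was ingested during the billing period.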

Shut down an EC2 instance if idle right before another billable hour

若如初见. · submitted 2019-12-03 16:28:08
At unpredictable times (on user request) I need to run a memory-intensive job. For this I get a spot or on-demand instance and mark it with the tag non_idle. When the job is done (which may take hours), I give it the tag idle. Because of AWS's hourly billing model, I want to keep that instance alive until just before another billable hour is incurred, in case another job comes in. If a job comes in, the instance should be reused and marked non_idle again. If no job comes in during that time, the instance should terminate. Does AWS offer a ready-made solution for this? As far as I know, CloudWatch can't set …
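A common self-managed pattern for this (an assumption, not an AWS feature named in the post) is a cron job on the instance itself that, while the instance is tagged idle, terminates it only in the last few minutes of the current billable hour, measured from launch time. A sketch of the boundary check; the five-minute window is arbitrary:

```python
from datetime import datetime, timezone

def should_terminate(launch_time, now=None, window_minutes=5):
    """True when we are inside the last `window_minutes` of the current
    billable hour (hours counted from instance launch), i.e. the cheapest
    moment to shut down an idle instance under hourly billing."""
    now = now or datetime.now(timezone.utc)
    elapsed = (now - launch_time).total_seconds()
    minute_in_hour = (elapsed % 3600) / 60
    return minute_in_hour >= 60 - window_minutes
```

The cron job would read the instance's own idle tag and launch time from instance metadata / describe-instances and call terminate only when both conditions hold; note that AWS later moved many instance types to per-second billing, which removes the incentive for this trick.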