What is AWS CloudWatch Logs using for storage?

Submitted by 那年仲夏 on 2019-12-10 11:37:55

Question


I started working with Amazon CloudWatch Logs. The question is: is AWS using Glacier or S3 to store the logs? They use Kinesis to process the logs with filters. Can anyone tell me?


Answer 1:


They are probably using DynamoDB. S3 (and Glacier) would not be a good fit for files that are appended to very frequently.




Answer 2:


AWS is likely to use S3, not Glacier.

Glacier would cause problems if you wanted to access older logs: retrieving data stored in Amazon Glacier can take a few hours, which is definitely not the reaction time one expects from a CloudWatch log-analysis solution.

Also, the price set for storing 1 GB of ingested logs seems to be derived from the cost of 1 GB stored on S3: the S3 price for one GB stored per month is 0.03 USD, and the price for storing 1 GB of logs per month is also 0.03 USD.

The CloudWatch pricing page includes this note:

*** Data archived by CloudWatch Logs includes 26 bytes of metadata per log event and is compressed using gzip level 6 compression. Archived data charges are based on the sum of the metadata and compressed log data size.
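The note above implies a simple billing formula: archived size is roughly the gzip-level-6 compressed payload plus 26 bytes of metadata per log event. A minimal sketch of that arithmetic (the sample log lines, event count, and the choice to add metadata after compression are all illustrative assumptions, not AWS's documented internals):

```python
import gzip

# Hypothetical log events; real CloudWatch events would vary in size.
events = [
    f"2019-12-10T11:37:55Z request_id={i} status=200 path=/api/items\n".encode()
    for i in range(1000)
]

raw = b"".join(events)
compressed = gzip.compress(raw, compresslevel=6)  # gzip level 6, per the pricing note

METADATA_PER_EVENT = 26  # bytes of metadata per log event, per the pricing note
billable_bytes = len(compressed) + METADATA_PER_EVENT * len(events)

print(f"raw: {len(raw)} B, compressed: {len(compressed)} B, billable: {billable_bytes} B")
```

Repetitive log text compresses very well under gzip, which is consistent with the "3 cents per 10 GB" effective rate quoted below the note.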

According to Henry Hahn's (AWS) presentation on CloudWatch, it is "3 cents per GB and we compress it," ... "so you get 3 cents per 10 GB".

This makes me believe they store it on AWS S3.



Source: https://stackoverflow.com/questions/25093946/what-aws-cloudwatch-logs-are-using-for-storage
