Strategy for storing application logs in Azure Table Storage

Submitted by 跟風遠走 on 2019-12-06 02:29:45

Question


I am trying to determine a good strategy for storing logging information in Azure Table Storage. I currently have the following:

PartitionKey: the name of the log.

RowKey: inverted DateTime ticks, so that the most recent entries sort first.
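A minimal sketch of how such an inverted-ticks RowKey could be generated, assuming .NET-style ticks (100-nanosecond intervals since 0001-01-01 UTC) and subtracting from `DateTime.MaxValue.Ticks`; the helper names are illustrative, not from the original post:

```python
from datetime import datetime, timezone

TICKS_PER_SECOND = 10_000_000           # one .NET tick = 100 nanoseconds
DOTNET_EPOCH = datetime(1, 1, 1, tzinfo=timezone.utc)
MAX_TICKS = 3_155_378_975_999_999_999   # DateTime.MaxValue.Ticks in .NET

def to_ticks(dt: datetime) -> int:
    """Convert a UTC datetime to .NET-style ticks using exact integer math."""
    delta = dt - DOTNET_EPOCH
    return (delta.days * 86_400 + delta.seconds) * TICKS_PER_SECOND \
        + delta.microseconds * 10

def inverted_row_key(dt: datetime) -> str:
    """Zero-padded inverted ticks: later timestamps get smaller keys,
    so the newest log entries sort first lexicographically."""
    return f"{MAX_TICKS - to_ticks(dt):019d}"

earlier = inverted_row_key(datetime(2015, 1, 1, 12, 0, tzinfo=timezone.utc))
later = inverted_row_key(datetime(2015, 1, 1, 13, 0, tzinfo=timezone.utc))
assert later < earlier  # lexicographic order is reversed in time
```

Zero-padding to a fixed width (19 digits) matters because Table Storage compares RowKeys as strings.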

The only issue here is that partitions could get very large (millions of entities), and they will keep growing over time.

That being said, the queries performed will always include the PartitionKey (no table scan) AND a RowKey range filter (a minor partition scan).

For example (in a natural language):

where `PartitionKey` = "MyApiLogs" and
where `RowKey` is between "01-01-15 12:00" and "01-01-15 13:00"
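That time-window query could be sketched as an OData-style filter string. One subtlety worth noting: because the ticks are inverted, the endpoints swap, i.e. the *end* of the window maps to the *smaller* RowKey. The helper names below are assumptions for illustration:

```python
from datetime import datetime, timezone

TICKS_PER_SECOND = 10_000_000
DOTNET_EPOCH = datetime(1, 1, 1, tzinfo=timezone.utc)
MAX_TICKS = 3_155_378_975_999_999_999  # DateTime.MaxValue.Ticks in .NET

def inverted_row_key(dt: datetime) -> str:
    """Zero-padded inverted .NET-style ticks (newest first)."""
    delta = dt - DOTNET_EPOCH
    ticks = (delta.days * 86_400 + delta.seconds) * TICKS_PER_SECOND \
        + delta.microseconds * 10
    return f"{MAX_TICKS - ticks:019d}"

def time_window_filter(partition_key: str,
                       start: datetime, end: datetime) -> str:
    """Build an OData-style $filter for entities logged in [start, end].
    With inverted ticks, `end` yields the smaller key, so it goes on the
    `ge` side and `start` on the `le` side."""
    return (f"PartitionKey eq '{partition_key}' "
            f"and RowKey ge '{inverted_row_key(end)}' "
            f"and RowKey le '{inverted_row_key(start)}'")

f = time_window_filter(
    "MyApiLogs",
    datetime(2015, 1, 1, 12, 0, tzinfo=timezone.utc),
    datetime(2015, 1, 1, 13, 0, tzinfo=timezone.utc))
```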

Provided that the query filters on both PartitionKey and RowKey, I understand that the size of the partition doesn't matter.


Answer 1:


Take a look at our new Table Design Patterns Guide, specifically the log-data anti-pattern, as it discusses this scenario and some alternatives. Often when people write log entries they use a date for the PartitionKey, which makes that partition hot because all writes go to a single partition. Quite often blobs end up being a better destination for log data, since people typically process the logs in batches anyway; the guide discusses this as an option.
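As a rough illustration of the batch-to-blob idea, one could buffer log lines in memory and flush them as a single write. The `upload` callable below is a stand-in for whatever blob write an Azure SDK provides, not a real API; the class and parameter names are assumptions:

```python
class BlobLogBatcher:
    """Buffer log lines and flush them in one write per batch.

    `upload` is a hypothetical stand-in for a blob append/put call;
    here it simply receives the joined text of one batch.
    """

    def __init__(self, upload, max_lines: int = 1000):
        self.upload = upload
        self.max_lines = max_lines
        self.buffer: list[str] = []

    def log(self, line: str) -> None:
        """Queue a line; flush automatically when the batch is full."""
        self.buffer.append(line)
        if len(self.buffer) >= self.max_lines:
            self.flush()

    def flush(self) -> None:
        """Write out any buffered lines as a single batch."""
        if self.buffer:
            self.upload("\n".join(self.buffer))
            self.buffer.clear()

uploads = []
batcher = BlobLogBatcher(uploads.append, max_lines=2)
batcher.log("request started")
batcher.log("request finished")  # triggers a flush of both lines
```

Batching this way trades a small window of buffered (unflushed) data for far fewer storage operations, which is the usual reason logs end up in blobs rather than one-entity-per-row tables.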



Source: https://stackoverflow.com/questions/28605328/strategy-for-storing-application-logs-in-azure-table-storage
