azure-table-storage

Why isn't my TakeLimit honored by TableQuery?

帅比萌擦擦* submitted on 2019-12-03 07:10:22
I'd like to fetch the top n rows from my Azure table with a simple TableQuery, but with the code below all rows are fetched regardless of the limit I set with Take. What am I doing wrong?

    int entryLimit = 5;
    var table = GetFromHelperFunc();

    TableQuery<MyEntity> query = new TableQuery<MyEntity>()
        .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "MyPK"))
        .Take(entryLimit);

    List<MyEntity> entryList = new List<MyEntity>();
    TableQuerySegment<MyEntity> currentSegment = null;
    while (currentSegment == null || currentSegment.ContinuationToken != null)
    {
        currentSegment = table.ExecuteQuerySegmented(query,
            currentSegment != null ? currentSegment.ContinuationToken : null);
        entryList.AddRange(currentSegment.Results);
    }
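Take() sets the size of each page of results, not a cap on the whole query: every continuation-token round trip can return up to entryLimit more rows, and the loop above keeps following tokens until none is left, so it drains the table in chunks of five. A minimal sketch of one way to stop at the limit, assuming the same MyEntity type, query, and table reference as in the question:

    // Stop following continuation tokens once enough rows have arrived.
    // Take(entryLimit) only caps each segment; the loop must cap the total.
    int entryLimit = 5;
    var entries = new List<MyEntity>();
    TableContinuationToken token = null;
    do
    {
        TableQuerySegment<MyEntity> segment = table.ExecuteQuerySegmented(query, token);
        entries.AddRange(segment.Results);
        token = segment.ContinuationToken;
    } while (token != null && entries.Count < entryLimit);

    // Trim any overshoot from the last segment (requires using System.Linq).
    entries = entries.Take(entryLimit).ToList();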

Azure table storage - Simplest possible example

蹲街弑〆低调 submitted on 2019-12-03 06:03:41
Whenever I'm learning new technologies I like to write the simplest possible example. Usually this means a console app with the smallest number of references. I've been trying, with little success, to write an app that reads and writes to Azure Table Storage. I've used this how-to guide as a basis, but I try to do everything in the Main method. A similar approach worked well with blob storage, but table storage is giving me trouble. I was able to create a table with this code:

    static void Main(string[] args)
    {
        Microsoft.WindowsAzure.Storage.Table.CloudTableClient tableClient =
            new Microsoft…
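The excerpt is cut off above; for reference, here is a self-contained sketch of a console app that creates a table, writes one entity, and reads it back. It assumes Storage SDK 2.x; the connection string, table name, and entity type are illustrative, not taken from the question.

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    // A minimal entity: PartitionKey and RowKey come from TableEntity.
    class Entry : TableEntity
    {
        public string Text { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // "UseDevelopmentStorage=true" targets the local storage emulator.
            var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
            CloudTable table = account.CreateCloudTableClient().GetTableReference("demo");
            table.CreateIfNotExists();

            // Write one entity.
            table.Execute(TableOperation.Insert(
                new Entry { PartitionKey = "pk", RowKey = "rk", Text = "hello" }));

            // Read it back by partition and row key.
            var result = table.Execute(TableOperation.Retrieve<Entry>("pk", "rk"));
            Console.WriteLine(((Entry)result.Result).Text);
        }
    }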

About the Azure Table Storage 1 MB row limit: how does it count UTF-8 text?

你离开我真会死。 submitted on 2019-12-03 06:02:13
Let's first quote from MSDN: "Combined size of all of the properties in an entity cannot exceed 1MB" (for a row/entity). My question is: since everything is transferred as XML data, what is that 1 MB measured in? 1 MB of ASCII characters, 1 MB of UTF-8 characters, or something else? Sample:

    Row1: PartitionKey="A", RowKey="A", Data="A"
    Row2: PartitionKey="A", RowKey="A", Data="Ａ"   (a multi-byte Unicode 'A')

Are Row1 and Row2 the same size (in length), or is Row2.Length == Row1.Length + 1?

RyanFishman: Single columns such as "Data" in your example are limited to 64 KB of binary data and single rows are limited to 1 MB of data. String properties are stored and counted as UTF-16, so each character occupies two bytes regardless of how many bytes it would take in UTF-8…
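To make the accounting concrete, here is a sketch based on the size-estimation formula the Azure storage team has published for entities with string properties. The constants are that published approximation; treat them as an estimate, not a byte-exact guarantee.

    // Rough entity-size estimate for string-only entities:
    //   4 bytes
    // + 2 bytes per character of PartitionKey + RowKey (UTF-16)
    // + per property: 8 bytes overhead + 4 bytes + 2 bytes per character.
    static int EstimateSize(string pk, string rk, params string[] stringProps)
    {
        int size = 4 + (pk.Length + rk.Length) * 2;
        foreach (string value in stringProps)
            size += 8 + 4 + value.Length * 2;
        return size;
    }

    // "A" (U+0041) and a fullwidth "Ａ" (U+FF21) are each one UTF-16 code
    // unit, so both example rows above produce the same estimate.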

Multiple filter conditions in Azure Table Storage

半城伤御伤魂 submitted on 2019-12-03 04:28:29
How can I set multiple filter conditions on an Azure Table Storage query? This is what I've tried:

    string partitionFilter = TableQuery.GenerateFilterCondition("PartitionKey",
        QueryComparisons.Equal, "partition1");
    string date1 = TableQuery.GenerateFilterCondition("Date",
        QueryComparisons.GreaterThanOrEqual, "31-8-2013T14:15:14Z");
    string date2 = TableQuery.GenerateFilterCondition("Date",
        QueryComparisons.LessThanOrEqual, "31-8-2013T14:15:14Z");
    string finalFilter = TableQuery.CombineFilters(partitionFilter,
        TableOperators.And, date1);

This doesn't work because TableQuery.CombineFilters() only takes three arguments (two filters and one operator), so I can't combine all three conditions in one call…
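CombineFilters returns a plain filter string, so calls can be nested to AND together any number of conditions. A sketch, reusing the variables from the question; note also that for DateTime properties GenerateFilterConditionForDate compares values as dates rather than as strings, which the string comparison above would not.

    // Nest CombineFilters: (date1 AND date2), then AND with the partition filter.
    string dateFilter = TableQuery.CombineFilters(date1, TableOperators.And, date2);
    string finalFilter = TableQuery.CombineFilters(partitionFilter,
        TableOperators.And, dateFilter);

    // Typed date condition (illustrative value), instead of a string comparison:
    string typedDate = TableQuery.GenerateFilterConditionForDate(
        "Date", QueryComparisons.GreaterThanOrEqual,
        new DateTimeOffset(2013, 8, 31, 14, 15, 14, TimeSpan.Zero));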

Getting started with Azure storage: Blobs vs Tables vs SQL Azure

邮差的信 submitted on 2019-12-03 04:23:11
It's quite a topic, blobs vs tables vs SQL, and despite all I've read so far I still can't find proper reasoning on what to use when. We have a multi-tenant SaaS web application which we are about to move to Azure. We use a SQL Server 2008 database, and we store documents plus log information that belongs to the documents, kind of like Dropbox does. The forums state that you are better off using Azure Tables when you are dealing with "large" objects. We typically store hundreds of documents per user, where the size of the documents varies from 5 KB to 30 MB, with the vast majority around 1 MB. Are there…

Code First & Identity with Azure Table Storage

☆樱花仙子☆ submitted on 2019-12-03 03:54:56
Question: I'm working on a small web app and I've just hit the point in development where I need to start making database decisions. My original plan was to go with EF Code First and MSSQL on Azure because it simplifies the process of working with a database. However, when investigating my database hosting options on Azure, I discovered Azure Table Storage, which opened up the world of NoSQL to me. While the Internet is ablaze with chatter about the features of NoSQL, one of the biggest reasons I…

How to partition Azure tables used for storing logs

烂漫一生 submitted on 2019-12-03 03:12:19
We have recently updated our logging to use Azure Table Storage, which, owing to its low cost and high performance when querying by row and partition key, is highly suited to this purpose. We are trying to follow the guidelines given in the document "Designing a Scalable Partitioning Strategy for Azure Table Storage". As we are making a great number of inserts to this table (and, hopefully, an increasing number as we scale), we need to ensure that we don't hit our limits, resulting in logs being lost. We structured our design as follows: we have an Azure storage account per environment (DEV, TEST, PROD)…
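As a concrete illustration of one scheme that avoids a single hot partition (entirely hypothetical; the names and bucket count are not from the question): bucket log rows by hour, spread writes across a fixed number of sub-partitions, and use a row key that sorts newest-first.

    // Hypothetical partitioning sketch for append-heavy logs.
    // GetHashCode is only stable within one process; a real system would
    // use a deterministic hash of the log source.
    static string BuildPartitionKey(string source, DateTime utcNow, int buckets)
    {
        int bucket = Math.Abs(source.GetHashCode()) % buckets;
        return string.Format("{0:yyyyMMddHH}-{1:D2}", utcNow, bucket);
    }

    // Inverted ticks make newer rows sort first within a partition; the
    // GUID suffix avoids key collisions for same-tick entries.
    static string BuildRowKey(DateTime utcNow)
    {
        return string.Format("{0:D19}-{1}",
            DateTime.MaxValue.Ticks - utcNow.Ticks,
            Guid.NewGuid().ToString("N"));
    }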

Is Azure CloudTable thread-safe?

南笙酒味 submitted on 2019-12-03 01:23:33
I'm writing to Azure Table Storage using Storage SDK 2.0 from different threads (an ASP.NET application). Is the CloudTable object thread-safe? Can I initialize CloudStorageAccount, CloudTableClient and CloudTable only once (for example, in a static constructor) and then use them from different threads? Or is it better to create the CloudStorageAccount, CloudTableClient and CloudTable objects from scratch each time (like it's shown in this article)? Does it affect performance in any way? What is the preferred way of getting a CloudTable instance each time an operation is executed against the table?
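A sketch of the initialize-once approach, assuming these client objects are safe to share across threads, which is how the SDK documents them; the connection-string source and table name are illustrative. Lazy<T> makes the one-time initialization itself thread-safe.

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    static class Tables
    {
        // Built once, on first use, in a thread-safe way; shared thereafter.
        private static readonly Lazy<CloudTable> _table = new Lazy<CloudTable>(() =>
        {
            var account = CloudStorageAccount.Parse(
                Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING"));
            CloudTable table = account.CreateCloudTableClient().GetTableReference("mytable");
            table.CreateIfNotExists();
            return table;
        });

        public static CloudTable Instance { get { return _table.Value; } }
    }

    // Usage from any thread:
    //   Tables.Instance.Execute(someTableOperation);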

Azure Tables or SQL Azure?

放肆的年华 submitted on 2019-12-03 01:22:45
I am at the planning stage of a web application that will be hosted in Azure, with ASP.NET for the web site and Silverlight within the site for a rich user experience. Should I use Azure Tables or SQL Azure for storing my application data? Azure Table Storage appears to be less expensive than SQL Azure, and it is also more scalable. SQL Azure is easier to work with if you've been doing a lot of relational database work. If you were porting an application that was already using a SQL database, then moving it to SQL Azure would be the obvious choice, but that's the only…

Painfully slow Azure table insert and delete batch operations

狂风中的少年 submitted on 2019-12-03 01:10:28
Question: I am running into a huge performance bottleneck when using Azure Table Storage. My desire is to use tables as a sort of cache, so a long process may result in anywhere from hundreds to several thousand rows of data. The data can then be quickly queried by partition and row keys. Querying works pretty fast: extremely fast when using only partition and row keys, and a bit slower, but still acceptable, when also searching through properties for a particular match. However, both inserting…
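For the insert side, entity group transactions are the usual first lever: a batch takes up to 100 operations and every entity in it must share the same PartitionKey, and each batch is a single HTTP round trip. A sketch of grouping rows accordingly (the entity type and helper name are illustrative, not from the question):

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.WindowsAzure.Storage.Table;

    static void InsertInBatches(CloudTable table, IEnumerable<DynamicTableEntity> rows)
    {
        // Batches must be homogeneous in PartitionKey, so group first...
        foreach (var partition in rows.GroupBy(r => r.PartitionKey))
        {
            // ...then chunk each partition's rows into groups of at most 100.
            foreach (var chunk in partition
                .Select((row, i) => new { row, i })
                .GroupBy(x => x.i / 100, x => x.row))
            {
                var batch = new TableBatchOperation();
                foreach (var row in chunk)
                    batch.Insert(row);
                table.ExecuteBatch(batch);
            }
        }
    }

Other commonly cited levers for this kind of workload are disabling Nagle's algorithm and the Expect: 100-continue handshake (ServicePointManager.UseNagleAlgorithm = false; ServicePointManager.Expect100Continue = false;) and raising ServicePointManager.DefaultConnectionLimit before issuing parallel requests.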