azure-table-storage

Azure Worker role Inner Exception of 'One of the request inputs is out of range.'

你离开我真会死。 Submitted on 2019-12-09 19:54:29
Question: I'm calling the CloudTableClient.CreateTableIfNotExist method in my worker role and I'm getting an exception with an inner exception of 'One of the request inputs is out of range.' I did a little research and found that this is caused by giving the table an illegal name; however, I've tried naming my table several different names and none has succeeded. Some of the table names I've tried are: exceptions, exceptioninfo, listoferrors. All of them have failed with the same error. EDIT Here
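The documented table-name rules (3-63 characters, alphanumeric only, first character a letter, case-insensitive) can be checked up front. A minimal, language-agnostic sketch of such a validator in Python; note that all three names from the question pass these rules, which suggests the "out of range" input is something other than the table name, such as a character in a PartitionKey or RowKey, though that is only an assumption:

```python
import re

# Azure Storage table-name rules: 3-63 characters, alphanumeric only,
# and the first character must be a letter (names are case-insensitive).
TABLE_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]{2,62}$")

def is_valid_table_name(name: str) -> bool:
    """Return True if `name` satisfies the documented table-name rules."""
    return TABLE_NAME_RE.fullmatch(name) is not None

for candidate in ["exceptions", "exceptioninfo", "listoferrors",
                  "list-of-errors", "1errors", "ab"]:
    print(candidate, is_valid_table_name(candidate))
```

The three names from the question all validate; the hyphenated, digit-prefixed, and too-short candidates do not.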

How to chain Azure Data Factory pipelines

旧城冷巷雨未停 Submitted on 2019-12-09 12:58:16
Question: I have a data factory with multiple pipelines, and each pipeline has around 20 copy activities to copy Azure tables between 2 storage accounts. Each pipeline handles a snapshot of each Azure table, hence I want to run the pipelines sequentially to avoid the risk of overwriting the latest data with old data. I know that by giving the first pipeline's output as input to the 2nd pipeline we can achieve this. But as I have many activities in a pipeline, I am not sure which activity will complete last. Is there
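The chaining idea described above amounts to a dependency rule: a pipeline may start only once every dataset it consumes has been produced by an earlier pipeline. A small sketch of that ordering logic, with hypothetical pipeline and dataset names (this models the concept, not the Data Factory API):

```python
def run_in_dependency_order(pipelines):
    """pipelines: dict of name -> {"inputs": set of dataset names,
    "outputs": set of dataset names}. Returns an execution order in
    which a pipeline runs only after every dataset it consumes exists."""
    produced, order, pending = set(), [], dict(pipelines)
    while pending:
        ready = [n for n, p in pending.items() if p["inputs"] <= produced]
        if not ready:
            raise ValueError("circular or unsatisfiable dependency")
        for name in sorted(ready):
            order.append(name)
            produced |= pending.pop(name)["outputs"]
    return order

# Hypothetical snapshot pipelines: day 2 consumes what day 1 produced.
pipelines = {
    "CopySnapshotDay2": {"inputs": {"tablesDay1"}, "outputs": {"tablesDay2"}},
    "CopySnapshotDay1": {"inputs": set(), "outputs": {"tablesDay1"}},
}
print(run_in_dependency_order(pipelines))  # Day1 runs before Day2
```

Wiring the first pipeline's output dataset as the second pipeline's input enforces exactly this ordering, regardless of which individual copy activity finishes last.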

Are there any limits on the number of Azure Storage Tables allowed in one account?

半城伤御伤魂 Submitted on 2019-12-08 22:02:55
Question: I'm currently trying to store a fairly large and dynamic data set. My current design is tending towards a solution where I will create a new table every few minutes - this means every table will be quite compact, it will be easy for me to search my data (I don't need everything in one table), and it should make it easy for me to delete stale data. I've looked and I can't see any documented limits - but I wanted to check: is there any limit on the number of tables allowed within one Azure
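The "new table every few minutes" design needs a deterministic naming scheme so writers and readers agree on which table a timestamp belongs to, and table names must stay alphanumeric. A sketch of one such scheme (the prefix and 5-minute bucket size are assumptions for illustration):

```python
from datetime import datetime

BUCKET_MINUTES = 5  # assumed rollover interval

def table_name_for(ts: datetime, prefix: str = "events") -> str:
    """Map a timestamp to its time-bucket table name. Only digits are
    appended to the prefix, keeping the name alphanumeric as required."""
    bucket = ts.minute - ts.minute % BUCKET_MINUTES
    return f"{prefix}{ts:%Y%m%d%H}{bucket:02d}"

print(table_name_for(datetime(2019, 12, 8, 21, 57)))  # events201912082155
```

Deleting stale data then becomes dropping whole tables older than the retention window, which is far cheaper than deleting entities one by one.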

Azure table storage and caching

戏子无情 Submitted on 2019-12-08 19:39:30
Question: Is it worth caching data from Azure Table storage with the Azure Caching Preview? Or is the table storage fast enough in large-scale applications? Thanks Answer 1: The short answer is it depends. In the application I am currently working on, there is some information that we cache both to handle the latency of retrieving data from Table Storage and to accommodate the desired number of transactions per second. We started out serving the information from Table Storage and moved to caching
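The trade-off the answer describes is usually implemented as a read-through cache: serve hot entities from memory, and only fall back to Table Storage on a miss or after the entry expires. A minimal sketch of that pattern, with the storage call stubbed out (names and TTL are assumptions):

```python
import time

class ReadThroughCache:
    """Minimal read-through cache with a TTL, sketching the trade-off:
    hot keys skip the storage round-trip entirely."""
    def __init__(self, fetch, ttl_seconds=60):
        self._fetch = fetch     # e.g. a Table Storage point query
        self._ttl = ttl_seconds
        self._store = {}        # key -> (value, expiry)

    def get(self, key):
        value, expiry = self._store.get(key, (None, 0.0))
        if time.monotonic() < expiry:
            return value        # cache hit: no storage round-trip
        value = self._fetch(key)  # cache miss: go to Table Storage
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value

calls = []
def fake_table_query(key):      # stand-in for the real point query
    calls.append(key)
    return {"RowKey": key}

cache = ReadThroughCache(fake_table_query)
cache.get("a")
cache.get("a")
print(len(calls))  # 1 - the second read was served from cache
```

Whether the added cache layer pays off depends on read repetition and the transactions-per-second target, exactly as the answer says.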

Create a TableEntity with Array or List property?

半腔热情 Submitted on 2019-12-08 19:38:04
Question: I have stored in an Azure Table some enumerations like this:

pk  rk |  en     fr     de    ...
foo  1 | 'Eune' 'Fune' 'Dune' ...
foo  2 | 'Edoe' 'Fdoe' 'Ddoe' ...
bar  1 | 'Unee' 'Unef' 'Trid' ...
bar  2 | 'Diee' 'Dief' 'Died' ...
bar  3 | 'Trie' 'Tref' 'Trid' ...

en, fr, de, etc. are the language codes and, respectively, the column names in the table. What kind of TableEntity should I create in order to load it properly? public class FooEntity : TableEntity { public Dictionary<string, string> Descriptions
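Table Storage has no nested properties, so a dictionary property has to be flattened into one column per language code on write and rebuilt on read (in the .NET SDK this is typically done by overriding the entity's read/write serialization). A language-agnostic sketch of the flatten/rebuild round trip:

```python
def write_entity(pk, rk, descriptions):
    """Flatten a {language_code: text} dict into one flat column per
    code - the shape Table Storage actually stores."""
    entity = {"PartitionKey": pk, "RowKey": rk}
    entity.update(descriptions)
    return entity

def read_entity(entity):
    """Rebuild the dict from every non-key column."""
    keys = {"PartitionKey", "RowKey"}
    descriptions = {k: v for k, v in entity.items() if k not in keys}
    return entity["PartitionKey"], entity["RowKey"], descriptions

entity = write_entity("foo", "1", {"en": "Eune", "fr": "Fune", "de": "Dune"})
pk, rk, d = read_entity(entity)
print(d["fr"])  # Fune
```

New language columns can then be added without any schema change, since each entity simply gains another flat property.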

Get all records from azure table storage

社会主义新天地 Submitted on 2019-12-08 14:46:35
Question: Using this code block:

try {
    StorageCredentials creds = new StorageCredentials(accountName, accountKey);
    CloudStorageAccount account = new CloudStorageAccount(creds, useHttps: true);
    CloudTableClient client = account.CreateCloudTableClient();
    CloudTable table = client.GetTableReference("serviceAlerts");
    TableOperation retrieveOperation = TableOperation.Retrieve<ServiceAlertsEntity>("ServiceAlerts", "b9ccd839-dd99-4358-b90f-46781b87f933");
    TableResult query = table.Execute(retrieveOperation);
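The code above issues a point query: TableOperation.Retrieve fetches exactly one entity by its (PartitionKey, RowKey) pair, so getting all records requires a query over the partition instead. A sketch of the distinction, using an in-memory stand-in for the table (row keys are hypothetical):

```python
# In-memory stand-in for a table, keyed the way Table Storage indexes
# entities: (PartitionKey, RowKey) is the only indexed, unique key.
table = {
    ("ServiceAlerts", "row-1"): {"Message": "db down"},  # hypothetical rows
    ("ServiceAlerts", "row-2"): {"Message": "db up"},
}

def retrieve(pk, rk):
    """Point query: O(1) lookup of a single entity,
    like TableOperation.Retrieve."""
    return table.get((pk, rk))

def query_partition(pk):
    """Partition query: every entity under one PartitionKey, like a
    TableQuery filtered on PartitionKey eq 'ServiceAlerts'."""
    return [e for (p, _), e in table.items() if p == pk]

print(retrieve("ServiceAlerts", "row-1"))
print(len(query_partition("ServiceAlerts")))  # 2
```

In the .NET SDK the partition query is expressed as a filtered TableQuery rather than a Retrieve operation.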

Storing DateTime in azure table storage

浪子不回头ぞ Submitted on 2019-12-08 11:21:11
Question: I am using the default example for storing a datetime value in table storage. One of the fields is calculated as follows: DateTime accMonth = new DateTime(DateTime.Now.Year, DateTime.Now.Month, 1); Usually the above means a date with the time being 00:00. However, when I save this in table storage I see the time as 2018-04-01T18:30:00.000Z, which looks strange to me! Anyone know why? Answer 1: You're getting a different value because you're creating a date/time with the local time zone (India is GMT+5:30). In Azure
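The answer's point can be demonstrated directly: Table Storage stores DateTime values as UTC, and midnight in India Standard Time (GMT+5:30) is 18:30 UTC of the previous day. A small Python demonstration of the same conversion:

```python
from datetime import datetime, timedelta, timezone

ist = timezone(timedelta(hours=5, minutes=30))  # India Standard Time

# Local midnight on the first of the month, as in the question's code
local_midnight = datetime(2018, 4, 1, 0, 0, tzinfo=ist)
as_utc = local_midnight.astimezone(timezone.utc)
print(as_utc.isoformat())  # 2018-03-31T18:30:00+00:00
```

The stored value is the same instant, just rendered in UTC; constructing the DateTime explicitly as UTC (or converting before saving) avoids the surprise.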

WCF Data Service and Azure Table Storage: How can I use PartitionKey / RowKey as primary keys

空扰寡人 Submitted on 2019-12-08 09:58:43
Question: Why does the following code for my entity "Person" generate an error in my WCF Data Service? [System.Data.Services.Common.DataServiceKey("PartitionKey", "RowKey")] public class Person : TableServiceEntity { public string Name { get; set; } public int Age { get; set; } ... etc Error: Request Error The server encountered an error processing the request. The exception message is 'On data context type 'PersonDataServiceContext', there is a top IQueryable property 'Person' whose element type is

Synchronous request in Windows Azure?

不问归期 Submitted on 2019-12-08 09:39:18
Question: In my server code, the variable invites is undefined outside of the success function.

function getInvites(id){
    var InvitesTable = tables.getTable("Invites").where({"PlanID": id}).select("UserID","Attending");
    var invites;
    InvitesTable.read({
        success: function(resultss) {
            invites = resultss;
            console.log(invites); //works here
        }});
    console.log(invites); //undefined here
}

From similar questions, I realize it's because it is asynchronous. So the success function call is run after the console
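Because the read is asynchronous, invites is assigned only after the outer function has already returned; the usual fix is to pass the result onward via a callback instead of reading the variable afterward. A Python sketch of that pattern, with the asynchronous read simulated by a queued callback (all names are hypothetical):

```python
# Simulated async read: the success callback is queued for later,
# not invoked during the call itself (mirrors InvitesTable.read({...})).
pending = []

def read_invites_async(plan_id, success):
    pending.append(lambda: success([{"UserID": "u1", "Attending": True}]))

def get_invites(plan_id, callback):
    """Instead of returning invites, hand them to the caller's callback."""
    read_invites_async(plan_id, success=callback)

results = []
get_invites("plan-42", callback=results.extend)
print(results)        # [] - the read has not completed yet
for task in pending:  # drain the simulated event loop
    task()
print(results)        # the invites are available only now
```

Anything that needs the invites (logging, responding to the client) must therefore run inside, or be called from, the success callback.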

Azure Storage Table design with multiple query points

随声附和 Submitted on 2019-12-08 03:47:10
Question: I have the following Azure Storage Table.

PositionData table:
PartitionKey: ClientID + VehicleID
RowKey: GUID
Properties: ClientID, VehicleID, DriverID, Date, GPSPosition

Each vehicle will log up to 1,000,000 entities per year per client. Each client could have thousands of vehicles. So, I decided to partition by ClientID + VehicleID so as to have small, manageable partitions. When querying by ClientID and VehicleID, the operation performs quickly because we are narrowing the search down to one
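The composite key makes the client + vehicle query a single-partition read, while any query on a property that is not part of the key (Date, DriverID) must scan every partition. A sketch of why, using an in-memory model of how storage groups entities by PartitionKey (the sample entities are hypothetical):

```python
from collections import defaultdict

entities = [  # hypothetical sample rows
    {"ClientID": "c1", "VehicleID": "v1", "Date": "2019-12-01", "GPSPosition": (1.0, 2.0)},
    {"ClientID": "c1", "VehicleID": "v2", "Date": "2019-12-01", "GPSPosition": (3.0, 4.0)},
    {"ClientID": "c2", "VehicleID": "v1", "Date": "2019-12-02", "GPSPosition": (5.0, 6.0)},
]

table = defaultdict(list)  # PartitionKey -> entities, as storage groups them
for e in entities:
    table[e["ClientID"] + e["VehicleID"]].append(e)

def by_client_vehicle(client_id, vehicle_id):
    """Fast path: touches exactly one partition."""
    return table[client_id + vehicle_id]

def by_date(date):
    """Slow path: Date is in no key, so every partition must be scanned."""
    return [e for part in table.values() for e in part if e["Date"] == date]

print(len(by_client_vehicle("c1", "v1")))  # 1
print(len(by_date("2019-12-01")))          # 2 - required a full table scan
```

A common way to serve the extra query points is to write the same data again into secondary "index" tables keyed by the other properties, trading extra writes for fast reads, though that duplication is a design choice, not a requirement.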