azure-storage

What Content-Type and x-ms-version should be used to load a file into Azure Data Lake Storage Gen2?

℡╲_俬逩灬. submitted on 2020-06-17 14:16:27
Question: I have to load a data lake file (CSV format) into Azure Data Lake Storage Gen2 using a Logic App. I created the Logic App with HTTP actions and was able to create the file and append the data; for the next HTTP action I need to give the length. What Content-Type should be used to load files into Data Lake Storage Gen2? I'm getting an error like "The uploaded data is not contiguous or the position query parameter value is not equal to the length of the file after appending the uploaded data" and error code:
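The Gen2 (dfs) endpoint expects a Create → Append → Flush sequence, and the error quoted above usually means the position sent on the flush (or on a later append) does not equal the number of bytes already appended. A minimal C# sketch of the same three HTTP calls the Logic App issues follows; the account, filesystem, path and bearer token are placeholders, not values from the question.

```csharp
// Sketch of the Data Lake Storage Gen2 REST sequence: Create -> Append -> Flush.
// Account, filesystem, path and the bearer token are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class AdlsGen2RestSketch
{
    static async Task Main()
    {
        const string account    = "mystorageaccount";   // hypothetical
        const string fileSystem = "myfilesystem";        // hypothetical
        const string path       = "folder/data.csv";     // hypothetical
        const string token      = "<aad-bearer-token>";
        string baseUrl = $"https://{account}.dfs.core.windows.net/{fileSystem}/{path}";

        byte[] payload = Encoding.UTF8.GetBytes("col1,col2\n1,2\n");

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
        http.DefaultRequestHeaders.Add("x-ms-version", "2019-12-12"); // any recent service version

        // 1) Create the (empty) file.
        using (var create = new HttpRequestMessage(HttpMethod.Put, $"{baseUrl}?resource=file"))
        {
            create.Content = new ByteArrayContent(Array.Empty<byte>()); // Content-Length: 0
            (await http.SendAsync(create)).EnsureSuccessStatusCode();
        }

        // 2) Append data at position 0. The body's Content-Type can be
        //    application/octet-stream (or text/csv); what matters is Content-Length.
        using (var append = new HttpRequestMessage(new HttpMethod("PATCH"),
                   $"{baseUrl}?action=append&position=0"))
        {
            append.Content = new ByteArrayContent(payload);
            append.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
            (await http.SendAsync(append)).EnsureSuccessStatusCode();
        }

        // 3) Flush. The position must equal the total number of bytes appended so far;
        //    a mismatch produces the "uploaded data is not contiguous / position ...
        //    not equal to the length of the file" error.
        using (var flush = new HttpRequestMessage(new HttpMethod("PATCH"),
                   $"{baseUrl}?action=flush&position={payload.Length}"))
        {
            flush.Content = new ByteArrayContent(Array.Empty<byte>());
            (await http.SendAsync(flush)).EnsureSuccessStatusCode();
        }
    }
}
```

In the Logic App HTTP actions the same rule applies: the position on the flush call must be the running total of all bytes appended, with an x-ms-version such as 2019-12-12 and application/octet-stream (or text/csv) as the append body's Content-Type.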

Performance of Azure SDK v12 vs Storage Data Movement Library?

雨燕双飞 submitted on 2020-06-17 09:45:06
Question: I know that the Storage Data Movement Library is supposed to be faster when uploading and downloading files to and from Blob storage, but I am not seeing its performance benefit compared to Azure SDK v12. I got an average of 37.463 seconds with Azure SDK v12 and 41.863 seconds with the Storage Data Movement Library (SDML). Here is the code using SDML: namespace FunctionApp { using Microsoft.AspNetCore.Mvc; using Microsoft.Azure.Storage; using Microsoft.Azure.Storage.Blob; using
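Both libraries split large files into blocks and upload them in parallel, so with default settings a few seconds of difference on a single upload is often within run-to-run variance; what usually moves the needle is the concurrency each library is allowed. A hedged sketch of where that is configured in each library; connection string, container and file names are placeholders, not the question's code.

```csharp
// Where upload parallelism is tuned in Azure SDK v12 vs. the Data Movement Library.
// Connection string, container and file paths below are placeholders.
using System;
using System.Net;
using System.Threading.Tasks;
using Azure.Storage;                           // StorageTransferOptions
using Azure.Storage.Blobs;                     // Azure SDK v12
using Azure.Storage.Blobs.Models;
using Microsoft.Azure.Storage;                 // SDML sits on the v11 client
using Microsoft.Azure.Storage.Blob;
using Microsoft.Azure.Storage.DataMovement;

class UploadComparisonSketch
{
    const string ConnectionString = "<connection-string>";
    const string ContainerName    = "uploads";
    const string LocalPath        = @"C:\temp\bigfile.bin";

    static async Task UploadWithV12()
    {
        var blob = new BlobClient(ConnectionString, ContainerName, "bigfile-v12.bin");
        await blob.UploadAsync(LocalPath, new BlobUploadOptions
        {
            TransferOptions = new StorageTransferOptions
            {
                MaximumConcurrency  = Environment.ProcessorCount * 2, // parallel block uploads
                MaximumTransferSize = 8 * 1024 * 1024                 // 8 MiB blocks
            }
        });
    }

    static async Task UploadWithSdml()
    {
        // SDML throughput is sensitive to these two settings.
        ServicePointManager.DefaultConnectionLimit = Environment.ProcessorCount * 8;
        TransferManager.Configurations.ParallelOperations = Environment.ProcessorCount * 2;

        var account   = CloudStorageAccount.Parse(ConnectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference(ContainerName);
        CloudBlockBlob blob = container.GetBlockBlobReference("bigfile-sdml.bin");

        await TransferManager.UploadAsync(LocalPath, blob);
    }
}
```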

Renaming Spark output CSV in Azure Blob Storage

主宰稳场 submitted on 2020-06-17 02:56:48
Question: I have a Databricks notebook set up that works as follows: PySpark connection details to the Blob storage account; read the file into a Spark dataframe; convert to a pandas DF; data modelling on the pandas DF; convert back to a Spark DF; write to Blob storage as a single file. My problem is that you cannot name the output file, and I need a static CSV filename. Is there a way to rename this in PySpark? ## Blob Storage account information storage_account_name = "" storage_account_access_key = "" ## File
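Spark always writes its own part-*.csv files into the output directory, so the usual workaround is to copy the single part file to the desired blob name afterwards and delete the original; inside the notebook this can be done with dbutils.fs.cp and dbutils.fs.rm. The sketch below illustrates the same copy-then-delete step with the .NET Blob SDK; the container and folder names are hypothetical, not from the question.

```csharp
// Copy-then-delete "rename": find the single part-*.csv Spark wrote under the output
// folder, copy it server-side to a static name, then remove the original.
// Container and folder names are hypothetical.
using System;
using System.Linq;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

class RenameSparkOutputSketch
{
    static async Task Main()
    {
        var container = new BlobContainerClient("<connection-string>", "output");

        // Spark writes e.g. report/part-00000-<uuid>.csv (plus _SUCCESS markers).
        var partBlob = container.GetBlobs(prefix: "report/")
            .First(b => b.Name.Contains("part-") && b.Name.EndsWith(".csv"));

        var source = container.GetBlobClient(partBlob.Name);
        var target = container.GetBlobClient("report/report.csv");

        // Server-side copy, then delete the original once the copy has finished.
        var copy = await target.StartCopyFromUriAsync(source.Uri);
        await copy.WaitForCompletionAsync();
        await source.DeleteAsync();
    }
}
```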

Azure table storage names - invalid characters

不想你离开。 submitted on 2020-06-16 03:05:33
Question: I have the following table names in Azure table storage. Table names are generated automatically in my application and then created using table.CreateIfNotExists(tableName). Some work and some don't. When I dig into the error, the extended error information tells me that the resource name contains invalid characters; however, I am at a loss to work out what is invalid in the failing names. Can anyone spot it? 8836461cc98249bea59dc5f6790d40edstk365developmentusers – the specified resource
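Table names must be 3–63 characters, letters and digits only, and must begin with a letter; a GUID-prefixed name like the one quoted starts with a digit, which alone makes the service reject it as containing "invalid characters". A small pre-check sketch follows; the first candidate is the name from the question, the second is a hypothetical rearrangement that puts the letters first.

```csharp
// Guard mirroring the documented table-name rules (3-63 alphanumeric characters,
// must start with a letter) before calling CreateIfNotExists.
using System;
using System.Text.RegularExpressions;

class TableNameCheck
{
    static readonly Regex ValidTableName = new Regex("^[A-Za-z][A-Za-z0-9]{2,62}$");

    static void Main()
    {
        string[] candidates =
        {
            "8836461cc98249bea59dc5f6790d40edstk365developmentusers", // starts with a digit -> rejected
            "stk365developmentusers8836461cc98249bea59dc5f6790d40ed", // starts with a letter -> accepted
        };

        foreach (var name in candidates)
            Console.WriteLine($"{name}: {(ValidTableName.IsMatch(name) ? "valid" : "invalid")}");
    }
}
```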

Append to CloudBlockBlob stream

£可爱£侵袭症+ submitted on 2020-06-11 20:05:19
Question: We have a file system abstraction that allows us to easily switch between local and cloud (Azure) storage. For reading and writing files we have the following members: Stream OpenRead(); Stream OpenWrite(); Part of our application "bundles" documents into one file. For our local storage provider, OpenWrite returns an appendable stream: public Stream OpenWrite() { return new FileStream(fileInfo.FullName, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite, BufferSize, useAsync: true); }
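A block blob does not expose an appendable write stream, but if switching blob type is acceptable, an append blob gives exactly the OpenWrite-style stream the abstraction needs. A minimal sketch, assuming hypothetical container and blob names:

```csharp
// Appendable OpenWrite backed by an append blob instead of a block blob.
// Container and blob names are placeholders; the returned CloudBlobStream behaves
// like the local FileStream: bytes written are appended to the blob.
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

class AppendBlobOpenWriteSketch
{
    public static async Task<Stream> OpenWriteAsync(
        string connectionString, string containerName, string blobName)
    {
        var container = CloudStorageAccount.Parse(connectionString)
            .CreateCloudBlobClient()
            .GetContainerReference(containerName);

        CloudAppendBlob blob = container.GetAppendBlobReference(blobName);

        // Create the blob on first write, then keep appending to the existing one.
        bool exists = await blob.ExistsAsync();
        return await blob.OpenWriteAsync(!exists); // createNew only when it doesn't exist yet
    }
}
```

If the blob must remain a block blob, the equivalent trick is to stage the new data with PutBlockAsync and re-commit the existing block IDs plus the new one with PutBlockListAsync.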

SharedKeyCredential is not a constructor - Azure Blob Storage + Node.js

六眼飞鱼酱① submitted on 2020-05-30 08:41:33
Question: I'm trying to delete an image in my aucitonImages container, but when I execute the function from Postman, I get SharedKeyCredential is not a constructor. I've been following the documentation and I think I have everything set up, but I don't see what's different in my code from the docs. I appreciate any help! app.delete("/api/removeauctionimages", upload, async (req, res, next) => { const { ContainerURL, ServiceURL, StorageURL, SharedKeyCredential } = require("@azure/storage-blob"); const
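ContainerURL, ServiceURL, StorageURL and SharedKeyCredential are most likely the v10-preview exports of @azure/storage-blob; in the v12 package the credential class is StorageSharedKeyCredential and the *URL classes were replaced by BlobServiceClient/ContainerClient, so destructuring SharedKeyCredential from a v12 install yields undefined, hence "is not a constructor". The same rename exists in the .NET v12 SDK; for reference, a hedged sketch of the v12 shared-key-plus-delete pattern (account, key, container and blob names are placeholders, not the question's values):

```csharp
// v12 pattern: StorageSharedKeyCredential (not SharedKeyCredential) plus
// service/container clients instead of the old *URL classes. All names are placeholders.
using System;
using System.Threading.Tasks;
using Azure.Storage;
using Azure.Storage.Blobs;

class DeleteBlobV12Sketch
{
    static async Task Main()
    {
        var credential = new StorageSharedKeyCredential("myaccount", "<account-key>");
        var service    = new BlobServiceClient(
            new Uri("https://myaccount.blob.core.windows.net"), credential);

        var container = service.GetBlobContainerClient("auctionimages");
        await container.DeleteBlobIfExistsAsync("photo-123.png");
    }
}
```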

Azure Data Factory connecting to Blob Storage via Access Key

佐手、 submitted on 2020-05-29 04:12:27
Question: I'm trying to build a very basic data flow in Azure Data Factory, pulling a JSON file from Blob storage, performing a transformation on some columns, and storing it in a SQL database. I originally authenticated to the storage account using Managed Identity, but I get the error below when attempting to test the connection to the source: com.microsoft.dataflow.broker.MissingRequiredPropertyException: account is a required property for [myStorageAccountName]. com.microsoft.dataflow.broker