azure-data-factory-2

SQL Server complains about invalid JSON

走远了吗 · Submitted on 2019-12-24 10:38:30

Question: I am writing an ETL tool using Azure Data Factory and Azure SQL Database. The Data Factory captures the output of a Mapping Data Flow and inserts it into the StatusMessage column of a SQL Server table (Audit.OperationsEventLog) as a string. The StatusMessage column is varchar(8000) and is intended to store data formatted as valid JSON.

    SELECT * FROM Audit.OperationsEventLog lg
    CROSS APPLY OPENJSON(lg.StatusMessage) dt

When I query the JSON string from the table using the query above, it …
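One quick way to reproduce the failure outside SQL Server is to check whether the captured string is strict JSON at all - a common pitfall is that serialized activity output ends up with single-quoted strings, which OPENJSON rejects. A minimal sketch in Python (the sample strings are illustrative, not from the question):

```python
import json

def is_valid_json(status_message):
    """Return True if the string would parse as JSON (what OPENJSON expects)."""
    try:
        json.loads(status_message)
        return True
    except (ValueError, TypeError):
        return False

# Single quotes are fine in many languages but are not valid JSON,
# which is a common reason SQL Server's OPENJSON rejects a value.
print(is_valid_json('{"rows": 42, "status": "ok"}'))  # True  (strict JSON)
print(is_valid_json("{'rows': 42, 'status': 'ok'}"))  # False (single quotes)
```

On the SQL side, `WHERE ISJSON(StatusMessage) = 0` (available since SQL Server 2016) should locate the offending rows before CROSS APPLY OPENJSON fails.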

In ADF V2 - how to append date ("yyyyMMdd") to filename dynamically for S3 dataset

喜欢而已 · Submitted on 2019-12-24 07:39:20

Question: I'm currently working to automate a pipeline in ADF v2 where the source data sits in S3. A new file is created daily, named like "data_20180829.csv". I have tried to use dynamic content to accomplish this in the fileName field of the Copy Data activity. However, even when I try something as simple as @{concat('data_','20180829.csv')} (which should resolve to the correct value), the source fails. Is there any way to see what the dynamic content will resolve to?

Answer 1: This should just be a …
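Assuming the goal is the daily-suffixed name itself, the ADF expression would typically be @{concat('data_', formatDateTime(utcnow(), 'yyyyMMdd'), '.csv')} (formatDateTime and utcnow are standard ADF expression functions, but verify the format string against your runtime). The value the dynamic content should resolve to can be sketched in Python:

```python
from datetime import datetime, timezone

def daily_filename(prefix="data_", when=None):
    """Build the daily 'data_yyyyMMdd.csv' name the S3 source produces."""
    when = when or datetime.now(timezone.utc)
    return f"{prefix}{when.strftime('%Y%m%d')}.csv"

# Pinning the date shows the shape the expression should resolve to.
print(daily_filename(when=datetime(2018, 8, 29)))  # data_20180829.csv
```

To see what dynamic content actually resolves to, a Debug run's activity input in the monitoring pane should show the evaluated value.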

Azure Data Factory deflate without creating a folder

穿精又带淫゛_ · Submitted on 2019-12-24 03:23:59

Question: I have a Data Factory v2 job which copies files from an SFTP server to an Azure Data Lake Gen2. There is a mix of .csv files and .zip files (each containing only one csv file). I have one dataset for copying the csv files and another for copying the zip files (with Compression type set to ZipDeflate). The problem is that ZipDeflate creates a new folder containing the csv file, and I need it to respect the folder hierarchy without creating any folders. Is this possible in Azure Data Factory? …
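One workaround worth testing is the copy activity sink's copyBehavior setting, which can flatten the extra folder level. A sketch of the sink fragment (the activity name is a placeholder, and whether FlattenHierarchy interacts with ZipDeflate as desired should be verified - note that flattened files get auto-generated names):

```json
{
    "name": "CopyZipFromSftp",
    "type": "Copy",
    "typeProperties": {
        "sink": {
            "type": "AzureBlobFSSink",
            "copyBehavior": "FlattenHierarchy"
        }
    }
}
```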

Disable activity in Azure Data factory pipeline without removing it

谁说胖子不能爱 · Submitted on 2019-12-23 15:28:03

Question: I am testing each of the activities in the pipeline and I want to disable some of them. Specifically, there is an activity that sends emails which I want to disable, because I want to see the output of the prior activities. Of course I don't want to remove the email-sending activity, since it is in the prod environment and was not developed by me. Is there any way to disable it?

Answer 1: You cannot disable one, but what you want to do is possible with the debug option in the editor. Just …

In DataFactory, what is a good strategy to migrate data into Dynamics365 using Dynamics Web API?

99封情书 · Submitted on 2019-12-20 06:09:24

Question: I need to migrate data to Dynamics 365 using Data Factory. The Dynamics 365 connector is not enough for me, since one of the requirements is to update only those attributes that have been modified since the last migration - not the whole record. The other requirement is that sometimes we have to set values to null in the destination. I believe I can do that by generating a different JSON per record and migrating them using the Web API. I thought of putting these calls in an Azure Function, but I …
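Independently of the connector, the "only modified attributes, plus explicit nulls" requirement can be sketched as a delta computation whose output becomes the Web API PATCH body (the function and field names here are illustrative):

```python
def build_patch_body(previous, current):
    """Return only the attributes that changed since the last migration.

    Attributes present before but now missing are sent as explicit nulls,
    which is how a Dynamics 365 Web API PATCH clears a field.
    """
    patch = {k: v for k, v in current.items() if previous.get(k) != v}
    cleared = {k: None for k in previous.keys() - current.keys()}
    return {**patch, **cleared}

old = {"name": "Contoso", "city": "Madrid", "phone": "555-0100"}
new = {"name": "Contoso", "city": "Sevilla"}
print(build_patch_body(old, new))  # {'city': 'Sevilla', 'phone': None}
```

Serializing this dictionary gives exactly the per-record JSON body the question describes, with unchanged attributes omitted.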

How to copy CosmosDb docs to Blob storage (each doc in single json file) with Azure Data Factory

放肆的年华 · Submitted on 2019-12-20 05:33:09

Question: I'm trying to back up my Cosmos DB storage using Azure Data Factory (v2). In general it's doing its job, but I want each doc in the Cosmos collection to correspond to a new JSON file in blob storage. With the following copy parameters I'm able to copy all docs in the collection into one file in Azure Blob storage:

    {
        "name": "ForEach_mih",
        "type": "ForEach",
        "typeProperties": {
            "items": {
                "value": "@pipeline().parameters.cw_items",
                "type": "Expression"
            },
            "activities": [
                {
                    "name": "Copy_mih",
                    "type": "Copy",
                    …
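The usual pattern for one blob per document is to iterate the document ids in the ForEach and pass each into a parameterized sink dataset, e.g. supplying @{concat(item().id, '.json')} from the copy activity. A sketch of the blob dataset fragment (dataset name, parameter name, and folder path are placeholders):

```json
{
    "name": "BlobPerDocument",
    "properties": {
        "parameters": {
            "docFileName": { "type": "String" }
        },
        "typeProperties": {
            "folderPath": "cosmos-backup",
            "fileName": {
                "value": "@dataset().docFileName",
                "type": "Expression"
            }
        }
    }
}
```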

Use ADF pipeline parameters as source-to-sink columns while mapping

与世无争的帅哥 · Submitted on 2019-12-20 03:52:13

Question: I have an ADF pipeline with a copy activity; I'm copying data from a blob storage CSV file to a SQL database, and this is working as expected. I need to map the name of the CSV file (which comes from pipeline parameters) and save it in the destination table. I'm wondering if there is a way to map parameters to destination columns.

Answer 1: Column names can't use parameters directly. But you can use a parameter for the whole structure property in the dataset and the columnMappings property in the copy activity. This might be …
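Following the answer, the translator's columnMappings can be fed from a pipeline parameter. A sketch of the copy activity fragment (the parameter name is a placeholder, and the exact shape of the mapping value should be checked against the TabularTranslator documentation):

```json
{
    "translator": {
        "type": "TabularTranslator",
        "columnMappings": {
            "value": "@pipeline().parameters.columnMappings",
            "type": "Expression"
        }
    }
}
```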

Azure data factory | incremental data load from SFTP to Blob

帅比萌擦擦* · Submitted on 2019-12-18 18:30:33

Question: I created a run-once Data Factory (V2) pipeline to load files (.lta.gz) from an SFTP server into an Azure blob to get historical data. Worked beautifully. Every day there will be several new files on the SFTP server (which cannot be manipulated or deleted). So I want to create an incremental load pipeline which checks daily for new files - and if there are any, copies them. Does anyone have tips on how to achieve this?

Answer 1: Thanks for using Data Factory! To incrementally load newly generated files on …
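One common design, since the source files cannot be deleted, is a high-water mark: persist the timestamp of the last successful load and copy only files modified after it. A sketch of the filtering step (the listing shape and ISO timestamps are assumptions; in ADF the same idea maps to the file-based source's modifiedDatetimeStart filter):

```python
def new_files(listing, last_loaded):
    """Filter an SFTP listing down to files modified after the last run.

    `listing` maps filename -> modified timestamp (ISO 8601 strings, which
    compare correctly as text); `last_loaded` is the persisted high-water mark.
    """
    return sorted(name for name, modified in listing.items()
                  if modified > last_loaded)

listing = {
    "a.lta.gz": "2019-12-17T02:00:00",
    "b.lta.gz": "2019-12-18T02:00:00",
    "c.lta.gz": "2019-12-19T02:00:00",
}
print(new_files(listing, "2019-12-17T23:59:59"))  # ['b.lta.gz', 'c.lta.gz']
```

After each successful run, the high-water mark is advanced to the newest modified timestamp that was copied.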

Azure Data Factory mapping 2 columns in one column

China☆狼群 · Submitted on 2019-12-17 06:55:49

Question: Can somebody help me solve the error I am getting when concatenating two columns, i.e. first name and last name, from my text file and merging them into one name column in my Azure SQL Database sink in Azure Data Factory? Another question: I want to take the first letter of the gender column - that is, M or F for male and female respectively - from the source text file and change it to a single letter M or F in my gender column in the Azure Data Factory pipeline. …
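In a Mapping Data Flow both requirements are typically a derived column (e.g. concat(FirstName, ' ', LastName) and left(Gender, 1)). The intended row transformation can be sketched in Python (the column names are taken from the question's description; the row shape is an assumption):

```python
def shape_row(row):
    """Merge first/last name and reduce gender to its first letter (M/F)."""
    return {
        "Name": f"{row['FirstName']} {row['LastName']}",
        "Gender": row["Gender"][:1].upper(),
    }

print(shape_row({"FirstName": "Jane", "LastName": "Doe", "Gender": "female"}))
# {'Name': 'Jane Doe', 'Gender': 'F'}
```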

Get Metadata Activity ADF V2

陌路散爱 · Submitted on 2019-12-13 04:45:30

Question: Can anyone explain the use of the Get Metadata activity that was newly introduced in ADF V2? The information given on docs.microsoft.com isn't enough to understand the uses of this activity.

Answer 1: The main purposes of the Get Metadata activity are:

- Validate the metadata information of any data
- Trigger a pipeline when data is ready/available

The following example shows how to incrementally load changed files from a folder using the Get Metadata activity, getting filenames and …
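A minimal Get Metadata activity fragment for the incremental-load case described in the answer (the dataset name is a placeholder; childItems and lastModified are among the documented fieldList options, though the supported fields vary by dataset type):

```json
{
    "name": "GetFolderMetadata",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": {
            "referenceName": "SourceFolder",
            "type": "DatasetReference"
        },
        "fieldList": [ "childItems", "lastModified" ]
    }
}
```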