How do I store run-time data in Azure Data Factory between pipeline executions?
I have been following Microsoft's tutorial to incrementally/delta load data from a SQL Server database. It uses a watermark (a timestamp) to keep track of rows that have changed since the last run. The tutorial stores the watermark in an Azure SQL database, using a "Stored Procedure" activity in the pipeline, so it can be reused in the next execution. It seems overkill to have an Azure SQL database just to store that tiny bit of meta information (my source database is read-only, by the way). I'd rather just store the watermark somewhere within Data Factory itself.
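For context, here is a minimal sketch of the watermark pattern the tutorial describes: read the last stored watermark, fetch only rows modified after it, then advance the watermark. The table and column names are illustrative, and `sqlite3` stands in for both the SQL Server source and the metadata store; in the actual pipeline the load would be a Copy activity and the update a Stored Procedure activity.

```python
import sqlite3

# In-memory database standing in for both the source table and the
# watermark store (in the real setup: SQL Server source + Azure SQL metadata).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER, modified_at TEXT);
    CREATE TABLE watermark (last_modified_at TEXT);
    INSERT INTO watermark VALUES ('2024-01-01T00:00:00');
    INSERT INTO orders VALUES
        (1, '2023-12-31T09:00:00'),  -- already loaded on a previous run
        (2, '2024-01-02T10:00:00'),  -- changed since the last watermark
        (3, '2024-01-03T11:00:00');
""")

def incremental_load(con):
    """Fetch rows changed since the stored watermark, then advance it."""
    (last,) = con.execute("SELECT last_modified_at FROM watermark").fetchone()
    rows = con.execute(
        "SELECT id, modified_at FROM orders WHERE modified_at > ?", (last,)
    ).fetchall()
    if rows:
        # Advance the watermark to the newest timestamp we just loaded,
        # so the next run only picks up later changes.
        new_mark = max(m for _, m in rows)
        con.execute("UPDATE watermark SET last_modified_at = ?", (new_mark,))
    return rows

changed = incremental_load(con)  # picks up rows 2 and 3
repeat = incremental_load(con)   # empty: the watermark has advanced
```

The question, then, is whether the `watermark` table really needs to live in a separate Azure SQL database, or whether Data Factory itself can persist that one value between runs.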