Friday 15 December 2017

The advantage of the longRetry parameter in Azure Data Factory

We are using Azure Data Factory to load data from Azure storage blobs to an on-premises SQL Server.
During data loading, we faced SQL error 10054: "An existing connection was forcibly closed by the remote host".

On further analysis, the root cause was patching activity on our SQL Server environment. So this error is bound to recur once in a while, whenever the server goes down for maintenance.

The solution is to enable a retry mechanism. But an immediate retry alone will not fix the issue, because the server may still be down when the retry fires. So we have to use the longRetry option in the ADF activity policy, which retries again after a configurable delay.

Originally, the activity policy was set as:

"policy": {
          "concurrency": 1,
          "executionPriorityOrder": "OldestFirst",
          "style": "StartOfInterval",
          "retry": 1,
          "timeout": "23.23:23:23"
        },


We changed the activity policy to:

"policy": {
          "concurrency": 1,
          "executionPriorityOrder": "OldestFirst",
          "style": "StartOfInterval",
          "retry": 1,
          "longRetry": 3,
          "longRetryInterval": "00:20:00",
          "timeout": "23.23:23:23"
        },
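For context, below is a minimal sketch of where this policy sits inside an ADF (version 1) copy activity. The activity and dataset names here are hypothetical placeholders; only the "policy" section is taken from our actual pipeline:

"activities": [
    {
        "name": "CopyBlobToOnPremSqlServer",
        "type": "Copy",
        "inputs": [ { "name": "BlobInputDataset" } ],
        "outputs": [ { "name": "SqlServerOutputDataset" } ],
        "typeProperties": {
            "source": { "type": "BlobSource" },
            "sink": { "type": "SqlSink" }
        },
        "policy": {
            "concurrency": 1,
            "executionPriorityOrder": "OldestFirst",
            "style": "StartOfInterval",
            "retry": 1,
            "longRetry": 3,
            "longRetryInterval": "00:20:00",
            "timeout": "23.23:23:23"
        },
        "scheduler": {
            "frequency": "Day",
            "interval": 1
        }
    }
]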


The total number of times a slice will be attempted is: retry x longRetry.
So, in the above case, it will be 1 x 3 = 3 attempts, with each long retry separated by the longRetryInterval. (If retry were also set to 3, the slice would be attempted 3 x 3 = 9 times.)
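To make the schedule concrete, here is a sketch of the attempt sequence under the policy above, assuming every attempt fails (timings illustrative):

    attempt 1 fails
        wait longRetryInterval (20 minutes)
    attempt 2 fails
        wait longRetryInterval (20 minutes)
    attempt 3 fails
    slice is marked Failed

If any attempt succeeds, no further retries are made.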

We are keeping longRetryInterval at 20 minutes, hoping that the server patching will complete within that window and one of the long retries will succeed.

More details are available in the Azure Data Factory pipelines documentation: https://github.com/twright-msft/azure-content/blob/master/articles/data-factory/data-factory-create-pipelines.md
