Working with Data Movement in Azure Data Factory

In this lab, I used the Azure portal to create a data factory, then used the Copy Data tool to build a pipeline that copies data from a SQL Server database to Azure Blob Storage. I completed this lab from Whizlabs as part of the DP-203 certification course.

This is my experience following along with the lab, with details pulled from my project notes on Notion.

First, the lab gave an introduction to Azure Data Factory:

What is Azure Data Factory?

In short, Azure Data Factory is Azure's cloud-based data integration service: it lets you build data-driven pipelines that move and transform data between data stores.


3. Next, I needed to create a container. From the overview page of the storage account, under Data storage, I selected Containers and created a new container with the settings the lab specified.
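As a quick aside, the same container can be created without the portal. Here's a minimal sketch using the azure-storage-blob Python package; the connection string and container name are placeholders I made up, not values from the lab:

```python
# Sketch: create a blob container with the Azure SDK for Python instead of
# the portal. The connection string comes from the storage account's
# "Access keys" blade; the container name here is a placeholder.
from azure.storage.blob import BlobServiceClient

conn_str = (
    "DefaultEndpointsProtocol=https;AccountName=<account>;"
    "AccountKey=<key>;EndpointSuffix=core.windows.net"
)
service = BlobServiceClient.from_connection_string(conn_str)

# create_container raises ResourceExistsError if the name is taken.
container = service.create_container("whizcontainer")
print("Created container:", container.container_name)
```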

4. Next, I needed to add a SQL database. Instead of going the “Create a resource” route, this time I searched for “SQL Server” in the search box at the top, then selected SQL Server, then + Create.

I created a SQL Database Server with the following requirements, filled in across the two sections of the creation blade:

Server details

Authentication

I left the rest of the options at their defaults and clicked Review + create, then Create to start the deployment.
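For comparison, here's a rough sketch of what that server deployment looks like scripted with the azure-mgmt-sql Python package. The resource group, server name, region, and credentials are all placeholders I invented, not the lab's values:

```python
# Sketch: deploy a SQL server with the azure-mgmt-sql package. All names
# and credentials below are placeholders, not the lab's required values.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

sql_client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# begin_* methods return a poller; .result() blocks until the deployment
# finishes, much like waiting on the portal's deployment page.
poller = sql_client.servers.begin_create_or_update(
    "whiz-rg",
    "whiz-sql-server",
    {
        "location": "eastus",
        "administrator_login": "whizadmin",
        "administrator_login_password": "<password>",
    },
)
server = poller.result()
print("Provisioned server:", server.name)
```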

5. The next step required installing SQL Server Management Studio (SSMS) locally on my PC. I followed the instructions to copy and paste the server name from the SQL database server I created in Azure, then selected SQL Server Authentication and entered the credentials I created for the database server in the earlier task.

From there, I created a new database, then opened a New Query window in SSMS. I copied and pasted the SQL query Whizlabs provided to create a new table named dbo.whiz, but I decided to have some fun with it and add several more rows than the two example entries they provided.
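I can't reproduce the exact Whizlabs script here, so below is an illustrative sketch of that step driven from Python with pyodbc. The table name dbo.whiz is from the lab, but the columns and the extra rows are my own inventions:

```python
# Sketch: create the dbo.whiz table and insert a few extra rows. Only the
# table name comes from the lab; columns and data are illustrative guesses.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server-name>.database.windows.net;"
    "DATABASE=<database-name>;"
    "UID=<admin-login>;PWD=<password>"
)
cursor = conn.cursor()

cursor.execute("""
    CREATE TABLE dbo.whiz (
        Id INT IDENTITY(1,1) PRIMARY KEY,
        FirstName NVARCHAR(50),
        LastName NVARCHAR(50)
    )
""")

# Several more rows than the two example entries the lab provides.
rows = [
    ("Ada", "Lovelace"),
    ("Grace", "Hopper"),
    ("Alan", "Turing"),
    ("Katherine", "Johnson"),
]
cursor.executemany(
    "INSERT INTO dbo.whiz (FirstName, LastName) VALUES (?, ?)", rows
)
conn.commit()
conn.close()
```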

6. With the SQL database created, I followed along with the Whizlabs instructions and created a new data factory resource, entering the information according to the resource requirements.

I then selected Git configuration and checked the box to configure Git later. I left the rest of the settings as default, selected Review + create, then Create once validation passed to start the deployment.
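The factory deployment can also be scripted. Here's a minimal sketch following the shape of the official azure-mgmt-datafactory quickstart, with placeholder resource names:

```python
# Sketch: create a data factory with the azure-mgmt-datafactory package.
# Resource group, factory name, and region are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

factory = adf.factories.create_or_update(
    "whiz-rg",
    "whiz-data-factory",
    Factory(location="eastus"),
)
print("Provisioning state:", factory.provisioning_state)
```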

7. With the Azure data factory created, I opened Azure Data Factory Studio from the overview page and selected Ingest to launch the Copy Data tool. Very exciting, as I'd seen this process in the coursework videos, but never live in a lab. I selected Built-in copy task under Task type, chose Run once now under Task cadence or task schedule, then selected Next. On the Source data store page, I selected + New connection, searched for SQL Server, selected it, and continued.

In the new connection dialog box, I named the SQL Server SqlServerLinkedService, then selected +New under Connect via integration runtime.

In the Integration runtime setup, I selected Self-Hosted, then Continue. The lab explained that a self-hosted integration runtime can run copy activities between a cloud data store and a data store on a private network.

On the next page, I named the runtime WhizIntegrationRuntime, then created it. I launched the express setup for this computer option to download the integration runtime to my local PC and installed it, which registered the runtime with Data Factory from my PC.

I then launched the Integration Runtime Configuration Manager on my PC and followed the Whizlabs instructions to go to Settings > Remote Access from intranet > Change, where I entered the required information.

This made the Configuration Manager restart the local service so the change could take effect.
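As a side note, the Data Factory half of this handshake can be created from code as well; the authentication key it returns is what gets pasted into the local Integration Runtime Configuration Manager during registration. A hedged sketch with placeholder names:

```python
# Sketch: register a self-hosted integration runtime resource in Data
# Factory and fetch its auth key. Names are placeholders; the key would be
# supplied to the runtime installed on the local machine.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeResource,
    SelfHostedIntegrationRuntime,
)

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

adf.integration_runtimes.create_or_update(
    "whiz-rg", "whiz-data-factory", "WhizIntegrationRuntime",
    IntegrationRuntimeResource(properties=SelfHostedIntegrationRuntime()),
)
keys = adf.integration_runtimes.list_auth_keys(
    "whiz-rg", "whiz-data-factory", "WhizIntegrationRuntime"
)
print("Auth key:", keys.auth_key1)
```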

I then went back to the Azure portal, where the new connection dialog for the SQL Server was still open. I selected WhizIntegrationRuntime under Connect via integration runtime, then entered the required information.

After the creation was successful, I was greeted with the Source data store page. In the Source tables section, I chose Existing Tables, then selected the dbo.whiz table I created earlier.

On the Apply filter page, I previewed the data to view the table I created earlier. Very cool to visually see the SQL code come to life!

I clicked Next, then selected +New connection on the Destination data store page. I searched for Azure Blob Storage and selected Continue.

I then entered the required information.

After a successful creation, I selected Browse under Folder path on the page displayed, then selected the container I created in the storage account earlier. I clicked Next, left the default settings on the File format settings page, then on the Settings page named the task CopyFromSqlToBlob and selected Next.

On the Summary page, I reviewed the values for the pipeline I just created to copy data!
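For the curious: under the hood, the Copy Data tool generates linked services, datasets, and a pipeline with a single copy activity. Here's a rough sketch of that pipeline expressed with the azure-mgmt-datafactory package; the dataset names are placeholders standing in for the ones the tool generated:

```python
# Sketch: approximately the pipeline the Copy Data tool built from my
# wizard answers. Dataset names are placeholders for the generated ones.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSink, CopyActivity, DatasetReference, PipelineResource, SqlSource,
)

rg, df = "whiz-rg", "whiz-data-factory"  # placeholders
adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

copy = CopyActivity(
    name="CopyFromSqlToBlob",
    inputs=[DatasetReference(reference_name="SourceSqlServerDataset")],
    outputs=[DatasetReference(reference_name="SinkBlobDataset")],
    source=SqlSource(),  # reads dbo.whiz through the self-hosted runtime
    sink=BlobSink(),     # writes the text file into the blob container
)
adf.pipelines.create_or_update(
    rg, df, "CopyFromSqlToBlob", PipelineResource(activities=[copy])
)
run = adf.pipelines.create_run(rg, df, "CopyFromSqlToBlob", parameters={})
print("Run id:", run.run_id)
```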

Once the deployment completed, I selected Monitor to watch the pipeline. I refreshed the view to see that the run succeeded.
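The monitoring step can be done from code too. A small sketch that polls the run status, assuming the run id returned by create_run in the sketch above:

```python
# Sketch: poll a pipeline run's status instead of refreshing the Monitor
# tab. The run id is assumed to come from the earlier create_run call.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

status = "InProgress"
while status in ("Queued", "InProgress"):
    time.sleep(15)
    status = adf.pipeline_runs.get(
        "whiz-rg", "whiz-data-factory", "<run-id>"
    ).status

print("Pipeline run finished with status:", status)  # e.g. Succeeded
```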

I then clicked the pipeline to see a visual representation of the run. Just to confirm, I went to the container I created earlier under the storage account, and sure enough, there's a dbo.whiz.txt file there. The table data was copied over successfully!
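That final check can be scripted as well. A minimal sketch with azure-storage-blob, reusing the same placeholder connection string and container name as earlier:

```python
# Sketch: list the container's blobs and read back the copied file. The
# connection string and container name are the earlier placeholders.
from azure.storage.blob import BlobServiceClient

conn_str = (
    "DefaultEndpointsProtocol=https;AccountName=<account>;"
    "AccountKey=<key>;EndpointSuffix=core.windows.net"
)
container = BlobServiceClient.from_connection_string(
    conn_str
).get_container_client("whizcontainer")

for blob in container.list_blobs():
    print(blob.name)  # expect dbo.whiz.txt after the pipeline run

data = container.download_blob("dbo.whiz.txt").readall()
print(data.decode("utf-8"))
```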
