Commit 2c20a0a

Fix broken links
1 parent fbf75ce commit 2c20a0a

3 files changed: +4, -4 lines changed

articles/data-factory/connector-azure-sql-data-warehouse.md

Lines changed: 1 addition & 1 deletion

@@ -397,7 +397,7 @@ Learn more about how to use PolyBase to efficiently load SQL Data Warehouse in t

 ## Use PolyBase to load data into Azure SQL Data Warehouse

-Using [PolyBase](https://docs.microsoft.com/sql/relational-databases/polybase/polybase-guide) is an efficient way to load a large amount of data into Azure SQL Data Warehouse with high throughput. You'll see a large gain in the throughput by using PolyBase instead of the default BULKINSERT mechanism. See [Performance reference](copy-activity-performance.md#performance-reference) for a detailed comparison. For a walkthrough with a use case, see [Load 1 TB into Azure SQL Data Warehouse](v1/data-factory-load-sql-data-warehouse.md).
+Using [PolyBase](https://docs.microsoft.com/sql/relational-databases/polybase/polybase-guide) is an efficient way to load a large amount of data into Azure SQL Data Warehouse with high throughput. You'll see a large gain in the throughput by using PolyBase instead of the default BULKINSERT mechanism. For a walkthrough with a use case, see [Load 1 TB into Azure SQL Data Warehouse](v1/data-factory-load-sql-data-warehouse.md).

 * If your source data is in **Azure Blob, Azure Data Lake Storage Gen1 or Azure Data Lake Storage Gen2**, and the **format is PolyBase compatible**, you can use copy activity to directly invoke PolyBase to let Azure SQL Data Warehouse pull the data from source. For details, see **[Direct copy by using PolyBase](#direct-copy-by-using-polybase)**.
 * If your source data store and format isn't originally supported by PolyBase, use the **[Staged copy by using PolyBase](#staged-copy-by-using-polybase)** feature instead. The staged copy feature also provides you better throughput. It automatically converts the data into PolyBase-compatible format. And it stores the data in Azure Blob storage. It then loads the data into SQL Data Warehouse.
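The PolyBase behavior this section describes is switched on in the copy activity's sink definition. A minimal sketch of that payload as a Python dict (property names follow the ADF copy activity JSON schema; the linked service reference is a hypothetical placeholder):

```python
# Sketch of a copy activity that loads SQL Data Warehouse via PolyBase
# instead of the default BULKINSERT. Property names follow the ADF copy
# activity JSON schema; resource names are hypothetical placeholders.
copy_activity = {
    "name": "LoadIntoSqlDw",
    "type": "Copy",
    "typeProperties": {
        "source": {"type": "BlobSource"},  # a PolyBase-compatible source
        "sink": {
            "type": "SqlDWSink",
            "allowPolyBase": True,         # invoke PolyBase for the load
            "polyBaseSettings": {"rejectType": "value", "rejectValue": 0},
        },
        # Staging is only needed when the source store or format isn't
        # PolyBase-compatible (the staged-copy path in the second bullet);
        # shown here for illustration.
        "enableStaging": True,
        "stagingSettings": {
            "linkedServiceName": {
                "referenceName": "MyStagingBlobStorage",  # hypothetical
                "type": "LinkedServiceReference",
            }
        },
    },
}
```

With a PolyBase-compatible blob or Data Lake source, the staging block can be omitted and ADF lets SQL Data Warehouse pull the data directly.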

articles/data-factory/copy-activity-overview.md

Lines changed: 1 addition & 1 deletion

@@ -141,7 +141,7 @@ The following template of a copy activity contains an exhaustive list of support

 ## Monitoring

-You can monitor the copy activity run on Azure Data Factory "Author & Monitor" UI or programmatically. You can then compare the performance and configuration of your scenario to Copy Activity's [performance reference](copy-activity-performance.md#performance-reference) from in-house testing.
+You can monitor the copy activity run on Azure Data Factory "Author & Monitor" UI or programmatically.

 ### Monitor visually

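For the "programmatically" half of that sentence, a minimal sketch using the azure-mgmt-datafactory Python SDK (the subscription, resource group, factory, and run IDs are placeholders; credential setup depends on your environment):

```python
# Query a pipeline run and its activity runs (including the copy activity)
# through the Azure Data Factory management SDK.
from datetime import datetime, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

adf_client = DataFactoryManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

# Overall pipeline run status.
run = adf_client.pipeline_runs.get("<resource-group>", "<factory-name>", "<run-id>")
print(run.status)

# Activity-level detail; the copy activity's output carries throughput
# and duration figures.
filter_params = RunFilterParameters(
    last_updated_after=datetime.utcnow() - timedelta(days=1),
    last_updated_before=datetime.utcnow(),
)
activity_runs = adf_client.activity_runs.query_by_pipeline_run(
    "<resource-group>", "<factory-name>", run.run_id, filter_params
)
for activity_run in activity_runs.value:
    print(activity_run.activity_name, activity_run.status, activity_run.output)
```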

articles/data-factory/copy-activity-performance.md

Lines changed: 2 additions & 2 deletions

@@ -38,11 +38,11 @@ After reading this article, you will be able to answer the following questions:

 ADF offers a serverless architecture that allows parallelism at different levels, which allows developers to build pipelines to fully utilize your network bandwidth as well as storage IOPS and bandwidth to maximize data movement throughput for your environment. This means the throughput you can achieve can be estimated by measuring the minimum throughput offered by the source data store, the destination data store, and network bandwidth in between the source and destination. The table below calculates the copy duration based on data size and the bandwidth limit for your environment.

-![copy duration estimation](media\copy-activity-performance\copy-duration-estimation.png)
+![copy duration estimation](media/copy-activity-performance/copy-duration-estimation.png)

 ADF copy is scalable at different levels:

-![how ADF copy scales](media\copy-activity-performance\adf-copy-scalability.png)
+![how ADF copy scales](media/copy-activity-performance/adf-copy-scalability.png)

 - A single copy activity can take advantage of scalable compute resources: when using Azure Integration Runtime, you can specify [up to 256 DIUs](#data-integration-units) for each copy activity in a serverless manner; when using self-hosted Integration Runtime, you can manually scale up the machine or scale out to multiple machines ([up to 4 nodes](create-self-hosted-integration-runtime.md#high-availability-and-scalability)), and a single copy activity will partition its file set across all nodes.
 - A single copy activity reads from and writes to the data store using multiple threads.
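The duration table referenced in this hunk (the image whose path separators the commit fixes) follows from simple arithmetic: estimated duration is data size divided by the minimum sustained throughput along the path. A sketch of that estimate, assuming bandwidth is the only bottleneck:

```python
# Back-of-the-envelope estimate implied by the paragraph above:
# duration = data size / min(source, network, destination) throughput.
def copy_duration_hours(data_size_gb: float, bandwidth_mbps: float) -> float:
    """Hours to move data_size_gb at a sustained bandwidth_mbps (megabits/s)."""
    bits = data_size_gb * 1e9 * 8           # decimal GB -> bits
    seconds = bits / (bandwidth_mbps * 1e6)
    return seconds / 3600

# e.g. 1 TB over a sustained 100 Mbps link takes roughly 22 hours, which is
# why raising effective bandwidth and parallelism dominates copy performance.
print(f"{copy_duration_hours(1000, 100):.1f} h")
```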
