
Commit ebac0e8

Update 2020-08-11-efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus.md
One correction in the “High-Performance” section: “Below is a benchmark of AIStore with WebDataset clients using 10 server nodes and 120 rotational drives each.” That should be “12 server nodes with 10 rotational drives each” (for a total of 120 rotational drives; the original wording implies 1,200).
1 parent fa4c276 commit ebac0e8
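The correction is easy to sanity-check with a bit of arithmetic; a minimal sketch (the node and drive counts are taken from the commit message):

```python
# Drive totals implied by each phrasing of the benchmark setup.
nodes_before, drives_per_node_before = 10, 120  # "10 server nodes and 120 rotational drives each"
nodes_after, drives_per_node_after = 12, 10     # "12 server nodes with 10 rotational drives each"

total_before = nodes_before * drives_per_node_before  # 1200 drives -- ten times the stated total
total_after = nodes_after * drives_per_node_after     # 120 drives -- matches the intended total

print(total_before)  # 1200
print(total_after)   # 120
```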

File tree: 1 file changed (+1, -1)


_posts/2020-08-11-efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus.md

```diff
@@ -55,7 +55,7 @@ We will be adding more examples giving benchmarks and showing how to use WebData
 ## High-Performance
 For high-performance computation on local clusters, the companion open-source [AIStore](https://github.com/NVIDIA/AIStore) server provides full disk to GPU I/O bandwidth, subject only to hardware constraints. [This Bigdata 2019 Paper](https://arxiv.org/abs/2001.01858) contains detailed benchmarks and performance measurements. In addition to benchmarks, research projects at NVIDIA and Microsoft have used WebDataset for petascale datasets and billions of training samples.
 
-Below is a benchmark of AIStore with WebDataset clients using 10 server nodes and 120 rotational drives each.
+Below is a benchmark of AIStore with WebDataset clients using 12 server nodes with 10 rotational drives each.
 
 <div class="text-center">
 <img src="{{ site.url }}/assets/images/pytorchwebdataset1.png" width="100%">
```
