From b371d264db0fac2b7258874aa37528c8d70d9355 Mon Sep 17 00:00:00 2001
From: Bruce Lin <49162601+brucejlin1@users.noreply.github.com>
Date: Mon, 20 Jul 2020 13:48:41 -0700
Subject: [PATCH 01/32] Create 1.6 blog post
---
_posts/2020-7-20-pytorch-1.6-released.md | 70 ++++++++++++++++++++++++
1 file changed, 70 insertions(+)
create mode 100644 _posts/2020-7-20-pytorch-1.6-released.md
diff --git a/_posts/2020-7-20-pytorch-1.6-released.md b/_posts/2020-7-20-pytorch-1.6-released.md
new file mode 100644
index 000000000000..ad6911ed444f
--- /dev/null
+++ b/_posts/2020-7-20-pytorch-1.6-released.md
@@ -0,0 +1,70 @@
+---
+layout: blog_detail
+title: 'PyTorch 1.6 released w/ Native AMP Support, Microsoft joins as maintainers for Windows'
+author: Team PyTorch
+---
+
+Today, we’re announcing the availability of PyTorch 1.6, along with updated domain libraries. We are also excited to announce the team at Microsoft is now maintaining Windows builds and binaries and will also be supporting the community on GitHub as well as the [PyTorch Windows discussion forums](https://discuss.pytorch.org/c/windows/).
+
+## TUTORIALS HOME PAGE UPDATE
+The tutorials home page now provides clear actions that developers can take. For new PyTorch users, there is an easy-to-discover button to take them directly to “A 60 Minute Blitz”. Right next to it, there is a button to view all recipes which are designed to teach specific features quickly with examples.
+
+
+
+
+
+In addition to the existing left navigation bar, tutorials can now be quickly filtered by multi-select tags. Let’s say you want to view all tutorials related to “Production” and “Quantization”. You can select the “Production” and “Quantization” filters as shown in the image below:
+
+
+
+
+
+The following additional resources can also be found at the bottom of the Tutorials homepage:
+* [PyTorch Cheat Sheet](https://pytorch.org/tutorials/beginner/ptcheat.html)
+* [PyTorch Examples](https://github.com/pytorch/examples)
+* [Tutorials on GitHub](https://github.com/pytorch/tutorials)
+
+## PYTORCH RECIPES
+Recipes are new bite-sized, actionable examples designed to teach researchers and developers how to use specific PyTorch features. Some notable new recipes include:
+* [Loading Data in PyTorch](https://pytorch.org/tutorials/recipes/recipes/loading_data_recipe.html)
+* [Model Interpretability Using Captum](https://pytorch.org/tutorials/recipes/recipes/Captum_Recipe.html)
+* [How to Use TensorBoard](https://pytorch.org/tutorials/recipes/recipes/tensorboard_with_pytorch.html)
+
+View the full list of recipes [here](http://pytorch.org/tutorials/recipes/recipes_index.html).
+
+## LEARNING PYTORCH
+This section includes tutorials designed for users new to PyTorch. Based on community feedback, we have made updates to the current [Deep Learning with PyTorch: A 60 Minute Blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) tutorial, one of our most popular tutorials for beginners. Upon completion, one can understand what PyTorch and neural networks are, and be able to build and train a simple image classification network. Updates include adding explanations to clarify output meanings and linking back to where users can read more in the docs, cleaning up confusing syntax errors, and reconstructing and explaining new concepts for easier readability.
+
+## DEPLOYING MODELS IN PRODUCTION
+This section includes tutorials for developers looking to take their PyTorch models to production. The tutorials include:
+* [Deploying PyTorch in Python via a REST API with Flask](https://pytorch.org/tutorials/intermediate/flask_rest_api_tutorial.html)
+* [Introduction to TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html)
+* [Loading a TorchScript Model in C++](https://pytorch.org/tutorials/advanced/cpp_export.html)
+* [Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime](https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html)
+
+## FRONTEND APIS
+PyTorch provides a number of frontend API features that can help developers to code, debug, and validate their models more efficiently. This section includes tutorials that teach what these features are and how to use them. Some tutorials to highlight:
+* [Introduction to Named Tensors in PyTorch](https://pytorch.org/tutorials/intermediate/named_tensor_tutorial.html)
+* [Using the PyTorch C++ Frontend](https://pytorch.org/tutorials/advanced/cpp_frontend.html)
+* [Extending TorchScript with Custom C++ Operators](https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html)
+* [Extending TorchScript with Custom C++ Classes](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html)
+* [Autograd in C++ Frontend](https://pytorch.org/tutorials/advanced/cpp_autograd.html)
+
+## MODEL OPTIMIZATION
+Deep learning models often consume large amounts of memory, power, and compute due to their complexity. This section provides tutorials for model optimization:
+* [Pruning](https://pytorch.org/tutorials/intermediate/pruning_tutorial.html)
+* [Dynamic Quantization on BERT](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html)
+* [Static Quantization with Eager Mode in PyTorch](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html)
+
+## PARALLEL AND DISTRIBUTED TRAINING
+PyTorch provides features that can accelerate performance in research and production, such as native support for asynchronous execution of collective operations and peer-to-peer communication that is accessible from both Python and C++. This section includes tutorials on parallel and distributed training:
+* [Single-Machine Model Parallel Best Practices](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html)
+* [Getting started with Distributed Data Parallel](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html)
+* [Getting started with Distributed RPC Framework](https://pytorch.org/tutorials/intermediate/rpc_tutorial.html)
+* [Implementing a Parameter Server Using Distributed RPC Framework](https://pytorch.org/tutorials/intermediate/rpc_param_server_tutorial.html)
+
+Making these improvements is just the first step in improving PyTorch.org for the community. Please submit your suggestions [here](https://github.com/pytorch/tutorials/pulls).
+
+Cheers,
+
+Team PyTorch
From 1094936476983c451a5e652d7e3d12245c7c518f Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Fri, 24 Jul 2020 13:43:06 -0700
Subject: [PATCH 02/32] Update and rename 2020-7-20-pytorch-1.6-released.md to
2020-7-20-Accelerating Training on
Ngpus-with-pytorch-automatic-mixed-precision.md
---
...-with-pytorch-automatic-mixed-precision.md | 10 +++
_posts/2020-7-20-pytorch-1.6-released.md | 70 -------------------
2 files changed, 10 insertions(+), 70 deletions(-)
create mode 100644 _posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
delete mode 100644 _posts/2020-7-20-pytorch-1.6-released.md
diff --git a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
new file mode 100644
index 000000000000..ff8fae8d82fa
--- /dev/null
+++ b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
@@ -0,0 +1,10 @@
+---
+layout: blog_detail
+title: 'Accelerating Training on NVIDIA GPUs with PyTorch Automatic Mixed Precision'
+author: Michael Carilli, Mengdi Huang, Chetan Tekur
+---
+
+Most deep learning frameworks, including PyTorch, train with 32-bit floating point (FP32) arithmetic by default. However, this is not essential to achieve full accuracy for many deep learning models. In 2017, NVIDIA researchers developed a methodology for [mixed-precision training](https://developer.nvidia.com/blog/mixed-precision-training-deep-neural-networks/), which combined [single-precision](https://blogs.nvidia.com/blog/2019/11/15/whats-the-difference-between-single-double-multi-and-mixed-precision-computing/) (FP32) with half-precision (e.g. FP16) formats when training a network, and achieved the same accuracy as FP32 training using the same hyperparameters, with additional performance benefits on NVIDIA GPUs:
+
+* Shorter training time;
+* Lower memory requirements, enabling larger batch sizes, larger models, or larger inputs.
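+
+As a quick illustration of the memory point, a half-precision tensor occupies half the bytes of its single-precision counterpart. The minimal sketch below (a toy single-tensor example, not a full training workload) makes this concrete:
+
+```python
+import torch
+
+t32 = torch.randn(1024, 1024)               # FP32: 4 bytes per element
+t16 = t32.half()                            # FP16: 2 bytes per element
+print(t32.element_size() * t32.nelement())  # 4194304 bytes
+print(t16.element_size() * t16.nelement())  # 2097152 bytes
+```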
diff --git a/_posts/2020-7-20-pytorch-1.6-released.md b/_posts/2020-7-20-pytorch-1.6-released.md
deleted file mode 100644
index ad6911ed444f..000000000000
--- a/_posts/2020-7-20-pytorch-1.6-released.md
+++ /dev/null
@@ -1,70 +0,0 @@
----
-layout: blog_detail
-title: 'PyTorch 1.6 released w/ Native AMP Support, Microsoft joins as maintainers for Windows'
-author: Team PyTorch
----
-
-Today, we’re announcing the availability of PyTorch 1.6, along with updated domain libraries. We are also excited to announce the team at Microsoft is now maintaining Windows builds and binaries and will also be supporting the community on GitHub as well as the [PyTorch Windows discussion forums](https://discuss.pytorch.org/c/windows/).
-
-## TUTORIALS HOME PAGE UPDATE
-The tutorials home page now provides clear actions that developers can take. For new PyTorch users, there is an easy-to-discover button to take them directly to “A 60 Minute Blitz”. Right next to it, there is a button to view all recipes which are designed to teach specific features quickly with examples.
-
-
-
-
-
-In addition to the existing left navigation bar, tutorials can now be quickly filtered by multi-select tags. Let’s say you want to view all tutorials related to “Production” and “Quantization”. You can select the “Production” and “Quantization” filters as shown in the image below:
-
-
-
-
-
-The following additional resources can also be found at the bottom of the Tutorials homepage:
-* [PyTorch Cheat Sheet](https://pytorch.org/tutorials/beginner/ptcheat.html)
-* [PyTorch Examples](https://github.com/pytorch/examples)
-* [Tutorials on GitHub](https://github.com/pytorch/tutorials)
-
-## PYTORCH RECIPES
-Recipes are new bite-sized, actionable examples designed to teach researchers and developers how to use specific PyTorch features. Some notable new recipes include:
-* [Loading Data in PyTorch](https://pytorch.org/tutorials/recipes/recipes/loading_data_recipe.html)
-* [Model Interpretability Using Captum](https://pytorch.org/tutorials/recipes/recipes/Captum_Recipe.html)
-* [How to Use TensorBoard](https://pytorch.org/tutorials/recipes/recipes/tensorboard_with_pytorch.html)
-
-View the full list of recipes [here](http://pytorch.org/tutorials/recipes/recipes_index.html).
-
-## LEARNING PYTORCH
-This section includes tutorials designed for users new to PyTorch. Based on community feedback, we have made updates to the current [Deep Learning with PyTorch: A 60 Minute Blitz](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) tutorial, one of our most popular tutorials for beginners. Upon completion, one can understand what PyTorch and neural networks are, and be able to build and train a simple image classification network. Updates include adding explanations to clarify output meanings and linking back to where users can read more in the docs, cleaning up confusing syntax errors, and reconstructing and explaining new concepts for easier readability.
-
-## DEPLOYING MODELS IN PRODUCTION
-This section includes tutorials for developers looking to take their PyTorch models to production. The tutorials include:
-* [Deploying PyTorch in Python via a REST API with Flask](https://pytorch.org/tutorials/intermediate/flask_rest_api_tutorial.html)
-* [Introduction to TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html)
-* [Loading a TorchScript Model in C++](https://pytorch.org/tutorials/advanced/cpp_export.html)
-* [Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime](https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html)
-
-## FRONTEND APIS
-PyTorch provides a number of frontend API features that can help developers to code, debug, and validate their models more efficiently. This section includes tutorials that teach what these features are and how to use them. Some tutorials to highlight:
-* [Introduction to Named Tensors in PyTorch](https://pytorch.org/tutorials/intermediate/named_tensor_tutorial.html)
-* [Using the PyTorch C++ Frontend](https://pytorch.org/tutorials/advanced/cpp_frontend.html)
-* [Extending TorchScript with Custom C++ Operators](https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html)
-* [Extending TorchScript with Custom C++ Classes](https://pytorch.org/tutorials/advanced/torch_script_custom_classes.html)
-* [Autograd in C++ Frontend](https://pytorch.org/tutorials/advanced/cpp_autograd.html)
-
-## MODEL OPTIMIZATION
-Deep learning models often consume large amounts of memory, power, and compute due to their complexity. This section provides tutorials for model optimization:
-* [Pruning](https://pytorch.org/tutorials/intermediate/pruning_tutorial.html)
-* [Dynamic Quantization on BERT](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html)
-* [Static Quantization with Eager Mode in PyTorch](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html)
-
-## PARALLEL AND DISTRIBUTED TRAINING
-PyTorch provides features that can accelerate performance in research and production, such as native support for asynchronous execution of collective operations and peer-to-peer communication that is accessible from both Python and C++. This section includes tutorials on parallel and distributed training:
-* [Single-Machine Model Parallel Best Practices](https://pytorch.org/tutorials/intermediate/model_parallel_tutorial.html)
-* [Getting started with Distributed Data Parallel](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html)
-* [Getting started with Distributed RPC Framework](https://pytorch.org/tutorials/intermediate/rpc_tutorial.html)
-* [Implementing a Parameter Server Using Distributed RPC Framework](https://pytorch.org/tutorials/intermediate/rpc_param_server_tutorial.html)
-
-Making these improvements is just the first step in improving PyTorch.org for the community. Please submit your suggestions [here](https://github.com/pytorch/tutorials/pulls).
-
-Cheers,
-
-Team PyTorch
From d511a067b06d10d97b37628703fd02894e757593 Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Fri, 24 Jul 2020 13:54:20 -0700
Subject: [PATCH 03/32] Update 2020-7-20-Accelerating Training on
Ngpus-with-pytorch-automatic-mixed-precision.md
---
... Ngpus-with-pytorch-automatic-mixed-precision.md | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
index ff8fae8d82fa..027fdd7c978c 100644
--- a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
+++ b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
@@ -8,3 +8,16 @@ Most deep learning frameworks, including PyTorch, train with 32-bit floating poi
* Shorter training time;
* Lower memory requirements, enabling larger batch sizes, larger models, or larger inputs.
+
+In order to streamline the user experience of training in mixed precision for researchers and practitioners, NVIDIA developed [Apex](https://developer.nvidia.com/blog/apex-pytorch-easy-mixed-precision-training/) in 2018, which is a lightweight PyTorch extension with an [Automatic Mixed Precision](https://developer.nvidia.com/automatic-mixed-precision) (AMP) feature. This feature enables automatic conversion of certain GPU operations from FP32 precision to mixed precision, thus improving performance while maintaining accuracy.
+
+For the PyTorch 1.6 release, developers at NVIDIA and Facebook moved mixed precision functionality into PyTorch core as the AMP package, torch.cuda.amp . torch.cuda.amp is more flexible and intuitive compared to apex.amp. Some of apex.amp's known pain points that torch.cuda.amp has been able to fix:
+
+* Guaranteed PyTorch version compatibility, because it's part of PyTorch
+* No need to build extensions
+* Windows support
+* Bitwise accurate saving/restoring of checkpoints
+* DataParallel and intra-process model parallelism (although we still recommend torch.nn.DistributedDataParallel with one GPU per process as the most performant approach)
+* Gradient penalty (double backward)
+* torch.cuda.amp.autocast() has no effect outside regions where it's enabled, so it should serve cases that formerly struggled with multiple calls to
+
From 09a8092cefdc28ac844d1b33717f03dd530fe56c Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Fri, 24 Jul 2020 14:10:09 -0700
Subject: [PATCH 04/32] Update 2020-7-20-Accelerating Training on
Ngpus-with-pytorch-automatic-mixed-precision.md
---
...n Ngpus-with-pytorch-automatic-mixed-precision.md | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
index 027fdd7c978c..87613f3cab13 100644
--- a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
+++ b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
@@ -11,13 +11,15 @@ Most deep learning frameworks, including PyTorch, train with 32-bit floating poi
In order to streamline the user experience of training in mixed precision for researchers and practitioners, NVIDIA developed [Apex](https://developer.nvidia.com/blog/apex-pytorch-easy-mixed-precision-training/) in 2018, which is a lightweight PyTorch extension with an [Automatic Mixed Precision](https://developer.nvidia.com/automatic-mixed-precision) (AMP) feature. This feature enables automatic conversion of certain GPU operations from FP32 precision to mixed precision, thus improving performance while maintaining accuracy.
-For the PyTorch 1.6 release, developers at NVIDIA and Facebook moved mixed precision functionality into PyTorch core as the AMP package, torch.cuda.amp . torch.cuda.amp is more flexible and intuitive compared to apex.amp. Some of apex.amp's known pain points that torch.cuda.amp has been able to fix:
+For the PyTorch 1.6 release, developers at NVIDIA and Facebook moved mixed precision functionality into PyTorch core as the AMP package, [`torch.cuda.amp`](https://pytorch.org/docs/stable/amp.html). `torch.cuda.amp` is more flexible and intuitive compared to `apex.amp`. Some of `apex.amp`'s known pain points that `torch.cuda.amp` has been able to fix:
* Guaranteed PyTorch version compatibility, because it's part of PyTorch
* No need to build extensions
* Windows support
-* Bitwise accurate saving/restoring of checkpoints
-* DataParallel and intra-process model parallelism (although we still recommend torch.nn.DistributedDataParallel with one GPU per process as the most performant approach)
-* Gradient penalty (double backward)
-* torch.cuda.amp.autocast() has no effect outside regions where it's enabled, so it should serve cases that formerly struggled with multiple calls to
+* Bitwise accurate [saving/restoring](https://pytorch.org/docs/master/amp.html#torch.cuda.amp.GradScaler.load_state_dict) of checkpoints
+* [DataParallel](https://pytorch.org/docs/master/notes/amp_examples.html#dataparallel-in-a-single-process) and intra-process model parallelism (although we still recommend [torch.nn.DistributedDataParallel](https://pytorch.org/docs/master/notes/amp_examples.html#distributeddataparallel-one-gpu-per-process) with one GPU per process as the most performant approach)
+* [Gradient penalty](https://pytorch.org/docs/master/notes/amp_examples.html#gradient-penalty) (double backward)
+* `torch.cuda.amp.autocast()` has no effect outside regions where it's enabled, so it should serve cases that formerly struggled with multiple calls to [apex.amp.initialize()](https://github.com/NVIDIA/apex/issues/439) (including [cross-validation](https://github.com/NVIDIA/apex/issues/392#issuecomment-610038073)) without difficulty. Multiple convergence runs in the same script should each use a fresh [GradScaler instance](https://github.com/NVIDIA/apex/issues/439#issuecomment-610028282), but GradScalers are lightweight and self-contained so that's not a problem.
+* Sparse gradient support
+
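+To make the last two bullets concrete, below is a minimal sketch (the model, optimizer, and `data_iter` are placeholder assumptions, not part of the original post) showing that `autocast` affects only the region it encloses and that each convergence run constructs its own `GradScaler`:
+
+```python
+import torch
+
+# Placeholder model and optimizer, for illustration only.
+model = torch.nn.Linear(16, 2).cuda()
+optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
+
+def one_convergence_run(data_iter):
+    # Each run gets its own GradScaler; they are lightweight objects.
+    scaler = torch.cuda.amp.GradScaler()
+    for data, label in data_iter:
+        optimizer.zero_grad()
+        with torch.cuda.amp.autocast():
+            # Ops inside this region may run in FP16 where safe...
+            loss = torch.nn.functional.cross_entropy(model(data), label)
+        # ...while ops out here run in the default FP32 precision again.
+        scaler.scale(loss).backward()
+        scaler.step(optimizer)
+        scaler.update()
+```
+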
From d7689191037c5305f65a142d9a1631c6539b5d65 Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Fri, 24 Jul 2020 15:03:32 -0700
Subject: [PATCH 05/32] Update 2020-7-20-Accelerating Training on
Ngpus-with-pytorch-automatic-mixed-precision.md
---
...-with-pytorch-automatic-mixed-precision.md | 93 +++++++++++++++++++
1 file changed, 93 insertions(+)
diff --git a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
index 87613f3cab13..914c3582eab0 100644
--- a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
+++ b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
@@ -21,5 +21,98 @@ For the PyTorch 1.6 release, developers at NVIDIA and Facebook moved mixed preci
* [Gradient penalty](https://pytorch.org/docs/master/notes/amp_examples.html#gradient-penalty) (double backward)
* `torch.cuda.amp.autocast()` has no effect outside regions where it's enabled, so it should serve cases that formerly struggled with multiple calls to [apex.amp.initialize()](https://github.com/NVIDIA/apex/issues/439) (including [cross-validation](https://github.com/NVIDIA/apex/issues/392#issuecomment-610038073)) without difficulty. Multiple convergence runs in the same script should each use a fresh [GradScaler instance](https://github.com/NVIDIA/apex/issues/439#issuecomment-610028282), but GradScalers are lightweight and self-contained so that's not a problem.
* Sparse gradient support
+
+With AMP being added to PyTorch core, we have started the process of deprecating **apex.amp**. We have moved **apex.amp** to maintenance mode and will support customers using **apex.amp**. However, we highly encourage **apex.amp** customers to transition to using **torch.cuda.amp** from PyTorch Core.
+
+## Example Walkthrough
+Please see official docs for usage:
+https://pytorch.org/docs/stable/amp.html
+https://pytorch.org/docs/stable/notes/amp_examples.html
+
+Example:
+
+```python
+import torch
+
+# Creates the GradScaler once at the beginning of training
+scaler = torch.cuda.amp.GradScaler()
+
+for data, label in data_iter:
+    optimizer.zero_grad()
+
+    # Casts operations to mixed precision
+    with torch.cuda.amp.autocast():
+        loss = model(data)
+
+    # Scales the loss, and calls backward()
+    # to create scaled gradients
+    scaler.scale(loss).backward()
+
+    # Unscales gradients and calls
+    # or skips optimizer.step()
+    scaler.step(optimizer)
+
+    # Updates the scale for next iteration
+    scaler.update()
+```
+
+## Performance Benchmarks
+In this section, we discuss the accuracy and performance of mixed precision training with AMP on the latest NVIDIA A100 GPU and the previous-generation V100 GPU. The mixed precision performance is compared to FP32 performance when running Deep Learning workloads in the [NVIDIA pytorch:20.06-py3 container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) from NGC.
+
+## Accuracy: AMP (FP16), FP32
+The advantage of using AMP for Deep Learning training is that the models converge to a similar final accuracy while providing improved training performance. To illustrate this point, for [ResNet 50 v1.5 training](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v1.5#training-accuracy-nvidia-dgx-a100-8x-a100-40gb), we see the following accuracy results, where higher is better. Please note that the accuracy numbers below are sample numbers that are subject to run-to-run variance of up to 0.4%. Accuracy numbers for other models, including BERT, Transformer, ResNeXt-101, Mask R-CNN, and DLRM, can be found at the [NVIDIA Deep Learning Examples GitHub](https://github.com/NVIDIA/DeepLearningExamples).
+
+Training accuracy: NVIDIA DGX A100 (8x A100 40GB)
+
+
+
+## Speedup Performance
+
+### FP16 on NVIDIA V100 vs. FP32 on V100
+AMP with FP16 is the most performant option for DL training on the V100. In Figure 2, we can observe that for various models, AMP on V100 provides a speedup of 1.5x to 5.5x over FP32 on V100 while converging to the same final accuracy.
+
+
+
+
+
+
+
From 7e8a530d61963010761dc7154a9959d3941408f3 Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Fri, 24 Jul 2020 15:11:44 -0700
Subject: [PATCH 06/32] Update 2020-7-20-Accelerating Training on
Ngpus-with-pytorch-automatic-mixed-precision.md
---
...raining on Ngpus-with-pytorch-automatic-mixed-precision.md | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
index 914c3582eab0..cb4267fcb334 100644
--- a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
+++ b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
@@ -109,7 +109,9 @@ Training accuracy: NVIDIA DGX-1 (8x V100 16GB)
### FP16 on NVIDIA V100 vs. FP32 on V100
AMP with FP16 is the most performant option for DL training on the V100. In Figure 2, we can observe that for various models, AMP on V100 provides a speedup of 1.5x to 5.5x over FP32 on V100 while converging to the same final accuracy.
-
+
+
+
From a101675a723fd0216d4924cf04746203e370016a Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Fri, 24 Jul 2020 15:48:04 -0700
Subject: [PATCH 07/32] Update 2020-7-20-Accelerating Training on
Ngpus-with-pytorch-automatic-mixed-precision.md
---
...gpus-with-pytorch-automatic-mixed-precision.md | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
index cb4267fcb334..bc5873a938fd 100644
--- a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
+++ b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
@@ -110,10 +110,23 @@ Training accuracy: NVIDIA DGX-1 (8x V100 16GB)
AMP with FP16 is the most performant option for DL training on the V100. In Figure 2, we can observe that for various models, AMP on V100 provides a speedup of 1.5x to 5.5x over FP32 on V100 while converging to the same final accuracy.
-
+
+Figure 2. Performance of mixed precision training on NVIDIA 8xV100 vs. FP32 training on 8xV100 GPU. Bars represent the speedup factor of V100 AMP over V100 FP32. The higher the better. [ALT: Performance of mixed precision training on NVIDIA 8xV100 vs FP32 training on 8xV100 GPU]
+FP16 on NVIDIA A100 vs. FP16 on V100
+AMP with FP16 remains the most performant option for DL training on the A100. In Figure 3, we can observe that for various models, AMP on A100 provides a speedup of 1.3x to 2.5x over AMP on V100 while converging to the same final accuracy.
+
+## Call to action
+AMP provides a healthy speedup for Deep Learning training workloads on NVIDIA Tensor Core GPUs, especially on the latest Ampere generation A100 GPUs. You can start experimenting with AMP-enabled models and model scripts for A100, V100, T4 and other GPUs available at NVIDIA deep learning [examples](https://github.com/NVIDIA/DeepLearningExamples). NVIDIA PyTorch with native AMP support is available from the [PyTorch NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) version 20.06. We highly encourage existing **apex.amp** customers to transition to using **torch.cuda.amp** from PyTorch Core, available in the latest PyTorch 1.6 release.
+
+
+
+
+
+
+Figure 3. Performance of mixed precision training on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100. The higher the better. [ALT: Performance of mixed precision training on NVIDIA 8xA100 vs 8xV100 GPU]
From c8d265624eef4938b065aa9168b3d6a34547132d Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Fri, 24 Jul 2020 15:53:09 -0700
Subject: [PATCH 08/32] Update 2020-7-20-Accelerating Training on
Ngpus-with-pytorch-automatic-mixed-precision.md
---
...n Ngpus-with-pytorch-automatic-mixed-precision.md | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
index bc5873a938fd..d1fa8a313066 100644
--- a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
+++ b/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
@@ -114,20 +114,18 @@ AMP with FP16 is the most performant option for DL training on the V100. In Tabl
Figure 2. Performance of mixed precision training on NVIDIA 8xV100 vs. FP32 training on 8xV100 GPU. Bars represent the speedup factor of V100 AMP over V100 FP32. The higher the better. [ALT: Performance of mixed precision training on NVIDIA 8xV100 vs FP32 training on 8xV100 GPU]
-FP16 on NVIDIA A100 vs. FP16 on V100
+## FP16 on NVIDIA A100 vs. FP16 on V100
AMP with FP16 remains the most performant option for DL training on the A100. In Figure 3, we can observe that for various models, AMP on A100 provides a speedup of 1.3x to 2.5x over AMP on V100 while converging to the same final accuracy.
-## Call to action
-AMP provides a healthy speedup for Deep Learning training workloads on NVIDIA Tensor Core GPUs, especially on the latest Ampere generation A100 GPUs. You can start experimenting with AMP-enabled models and model scripts for A100, V100, T4 and other GPUs available at NVIDIA deep learning [examples](https://github.com/NVIDIA/DeepLearningExamples). NVIDIA PyTorch with native AMP support is available from the [PyTorch NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) version 20.06. We highly encourage existing **apex.amp** customers to transition to using **torch.cuda.amp** from PyTorch Core, available in the latest PyTorch 1.6 release.
-
-
-
-
+
Figure 3. Performance of mixed precision training on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100. The higher the better. [ALT: Performance of mixed precision training on NVIDIA 8xA100 vs 8xV100 GPU]
+## Call to action
+AMP provides a healthy speedup for Deep Learning training workloads on NVIDIA Tensor Core GPUs, especially on the latest Ampere generation A100 GPUs. You can start experimenting with AMP-enabled models and model scripts for A100, V100, T4 and other GPUs available at NVIDIA deep learning [examples](https://github.com/NVIDIA/DeepLearningExamples). NVIDIA PyTorch with native AMP support is available from the [PyTorch NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) version 20.06. We highly encourage existing **apex.amp** customers to transition to using **torch.cuda.amp** from PyTorch Core, available in the latest PyTorch 1.6 release.
+
From 0fe31d901dce5af27e2faf8a282f73c91b8d10c9 Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Sun, 26 Jul 2020 13:24:31 -0700
Subject: [PATCH 09/32] Create Microsoft becomes
maintamicrosoft-becomes-maintainer-of-the-windows-version-of-pyTorch
---
...intainer-of-the-windows-version-of-pyTorch | 41 +++++++++++++++++++
1 file changed, 41 insertions(+)
create mode 100644 _posts/Microsoft becomes maintamicrosoft-becomes-maintainer-of-the-windows-version-of-pyTorch
diff --git a/_posts/Microsoft becomes maintamicrosoft-becomes-maintainer-of-the-windows-version-of-pyTorch b/_posts/Microsoft becomes maintamicrosoft-becomes-maintainer-of-the-windows-version-of-pyTorch
new file mode 100644
index 000000000000..751158b3898f
--- /dev/null
+++ b/_posts/Microsoft becomes maintamicrosoft-becomes-maintainer-of-the-windows-version-of-pyTorch
@@ -0,0 +1,41 @@
+---
+layout: blog_detail
+title: 'Microsoft becomes maintainer of the Windows version of PyTorch'
+author: Maxim Lukiyanov - Principal PM at Microsoft, Emad Barsoum - Group EM at Microsoft, Guoliang Hua - Principal EM at Microsoft, Nikita Shulga - Tech Lead at Facebook, Geeta Chauhan - PE Lead at Facebook, Chris Gottbrath - Technical PM at Facebook, [Jiachen Pu](https://github.com/peterjc123) - Engineer at Facebook
+
+---
+
+Along with the PyTorch 1.6 release, we are excited to announce that Microsoft has expanded its participation in the PyTorch community and is taking ownership of the development and maintenance of the PyTorch build for Windows.
+According to the latest [Stack Overflow developer survey](https://insights.stackoverflow.com/survey/2020#technology-developers-primary-operating-systems), Windows remains the primary operating system for the developer community (46% Windows vs 28% MacOS). [Jiachen Pu](https://github.com/peterjc123) initially made a heroic effort to add support for PyTorch on Windows, but due to limited resources, Windows support for PyTorch has lagged behind other platforms. Lack of test coverage resulted in unexpected issues popping up every now and then. Some of the core tutorials, meant for new users to learn and adopt PyTorch, would fail to run. The installation experience was also not as smooth, with the lack of official PyPI support for PyTorch on Windows. Lastly, some of the PyTorch functionality was simply not available on the Windows platform, such as the TorchAudio domain library and distributed training support. To help alleviate this pain, Microsoft is happy to bring its Windows expertise to the table and bring PyTorch on Windows to its best possible self.
+
+
+In the PyTorch 1.6 release, we have improved the core quality of the Windows build by bringing test coverage up to par with Linux for core PyTorch and its domain libraries and by automating tutorial testing. Thanks to the broader PyTorch community, which contributed TorchAudio support to Windows, we were able to add test coverage to all three domain libraries: TorchVision, TorchText and TorchAudio. In subsequent releases of PyTorch, we will continue improving the Windows experience based on community feedback and requests. So far, the feedback we received from the community points to distributed training support and a better installation experience using pip as the next areas of improvement.
+In addition to the native Windows experience, Microsoft released a preview adding [GPU compute support to Windows Subsystem for Linux (WSL) 2](https://blogs.windows.com/windowsdeveloper/2020/06/17/gpu-accelerated-ml-training-inside-the-windows-subsystem-for-linux/) distros, with a focus on enabling AI and ML developer workflows. WSL is designed for developers who want to run Linux-based tools directly on Windows. This preview enables valuable scenarios for a variety of frameworks and Python packages that utilize [NVIDIA CUDA](https://developer.nvidia.com/cuda/wsl) for acceleration and only support Linux. This means WSL customers using the preview can run native Linux-based PyTorch applications on Windows unmodified, without the need for a traditional virtual machine or a dual boot setup.
+
+## Getting started with PyTorch on Windows
+It's easy to get started with PyTorch on Windows. To install PyTorch using Anaconda with the latest GPU support, run the command below. To install different supported configurations of PyTorch, refer to the installation instructions on [pytorch.org](https://pytorch.org).
+
+`conda install pytorch torchvision cudatoolkit=10.2 -c pytorch`
+
+Once you install PyTorch, learn more by visiting the [PyTorch Tutorials](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) and [documentation](https://pytorch.org/docs/stable/index.html).
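+
+A quick way to confirm that the GPU build works is a short check like the one below (a minimal sketch; it assumes a CUDA-capable GPU with an up-to-date driver):
+
+```python
+import torch
+
+print(torch.__version__)          # e.g. 1.6.0
+print(torch.cuda.is_available())  # True if the CUDA build and driver are working
+if torch.cuda.is_available():
+    x = torch.rand(3, 3, device="cuda")
+    print(x @ x)                  # runs a small matmul on the GPU
+```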
+
+
+
+
+
+
+## Getting started with PyTorch on Windows Subsystem for Linux
+The [preview of NVIDIA CUDA support in WSL](https://docs.microsoft.com/en-us/windows/win32/direct3d12/gpu-cuda-in-wsl) is now available to Windows Insiders running Build 20150 or higher. In WSL, the command to install PyTorch using Anaconda is the same as the above command for native Windows. If you prefer pip, use the command below.
+
+`pip install torch torchvision`
+
+You can use the same tutorials and documentation inside your WSL environment as on native Windows. This functionality is still in preview, so if you run into issues with WSL, please share feedback via the [WSL GitHub repo](https://github.com/microsoft/WSL); for issues with NVIDIA CUDA support, share feedback via NVIDIA’s [Community Forum for CUDA on WSL](https://forums.developer.nvidia.com/c/accelerated-computing/cuda/cuda-on-windows-subsystem-for-linux/303).
+
+## Feedback
+If you find gaps in the PyTorch experience on Windows, please let us know on the [PyTorch discussion forum](https://discuss.pytorch.org/c/windows/26) or file an issue on [GitHub](https://github.com/pytorch/pytorch) using the #module: windows label.
+
+
+
+
+
+
From 25354da9becb2bb4781c52f7e1d83c415dbfda88 Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Sun, 26 Jul 2020 13:39:42 -0700
Subject: [PATCH 10/32] Rename Microsoft becomes
maintamicrosoft-becomes-maintainer-of-the-windows-version-of-pyTorch to
micmicrosoft-becomes-maintainer-of-the-windows-version-of-pytorch
---
...icrosoft-becomes-maintainer-of-the-windows-version-of-pytorch} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename _posts/{Microsoft becomes maintamicrosoft-becomes-maintainer-of-the-windows-version-of-pyTorch => micmicrosoft-becomes-maintainer-of-the-windows-version-of-pytorch} (100%)
diff --git a/_posts/Microsoft becomes maintamicrosoft-becomes-maintainer-of-the-windows-version-of-pyTorch b/_posts/micmicrosoft-becomes-maintainer-of-the-windows-version-of-pytorch
similarity index 100%
rename from _posts/Microsoft becomes maintamicrosoft-becomes-maintainer-of-the-windows-version-of-pyTorch
rename to _posts/micmicrosoft-becomes-maintainer-of-the-windows-version-of-pytorch
From eecd3471f46f4c1716a2067063c1581b04ec5266 Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Mon, 27 Jul 2020 08:47:52 -0700
Subject: [PATCH 11/32] Rename
micmicrosoft-becomes-maintainer-of-the-windows-version-of-pytorch to
2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch
---
...icrosoft-becomes-maintainer-of-the-windows-version-of-pytorch} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename _posts/{micmicrosoft-becomes-maintainer-of-the-windows-version-of-pytorch => 2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch} (100%)
diff --git a/_posts/micmicrosoft-becomes-maintainer-of-the-windows-version-of-pytorch b/_posts/2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch
similarity index 100%
rename from _posts/micmicrosoft-becomes-maintainer-of-the-windows-version-of-pytorch
rename to _posts/2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch
From 6648b3273b19499c8de3158579d1f800005bc767 Mon Sep 17 00:00:00 2001
From: Bruce Lin <49162601+brucejlin1@users.noreply.github.com>
Date: Mon, 27 Jul 2020 08:53:30 -0700
Subject: [PATCH 12/32] Updating file name
---
...g-training-on-ngpus-with-pytorch-automatic-mixed-precision.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename _posts/{2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md => 2020-7-20-accelerating-training-on-ngpus-with-pytorch-automatic-mixed-precision.md} (100%)
diff --git a/_posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-7-20-accelerating-training-on-ngpus-with-pytorch-automatic-mixed-precision.md
similarity index 100%
rename from _posts/2020-7-20-Accelerating Training on Ngpus-with-pytorch-automatic-mixed-precision.md
rename to _posts/2020-7-20-accelerating-training-on-ngpus-with-pytorch-automatic-mixed-precision.md
From 9fff70a5a849f859b5257da053a0983eaf7b24fd Mon Sep 17 00:00:00 2001
From: Bruce Lin <49162601+brucejlin1@users.noreply.github.com>
Date: Mon, 27 Jul 2020 08:54:19 -0700
Subject: [PATCH 13/32] Updating title
---
...ning-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename _posts/{2020-7-20-accelerating-training-on-ngpus-with-pytorch-automatic-mixed-precision.md => 2020-7-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md} (100%)
diff --git a/_posts/2020-7-20-accelerating-training-on-ngpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-7-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md
similarity index 100%
rename from _posts/2020-7-20-accelerating-training-on-ngpus-with-pytorch-automatic-mixed-precision.md
rename to _posts/2020-7-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md
From b9be3b859bc256e6d57ad6c894def8c672bc282c Mon Sep 17 00:00:00 2001
From: Bruce Lin <49162601+brucejlin1@users.noreply.github.com>
Date: Mon, 27 Jul 2020 08:58:57 -0700
Subject: [PATCH 14/32] Updating file name
---
...icrosoft-becomes-maintainer-of-the-windows-version-of-pytorch} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename _posts/{2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch => 2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch} (100%)
diff --git a/_posts/2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch b/_posts/2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch
similarity index 100%
rename from _posts/2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch
rename to _posts/2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch
From cd66a672294d270dbfb61e3eb83da5a93f6f4ac2 Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Mon, 27 Jul 2020 08:59:12 -0700
Subject: [PATCH 15/32] Create
2020-07-23-feature-classification-changes-in-pyTorch.org
---
...ture-classification-changes-in-pyTorch.org | 43 +++++++++++++++++++
1 file changed, 43 insertions(+)
create mode 100644 _posts/2020-07-23-feature-classification-changes-in-pyTorch.org
diff --git a/_posts/2020-07-23-feature-classification-changes-in-pyTorch.org b/_posts/2020-07-23-feature-classification-changes-in-pyTorch.org
new file mode 100644
index 000000000000..33b56c706a28
--- /dev/null
+++ b/_posts/2020-07-23-feature-classification-changes-in-pyTorch.org
@@ -0,0 +1,43 @@
+---
+layout: blog_detail
+title: 'Feature Classification Changes in PyTorch.org'
+author: PyTorch Team
+---
+
+**Background:** This document inventories all of the changes we need to make to pytorch.org (http://pytorch.org/) in order to implement the new feature classification rubric called out in the post [here](https://fb.prod.workplace.com/groups/ToffeeInternal/permalink/667964690448681/).
+
+
+## New Feature Designations (need to be wordsmithed for external consumption)
+
+**Stable** - the value-add is proven, the API isn’t expected to change, the feature is performant and all documentation exists to support end user adoption.
+
+Level of commitment: We are committing to maintaining the [Backwards Compatibility](https://www.internalfb.com/intern/wiki/PyTorch/PyTorchDev/BCBreakingProcess/), performance, and documentation going forward.
+
+**Beta** - The value-add of the general feature area has been proven (e.g. pruning is a commonly used technique for reducing the number of parameters in NN models, independent of the implementation details of our particular choices) and the feature generally works. This feature is tagged as Beta because the API may change based on user feedback, because the performance needs to improve or because coverage is not yet complete.
+
+Level of commitment: We are committing to seeing the feature through to Stable / GA. We are not committing to [Backwards Compatibility](https://www.internalfb.com/intern/wiki/PyTorch/PyTorchDev/BCBreakingProcess/). Users can depend on us providing a solution for problems in this area going forward, but the APIs and performance characteristics of our solution may change.
+
+**Prototype** - The feature is not broadly available (except maybe behind compile-time or run-time flags), but we would like to get high bandwidth partner feedback (see [#Dogfooding](https://fb.workplace.com/groups/ToffeeInternal/permalink/654476565130827/)) ahead of a real release in order to gauge utility and any changes we need to make to the UX.
+
+Level of commitment: We are committing to gathering high bandwidth partner feedback only. Based on this feedback and potential further engagement, we will decide if we want to upgrade the level of commitment or to fail fast.
+
+## Changes to [PyTorch.org](http://pytorch.org/)
+
+1. Add a landing page in [pytorch.org/docs](http://pytorch.org/docs) called ‘Feature Designations’ with:
+ * 1. the above definitions
+ * 2. A list of the early stage features (with hyperlinks) called out below
+2. Update feature level designations:
+ * 1. beta (was experimental): high level autograd APIs (Greg) - DONE
+ * 2. beta (was experimental): eager mode quant (Joe) - DONE
+ * 3. prototype (was experimental): named tensors (Greg) - DONE
+ * 4. prototype (was experimental): torchscript/rpc (Joe) - DONE
+ * 5. Beta (was experimental): channels last (Greg) - DONE
+ * 6. Beta (was experimental): custom C++ Classes (Greg) - DONE
+ * 7. Beta (was experimental): PyTorch Mobile (Joe) - DONE
+ * 8. Beta (was experimental): Java Bindings (Joe)
+
+
+
+
+
+
From 4d03e9763925f856ce766f71770070523be7f549 Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Mon, 27 Jul 2020 09:44:43 -0700
Subject: [PATCH 16/32] Rename
2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch to
2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md
Adding .md extension
---
...osoft-becomes-maintainer-of-the-windows-version-of-pytorch.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename _posts/{2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch => 2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md} (100%)
diff --git a/_posts/2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch b/_posts/2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md
similarity index 100%
rename from _posts/2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch
rename to _posts/2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md
From 1fe2b3fc7ee0362dcb8aba614d03e0292a5b1046 Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Mon, 27 Jul 2020 09:48:26 -0700
Subject: [PATCH 17/32] Rename
2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md
to
2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md
---
...osoft-becomes-maintainer-of-the-windows-version-of-pytorch.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename _posts/{2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md => 2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md} (100%)
diff --git a/_posts/2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md b/_posts/2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md
similarity index 100%
rename from _posts/2020-7-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md
rename to _posts/2020-07-22-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md
From 16ccf120b56fd9fcb1b12ddc6630941ee8a93d62 Mon Sep 17 00:00:00 2001
From: andresruizfacebook
<68402331+andresruizfacebook@users.noreply.github.com>
Date: Mon, 27 Jul 2020 09:49:14 -0700
Subject: [PATCH 18/32] Rename
2020-7-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md
to
2020-07-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md
---
...ning-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename _posts/{2020-7-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md => 2020-07-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md} (100%)
diff --git a/_posts/2020-7-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-07-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md
similarity index 100%
rename from _posts/2020-7-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md
rename to _posts/2020-07-20-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md
From e68892004311b6f3eb5c6ad6d93aaf47ff9ee0a7 Mon Sep 17 00:00:00 2001
From: Bruce Lin <49162601+brucejlin1@users.noreply.github.com>
Date: Mon, 27 Jul 2020 09:50:11 -0700
Subject: [PATCH 19/32] Update and rename
2020-07-23-feature-classification-changes-in-pyTorch.org to
2020-07-20-pytorch-feature-classification-changes.md
---
...-pytorch-feature-classification-changes.md | 47 +++++++++++++++++++
...ture-classification-changes-in-pyTorch.org | 43 -----------------
2 files changed, 47 insertions(+), 43 deletions(-)
create mode 100644 _posts/2020-07-20-pytorch-feature-classification-changes.md
delete mode 100644 _posts/2020-07-23-feature-classification-changes-in-pyTorch.org
diff --git a/_posts/2020-07-20-pytorch-feature-classification-changes.md b/_posts/2020-07-20-pytorch-feature-classification-changes.md
new file mode 100644
index 000000000000..21cb85b254a8
--- /dev/null
+++ b/_posts/2020-07-20-pytorch-feature-classification-changes.md
@@ -0,0 +1,47 @@
+---
+layout: blog_detail
+title: 'PyTorch Feature Classification Changes'
+author: Team PyTorch
+---
+
+Traditionally, features in PyTorch were classified as either stable or experimental, with an implicit third option of testing bleeding edge features by building master or through installing nightly builds (available via prebuilt wheels). This has, in a few cases, caused some confusion around the level of readiness, commitment to the feature and backward compatibility that can be expected from a user perspective. Moving forward, we’d like to better classify the 3 types of features as well as define explicitly here what each means from a user perspective.
+
+# New Feature Designations
+
+We will continue to have three designations for features but, as mentioned, with a few changes: Stable, Beta (previously experimental) and Prototype (previously ‘nightlies’). Below is a brief description of each and a comment on the backward compatibility expected:
+
+## Stable
+Nothing changes here. A stable feature means that the user value-add is or has been proven, the API isn’t expected to change, the feature is performant and all documentation exists to support end user adoption.
+
+*Level of commitment*: We expect to maintain these features long term; generally there should be no major performance limitations or gaps in documentation, and we also expect to maintain backwards compatibility (although breaking changes can happen and notice will be given one release ahead of time).
+
+## Beta
+We previously called these features ‘Experimental’ and we found that this created confusion amongst some of the users. In the case of a Beta level feature, the value add, similar to a Stable feature, has been proven (e.g. pruning is a commonly used technique for reducing the number of parameters in NN models, independent of the implementation details of our particular choices) and the feature generally works and is documented. This feature is tagged as Beta because the API may change based on user feedback, because the performance needs to improve or because coverage across operators is not yet complete.
+
+*Level of commitment*: We are committing to seeing the feature through to the Stable classification. We are, however, not committing to Backwards Compatibility. Users can depend on us providing a solution for problems in this area going forward, but the APIs and performance characteristics of this feature may change.
+
+## Prototype
+Previously these were features that were known about by developers who paid close attention to RFCs and to features that land in master. In this case the feature is not available as part of binary distributions like PyPI or Conda (except maybe behind run-time flags), but we would like to get high bandwidth partner feedback ahead of a real release in order to gauge utility and any changes we need to make to the UX. To test these kinds of features we would, depending on the feature, recommend building from master or using the nightly wheels that are made available on pytorch.org. For each prototype feature, a pointer to draft docs or other instructions will be provided.
+
+*Level of commitment*: We are committing to gathering high bandwidth feedback only. Based on this feedback and potential further engagement between community members, we as a community will decide if we want to upgrade the level of commitment or to fail fast. Additionally, while some of these features might be more speculative (e.g. new Frontend APIs), others have obvious utility (e.g. model optimization) but may be in a state where gathering feedback outside of high bandwidth channels is not practical, e.g. the feature may be in an earlier state, may be moving fast (PRs are landing too quickly to catch a major release) and/or generally active development is underway.
+
+# What changes for current features?
+
+First and foremost, you can find these designations on [pytorch.org/docs](http://pytorch.org/docs). We will also be linking any early stage features here for clarity.
+
+Additionally, the following features will be reclassified under this new rubric:
+
+1. [High Level Autograd APIs](https://pytorch.org/docs/stable/autograd.html#functional-higher-level-api): Beta (was Experimental)
+2. [Eager Mode Quantization](https://pytorch.org/docs/stable/quantization.html): Beta (was Experimental)
+3. [Named Tensors](https://pytorch.org/docs/stable/named_tensor.html): Prototype (was Experimental)
+4. [TorchScript/RPC](https://pytorch.org/docs/stable/rpc.html#rpc): Prototype (was Experimental)
+5. [Channels Last Memory Layout](https://pytorch.org/docs/stable/tensor_attributes.html#torch-memory-format): Beta (was Experimental)
+6. [Custom C++ Classes](https://pytorch.org/docs/stable/jit.html?highlight=experimental): Beta (was Experimental)
+7. [PyTorch Mobile](https://pytorch.org/mobile/home/): Beta (was Experimental)
+8. [Java Bindings](https://pytorch.org/docs/stable/packages.html#): Beta (was Experimental)
+9. [Torch.Sparse](https://pytorch.org/docs/stable/sparse.html?highlight=experimental#): Beta (was Experimental)
+
+
+Cheers,
+
+Joe, Greg, Woo & Jessica
diff --git a/_posts/2020-07-23-feature-classification-changes-in-pyTorch.org b/_posts/2020-07-23-feature-classification-changes-in-pyTorch.org
deleted file mode 100644
index 33b56c706a28..000000000000
--- a/_posts/2020-07-23-feature-classification-changes-in-pyTorch.org
+++ /dev/null
@@ -1,43 +0,0 @@
----
-layout: blog_detail
-title: 'Feature Classification Changes in PyTorch.org'
-author: PyTorch Team
----
-
-**Background:** This document inventories all of the changes we need to make to pytorch.org (http://pytorch.org/) in order to implement the new feature classification rubric called out in the post [here](https://fb.prod.workplace.com/groups/ToffeeInternal/permalink/667964690448681/).
-
-
-## New Feature Designations (need to be wordsmithed for external consumption)
-
-**Stable** - the value-add is proven, the API isn’t expected to change, the feature is performant and all documentation exists to support end user adoption.
-
-Level of commitment: We are committing to maintaining the [Backwards Compatibility](https://www.internalfb.com/intern/wiki/PyTorch/PyTorchDev/BCBreakingProcess/), performance, and documentation going forward.
-
-**Beta**- The value add of the general feature area has been proven (e.g. pruning is a commonly used technique for reducing the number of parameters in NN models, independent of the implementation details of our particular choices) and the feature generally works. This feature is tagged as Beta because the API may change based on user feedback, because the performance needs to improve or because coverage is not yet complete.
-
-Level of commitment: We are committing to seeing the feature through to Stable / GA. We are not committing to [Backwards Compatibility](https://www.internalfb.com/intern/wiki/PyTorch/PyTorchDev/BCBreakingProcess/) Users can depend on us providing a solution for problems in this area going forward, but the APIs and performance characteristics of our solution may change.
-
-**Prototype** - The feature is not broadly available (except maybe behind compile-time or run-time flags), but we would like to get high bandwidth partner feedback (see [#Dogfooding](https://fb.workplace.com/groups/ToffeeInternal/permalink/654476565130827/)) ahead of a real release in order to gauge utility and any changes we need to make to the UX.
-
-Level of commitment: We are committing to gathering high bandwidth partner feedback only. Based on this feedback and potential further engagement, we will decide if we want to upgrade the level of commitment or to fail fast.
-
-## Changes to [PyTorch.org](http://pytorch.org/)
-
-1. Add a landing page in [pytorch.org/docs](http://pytorch.org/docs) called ‘Feature Designations’ with:
- * 1. the above definitions
- * 2. A list of the early stage features (with hyperlinks) called out below
-2. Update feature level designations:
- * 1. beta (was experimental): high level autograd APIs (Greg) - DONE
- * 2. beta (was experimental): eager mode quant (Joe) - DONE
- * 3. prototype (was experimental): named tensors (Greg) - DONE
- * 4. prototype (was experimental): torchscript/rpc (Joe) - DONE
- * 5. Beta (was experimental): channels last (Greg) - DONE
- * 6. Beta (was experimental): custom C++ Classes (Greg) - DONE
- * 7. Beta (was experimental): PyTorch Mobile (Joe) - DONE
- * 8. Beta (was experimental): Java Bindings (Joe)
-
-
-
-
-
-
From d27286bc651fb802b167cf5b64777ed51b58ae8a Mon Sep 17 00:00:00 2001
From: Bruce Lin <49162601+brucejlin1@users.noreply.github.com>
Date: Mon, 27 Jul 2020 10:02:10 -0700
Subject: [PATCH 20/32] Adding image for Feature Classification post
---
assets/images/install-matrix.png | Bin 0 -> 35602 bytes
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 assets/images/install-matrix.png
diff --git a/assets/images/install-matrix.png b/assets/images/install-matrix.png
new file mode 100644
index 0000000000000000000000000000000000000000..3313d64216173487a2ce39543f04691d00796d17
GIT binary patch
literal 35602
[base85-encoded binary data for assets/images/install-matrix.png omitted]
literal 0
HcmV?d00001
From 73cb0c27113d38fc5b7619aee10e363917b2dbb5 Mon Sep 17 00:00:00 2001
From: Bruce Lin <49162601+brucejlin1@users.noreply.github.com>
Date: Mon, 27 Jul 2020 10:04:11 -0700
Subject: [PATCH 21/32] Update
2020-07-20-pytorch-feature-classification-changes.md
---
_posts/2020-07-20-pytorch-feature-classification-changes.md | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/_posts/2020-07-20-pytorch-feature-classification-changes.md b/_posts/2020-07-20-pytorch-feature-classification-changes.md
index 21cb85b254a8..b947f77aa643 100644
--- a/_posts/2020-07-20-pytorch-feature-classification-changes.md
+++ b/_posts/2020-07-20-pytorch-feature-classification-changes.md
@@ -20,6 +20,10 @@ We previously called these features ‘Experimental’ and we found that this cr
*Level of commitment*: We are committing to seeing the feature through to the Stable classification. We are however not committing to Backwards Compatibility. Users can depend on us providing a solution for problems in this area going forward, but the APIs and performance characteristics of this feature may change.
+