diff --git a/_posts/2020-07-28-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md b/_posts/2020-07-28-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md
new file mode 100644
index 000000000000..a262a44f1c83
--- /dev/null
+++ b/_posts/2020-07-28-accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision.md
@@ -0,0 +1,131 @@
+---
+layout: blog_detail
+title: 'Introducing native PyTorch automatic mixed precision for faster training on NVIDIA GPUs'
+author: Mengdi Huang, Chetan Tekur, Michael Carilli
+---
+
+Most deep learning frameworks, including PyTorch, train with 32-bit floating point (FP32) arithmetic by default. However, this is not essential to achieve full accuracy for many deep learning models. In 2017, NVIDIA researchers developed a methodology for [mixed-precision training](https://developer.nvidia.com/blog/mixed-precision-training-deep-neural-networks/), which combined [single-precision](https://blogs.nvidia.com/blog/2019/11/15/whats-the-difference-between-single-double-multi-and-mixed-precision-computing/) (FP32) with half-precision (e.g. FP16) formats when training a network, and achieved the same accuracy as FP32 training using the same hyperparameters, with additional performance benefits on NVIDIA GPUs:
+
+* Shorter training time;
+* Lower memory requirements, enabling larger batch sizes, larger models, or larger inputs.
+
+In order to streamline the user experience of training in mixed precision for researchers and practitioners, NVIDIA developed [Apex](https://developer.nvidia.com/blog/apex-pytorch-easy-mixed-precision-training/) in 2018, a lightweight PyTorch extension with an [Automatic Mixed Precision](https://developer.nvidia.com/automatic-mixed-precision) (AMP) feature. This feature enables automatic conversion of certain GPU operations from FP32 to mixed precision, improving performance while maintaining accuracy.
+
+For the PyTorch 1.6 release, developers at NVIDIA and Facebook moved mixed precision functionality into PyTorch core as the AMP package, [torch.cuda.amp](https://pytorch.org/docs/stable/amp.html). `torch.cuda.amp` is more flexible and intuitive compared to `apex.amp`. Some of `apex.amp`'s known pain points that `torch.cuda.amp` has been able to fix:
+
+* Guaranteed PyTorch version compatibility, because it's part of PyTorch
+* No need to build extensions
+* Windows support
+* Bitwise accurate [saving/restoring](https://pytorch.org/docs/master/amp.html#torch.cuda.amp.GradScaler.load_state_dict) of checkpoints (a minimal sketch appears after this list)
+* [DataParallel](https://pytorch.org/docs/master/notes/amp_examples.html#dataparallel-in-a-single-process) and intra-process model parallelism (although we still recommend [torch.nn.DistributedDataParallel](https://pytorch.org/docs/master/notes/amp_examples.html#distributeddataparallel-one-gpu-per-process) with one GPU per process as the most performant approach)
+* [Gradient penalty](https://pytorch.org/docs/master/notes/amp_examples.html#gradient-penalty) (double backward)
+* `torch.cuda.amp.autocast()` has no effect outside regions where it's enabled, so it should serve cases that formerly struggled with multiple calls to [apex.amp.initialize()](https://github.com/NVIDIA/apex/issues/439) (including [cross-validation](https://github.com/NVIDIA/apex/issues/392#issuecomment-610038073)) without difficulty. Multiple convergence runs in the same script should each use a fresh [GradScaler instance](https://github.com/NVIDIA/apex/issues/439#issuecomment-610028282), but GradScalers are lightweight and self-contained, so that's not a problem.
+* Sparse gradient support
+
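+As a concrete illustration of the bitwise-accurate checkpointing mentioned in the list above, `GradScaler` exposes `state_dict()` and `load_state_dict()` just like modules and optimizers do. The snippet below is a minimal sketch, assuming a hypothetical `model`, `optimizer`, the `scaler` used during training, and a placeholder checkpoint path `PATH`; see the AMP docs linked above for the complete recipe.
+
+```python
+import torch
+
+# model, optimizer, and scaler are the objects used during training;
+# PATH is a placeholder for your checkpoint file.
+checkpoint = {
+    "model": model.state_dict(),
+    "optimizer": optimizer.state_dict(),
+    "scaler": scaler.state_dict(),
+}
+torch.save(checkpoint, PATH)
+
+# Later, restore all three before resuming so training picks up
+# with the exact same loss scale.
+checkpoint = torch.load(PATH)
+model.load_state_dict(checkpoint["model"])
+optimizer.load_state_dict(checkpoint["optimizer"])
+scaler.load_state_dict(checkpoint["scaler"])
+```
+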
+With AMP being added to PyTorch core, we have started the process of deprecating `apex.amp`. We have moved `apex.amp` to maintenance mode and will continue to support customers using `apex.amp`. However, we highly encourage `apex.amp` customers to transition to using `torch.cuda.amp` from PyTorch core.
+
+# Example Walkthrough
+Please see the official docs for usage:
+* [https://pytorch.org/docs/stable/amp.html](https://pytorch.org/docs/stable/amp.html)
+* [https://pytorch.org/docs/stable/notes/amp_examples.html](https://pytorch.org/docs/stable/notes/amp_examples.html)
+
+Example:
+
+```python
+import torch
+
+# Creates a GradScaler once at the beginning of training
+scaler = torch.cuda.amp.GradScaler()
+
+for data, label in data_iter:
+    optimizer.zero_grad()
+
+    # Casts operations to mixed precision
+    with torch.cuda.amp.autocast():
+        loss = model(data)
+
+    # Scales the loss, and calls backward()
+    # to create scaled gradients
+    scaler.scale(loss).backward()
+
+    # Unscales gradients and calls
+    # or skips optimizer.step()
+    scaler.step(optimizer)
+
+    # Updates the scale for next iteration
+    scaler.update()
+```
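+
+One common extension of this recipe is gradient clipping. The sketch below makes the same assumptions as the example above (an existing `model`, `optimizer`, and `data_iter`): gradients are explicitly unscaled with `scaler.unscale_(optimizer)` before clipping, so the clipping threshold applies to the true gradient values. See the AMP examples page linked above for the full set of recipes.
+
+```python
+for data, label in data_iter:
+    optimizer.zero_grad()
+
+    with torch.cuda.amp.autocast():
+        loss = model(data)
+
+    scaler.scale(loss).backward()
+
+    # Unscales the gradients in place so the clipping threshold
+    # is applied to the true (unscaled) gradient values
+    scaler.unscale_(optimizer)
+    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
+
+    # step() knows unscale_ was already called for this optimizer
+    # in this iteration, so it will not unscale a second time
+    scaler.step(optimizer)
+
+    # Updates the scale for next iteration
+    scaler.update()
+```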
+
+# Performance Benchmarks
+In this section, we discuss the accuracy and performance of mixed precision training with AMP on the latest NVIDIA A100 GPU and the previous-generation V100 GPU. The mixed precision performance is compared to FP32 performance when running Deep Learning workloads in the [NVIDIA pytorch:20.06-py3 container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) from NGC.
+
+## Accuracy: AMP (FP16), FP32
+The advantage of using AMP for Deep Learning training is that the models converge to a similar final accuracy while providing improved training performance. To illustrate this point, for [Resnet 50 v1.5 training](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/Classification/ConvNets/resnet50v1.5#training-accuracy-nvidia-dgx-a100-8x-a100-40gb), we see the following accuracy results, where higher is better. Please note that the accuracy numbers below are samples that are subject to run-to-run variance of up to 0.4%. Accuracy numbers for other models, including BERT, Transformer, ResNeXt-101, Mask-RCNN, and DLRM, can be found in the [NVIDIA Deep Learning Examples repository on GitHub](https://github.com/NVIDIA/DeepLearningExamples).
+
+Training accuracy: NVIDIA DGX A100 (8x A100 40GB)
+
+| epochs | Mixed Precision Top 1 (%) | TF32 Top 1 (%) |
+| ------ | ------------------------- | -------------- |
+| 90     | 76.93                     | 76.85          |
+
+
+Training accuracy: NVIDIA DGX-1 (8x V100 16GB)
+
+| epochs | Mixed Precision Top 1 (%) | FP32 Top 1 (%) |
+| ------ | ------------------------- | -------------- |
+| 50     | 76.25                     | 76.26          |
+| 90     | 77.09                     | 77.01          |
+| 250    | 78.42                     | 78.30          |
+
+
+## Speedup Performance
+
+### FP16 on NVIDIA V100 vs. FP32 on V100
+AMP with FP16 is the most performant option for DL training on the V100. In Figure 2, we can observe that for various models, AMP on V100 provides a speedup of 1.5x to 5.5x over FP32 on V100 while converging to the same final accuracy.
+
+
+

+
+*Figure 2. Performance of mixed precision training on NVIDIA 8xV100 vs. FP32 training on 8xV100 GPU. Bars represent the speedup factor of V100 AMP over V100 FP32. The higher the better.*
+
+### FP16 on NVIDIA A100 vs. FP16 on V100
+
+AMP with FP16 remains the most performant option for DL training on the A100. In Figure 3, we can observe that for various models, AMP on A100 provides a speedup of 1.3x to 2.5x over AMP on V100 while converging to the same final accuracy.
+
+
+

+
+*Figure 3. Performance of mixed precision training on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100. The higher the better.*
+
+# Call to action
+AMP provides a healthy speedup for Deep Learning training workloads on NVIDIA Tensor Core GPUs, especially on the latest Ampere-generation A100 GPUs. You can start experimenting with AMP-enabled models and model scripts for A100, V100, T4 and other GPUs available in the NVIDIA [Deep Learning Examples](https://github.com/NVIDIA/DeepLearningExamples) repository. NVIDIA PyTorch with native AMP support is available from the [PyTorch NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:pytorch) version 20.06. We highly encourage existing `apex.amp` customers to transition to using `torch.cuda.amp` from PyTorch core, available in the latest [PyTorch 1.6 release](https://pytorch.org/blog/pytorch-1.6-released/).
+
+
+
+
diff --git a/_posts/2020-07-28-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md b/_posts/2020-07-28-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md
new file mode 100644
index 000000000000..8f74a50570bd
--- /dev/null
+++ b/_posts/2020-07-28-microsoft-becomes-maintainer-of-the-windows-version-of-pytorch.md
@@ -0,0 +1,35 @@
+---
+layout: blog_detail
+title: 'Microsoft becomes maintainer of the Windows version of PyTorch'
+author: Maxim Lukiyanov - Principal PM at Microsoft, Emad Barsoum - Group EM at Microsoft, Guoliang Hua - Principal EM at Microsoft, Nikita Shulga - Tech Lead at Facebook, Geeta Chauhan - PE Lead at Facebook, Chris Gottbrath - Technical PM at Facebook, Jiachen Pu - Engineer at Facebook
+
+---
+
+Along with the PyTorch 1.6 release, we are excited to announce that Microsoft has expanded its participation in the PyTorch community and is taking ownership of the development and maintenance of the PyTorch build for Windows.
+
+According to the latest [Stack Overflow developer survey](https://insights.stackoverflow.com/survey/2020#technology-developers-primary-operating-systems), Windows remains the primary operating system for the developer community (46% Windows vs. 28% macOS). [Jiachen Pu](https://github.com/peterjc123) initially made a heroic effort to add support for PyTorch on Windows, but due to limited resources, Windows support for PyTorch has lagged behind other platforms. Lack of test coverage resulted in unexpected issues popping up every now and then. Some of the core tutorials, meant for new users to learn and adopt PyTorch, would fail to run. The installation experience was also not as smooth, with the lack of official PyPI support for PyTorch on Windows. Lastly, some of the PyTorch functionality was simply not available on the Windows platform, such as the TorchAudio domain library and distributed training support. To help alleviate this pain, Microsoft is happy to bring its Windows expertise to the table and bring PyTorch on Windows to its best possible self.
+
+In the PyTorch 1.6 release, we have improved the core quality of the Windows build by bringing test coverage up to par with Linux for core PyTorch and its domain libraries and by automating tutorial testing. Thanks to the broader PyTorch community, which contributed TorchAudio support to Windows, we were able to add test coverage to all three domain libraries: TorchVision, TorchText and TorchAudio. In subsequent releases of PyTorch, we will continue improving the Windows experience based on community feedback and requests. So far, the feedback we received from the community points to distributed training support and a better installation experience using pip as the next areas of improvement.
+
+In addition to the native Windows experience, Microsoft released a preview adding [GPU compute support to Windows Subsystem for Linux (WSL) 2](https://blogs.windows.com/windowsdeveloper/2020/06/17/gpu-accelerated-ml-training-inside-the-windows-subsystem-for-linux/) distros, with a focus on enabling AI and ML developer workflows. WSL is designed for developers who want to run any Linux-based tools directly on Windows. This preview enables valuable scenarios for a variety of frameworks and Python packages that utilize [NVIDIA CUDA](https://developer.nvidia.com/cuda/wsl) for acceleration and only support Linux. This means WSL customers using the preview can run native Linux-based PyTorch applications on Windows unmodified, without the need for a traditional virtual machine or a dual-boot setup.
+
+## Getting started with PyTorch on Windows
+It's easy to get started with PyTorch on Windows. To install PyTorch using Anaconda with the latest GPU support, run the command below. To install different supported configurations of PyTorch, refer to the installation instructions on [pytorch.org](https://pytorch.org).
+
+`conda install pytorch torchvision cudatoolkit=10.2 -c pytorch`
+
+Once you install PyTorch, learn more by visiting the [PyTorch Tutorials](https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) and [documentation](https://pytorch.org/docs/stable/index.html).
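+
+After installation, a quick sanity check can confirm that the GPU build is working. This is a minimal sketch that assumes an NVIDIA GPU and a compatible driver are present on the machine:
+
+```python
+import torch
+
+# Confirm the installed version and that at least one CUDA device is visible
+print(torch.__version__)
+if torch.cuda.is_available():
+    print("CUDA is available:", torch.cuda.get_device_name(0))
+else:
+    print("CUDA is not available; PyTorch will run on CPU")
+```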
+
+
+

+
+
+## Getting started with PyTorch on Windows Subsystem for Linux
+The [preview of NVIDIA CUDA support in WSL](https://docs.microsoft.com/en-us/windows/win32/direct3d12/gpu-cuda-in-wsl) is now available to Windows Insiders running Build 20150 or higher. In WSL, the command to install PyTorch using Anaconda is the same as the above command for native Windows. If you prefer pip, use the command below.
+
+`pip install torch torchvision`
+
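+To confirm that CUDA acceleration is working from inside WSL, a small smoke test like the one below can help. This is a sketch under the assumption that the CUDA-enabled preview driver is installed on the Windows host:
+
+```python
+import torch
+
+# Run a small matrix multiply on the GPU (falls back to CPU if CUDA is unavailable)
+device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+x = torch.randn(1024, 1024, device=device)
+y = torch.randn(1024, 1024, device=device)
+z = x @ y
+print(f"Computed a {z.shape[0]}x{z.shape[1]} matmul on {device}")
+```
+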
+You can use the same tutorials and documentation inside your WSL environment as on native Windows. This functionality is still in preview, so if you run into issues with WSL, please share feedback via the [WSL GitHub repo](https://github.com/microsoft/WSL); for issues with NVIDIA CUDA support, share feedback via NVIDIA’s [Community Forum for CUDA on WSL](https://forums.developer.nvidia.com/c/accelerated-computing/cuda/cuda-on-windows-subsystem-for-linux/303).
+
+## Feedback
+If you find gaps in the PyTorch experience on Windows, please let us know on the [PyTorch discussion forum](https://discuss.pytorch.org/c/windows/26) or file an issue on [GitHub](https://github.com/pytorch/pytorch) using the "module: windows" label.
diff --git a/_posts/2020-07-28-pytorch-feature-classification-changes.md b/_posts/2020-07-28-pytorch-feature-classification-changes.md
new file mode 100644
index 000000000000..1ace6ef10388
--- /dev/null
+++ b/_posts/2020-07-28-pytorch-feature-classification-changes.md
@@ -0,0 +1,51 @@
+---
+layout: blog_detail
+title: 'PyTorch feature classification changes'
+author: Team PyTorch
+---
+
+Traditionally, features in PyTorch were classified as either stable or experimental, with an implicit third option of testing bleeding-edge features by building master or installing nightly builds (available via prebuilt whls). This has, in a few cases, caused some confusion around the level of readiness, commitment to the feature and backward compatibility that can be expected from a user perspective. Moving forward, we’d like to better classify the three types of features as well as define explicitly here what each means from a user perspective.
+
+# New Feature Designations
+
+We will continue to have three designations for features but, as mentioned, with a few changes: Stable, Beta (previously Experimental) and Prototype (previously Nightlies). Below is a brief description of each and a comment on the backward compatibility expected:
+
+## Stable
+Nothing changes here. A stable feature means that the user value-add is or has been proven, the API isn’t expected to change, the feature is performant and all documentation exists to support end user adoption.
+
+*Level of commitment*: We expect to maintain these features long term, and generally there should be no major performance limitations or gaps in documentation. We also expect to maintain backwards compatibility (although breaking changes can happen, and notice will be given one release ahead of time).
+
+## Beta
+We previously called these features ‘Experimental’ and we found that this created confusion amongst some of the users. In the case of a Beta-level feature, the value add, similar to a Stable feature, has been proven (e.g. pruning is a commonly used technique for reducing the number of parameters in NN models, independent of the implementation details of our particular choices) and the feature generally works and is documented. This feature is tagged as Beta because the API may change based on user feedback, because the performance needs to improve or because coverage across operators is not yet complete.
+
+*Level of commitment*: We are committing to seeing the feature through to the Stable classification. We are, however, not committing to backwards compatibility. Users can depend on us providing a solution for problems in this area going forward, but the APIs and performance characteristics of this feature may change.
+
+
+

+
+
+## Prototype
+Previously, these were features known only to developers who paid close attention to RFCs and to features that land in master. In this case the feature is not available as part of binary distributions like PyPI or Conda (except maybe behind run-time flags), but we would like to get high bandwidth partner feedback ahead of a real release in order to gauge utility and any changes we need to make to the UX. To test these kinds of features we would, depending on the feature, recommend building from master or using the nightly whls that are made available on pytorch.org. For each prototype feature, a pointer to draft docs or other instructions will be provided.
+
+*Level of commitment*: We are committing to gathering high bandwidth feedback only. Based on this feedback and potential further engagement between community members, we as a community will decide if we want to upgrade the level of commitment or to fail fast. Additionally, while some of these features might be more speculative (e.g. new frontend APIs), others have obvious utility (e.g. model optimization) but may be in a state where gathering feedback outside of high bandwidth channels is not practical: the feature may be in an early state, may be moving fast (PRs are landing too quickly to catch a major release), and/or active development is generally underway.
+
+# What changes for current features?
+
+First and foremost, you can find these designations on [pytorch.org/docs](http://pytorch.org/docs). We will also be linking any early stage features here for clarity.
+
+Additionally, the following features will be reclassified under this new rubric:
+
+1. [High Level Autograd APIs](https://pytorch.org/docs/stable/autograd.html#functional-higher-level-api): Beta (was Experimental)
+2. [Eager Mode Quantization](https://pytorch.org/docs/stable/quantization.html): Beta (was Experimental)
+3. [Named Tensors](https://pytorch.org/docs/stable/named_tensor.html): Prototype (was Experimental)
+4. [TorchScript/RPC](https://pytorch.org/docs/stable/rpc.html#rpc): Prototype (was Experimental)
+5. [Channels Last Memory Layout](https://pytorch.org/docs/stable/tensor_attributes.html#torch-memory-format): Beta (was Experimental)
+6. [Custom C++ Classes](https://pytorch.org/docs/stable/jit.html?highlight=experimental): Beta (was Experimental)
+7. [PyTorch Mobile](https://pytorch.org/mobile/home/): Beta (was Experimental)
+8. [Java Bindings](https://pytorch.org/docs/stable/packages.html#): Beta (was Experimental)
+9. [Torch.Sparse](https://pytorch.org/docs/stable/sparse.html?highlight=experimental#): Beta (was Experimental)
+
+
+Cheers,
+
+Joe, Greg, Woo & Jessica
diff --git a/assets/images/install-matrix.png b/assets/images/install-matrix.png
new file mode 100644
index 000000000000..3313d6421617
Binary files /dev/null and b/assets/images/install-matrix.png differ