
Commit 267cfb9

Deploy to GitHub Pages on master [ci skip]
1 parent cbd7a15 commit 267cfb9

File tree

3 files changed: +3 −3 lines


feed.xml (+1 −1)
@@ -1,4 +1,4 @@
-<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.7.3">Jekyll</generator><link href="https://pytorch.org/feed.xml" rel="self" type="application/atom+xml" /><link href="https://pytorch.org/" rel="alternate" type="text/html" /><updated>2020-12-02T01:29:38-08:00</updated><id>https://pytorch.org/</id><title type="html">PyTorch Website</title><subtitle>Scientific Computing...</subtitle><author><name>Facebook</name></author><entry><title type="html">Prototype Features Now Available - APIs for Hardware Accelerated Mobile and ARM64 Builds</title><link href="https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/" rel="alternate" type="text/html" title="Prototype Features Now Available - APIs for Hardware Accelerated Mobile and ARM64 Builds" /><published>2020-11-12T00:00:00-08:00</published><updated>2020-11-12T00:00:00-08:00</updated><id>https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds</id><content type="html" xml:base="https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/">&lt;p&gt;Today, we are announcing four PyTorch prototype features. The first three of these will enable Mobile machine-learning developers to execute models on the full set of hardware (HW) engines making up a system-on-chip (SOC). This gives developers options to optimize their model execution for unique performance, power, and system-level concurrency.&lt;/p&gt;
+<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.7.3">Jekyll</generator><link href="https://pytorch.org/feed.xml" rel="self" type="application/atom+xml" /><link href="https://pytorch.org/" rel="alternate" type="text/html" /><updated>2020-12-02T21:29:53-08:00</updated><id>https://pytorch.org/</id><title type="html">PyTorch Website</title><subtitle>Scientific Computing...</subtitle><author><name>Facebook</name></author><entry><title type="html">Prototype Features Now Available - APIs for Hardware Accelerated Mobile and ARM64 Builds</title><link href="https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/" rel="alternate" type="text/html" title="Prototype Features Now Available - APIs for Hardware Accelerated Mobile and ARM64 Builds" /><published>2020-11-12T00:00:00-08:00</published><updated>2020-11-12T00:00:00-08:00</updated><id>https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds</id><content type="html" xml:base="https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/">&lt;p&gt;Today, we are announcing four PyTorch prototype features. The first three of these will enable Mobile machine-learning developers to execute models on the full set of hardware (HW) engines making up a system-on-chip (SOC). This gives developers options to optimize their model execution for unique performance, power, and system-level concurrency.&lt;/p&gt;
 
 &lt;p&gt;These features include enabling execution on the following on-device HW engines:&lt;/p&gt;
 &lt;ul&gt;

get-started/locally/index.html (+1 −1)
@@ -765,7 +765,7 @@ <h3 id="windows-prerequisites-2">Prerequisites</h3>
 <ol>
 <li>Install <a href="#anaconda">Anaconda</a></li>
 <li>Install <a href="https://developer.nvidia.com/cuda-downloads">CUDA</a>, if your machine has a <a href="https://developer.nvidia.com/cuda-gpus">CUDA-enabled GPU</a>.</li>
-<li>If you want to build on Windows, Visual Studio 2017 14.11 toolset and NVTX are also needed. Especially, for CUDA 8 build on Windows, there will be an additional requirement for VS 2015 Update 3 and a patch for it. The details of the patch can be found out <a href="https://support.microsoft.com/en-gb/help/4020481/fix-link-exe-crashes-with-a-fatal-lnk1000-error-when-you-use-wholearch">here</a>.</li>
+<li>If you want to build on Windows, Visual Studio with MSVC toolset, and NVTX are also needed. The exact requirements of those dependencies could be found out <a href="https://github.com/pytorch/pytorch#from-source">here</a>.</li>
 <li>Follow the steps described here: https://github.com/pytorch/pytorch#from-source</li>
 </ol>
 
get_started/installation/windows.md (+1 −1)
@@ -128,7 +128,7 @@ For the majority of PyTorch users, installing from a pre-built binary via a pack
 
 1. Install [Anaconda](#anaconda)
 2. Install [CUDA](https://developer.nvidia.com/cuda-downloads), if your machine has a [CUDA-enabled GPU](https://developer.nvidia.com/cuda-gpus).
-3. If you want to build on Windows, Visual Studio 2017 14.11 toolset and NVTX are also needed. Especially, for CUDA 8 build on Windows, there will be an additional requirement for VS 2015 Update 3 and a patch for it. The details of the patch can be found out [here](https://support.microsoft.com/en-gb/help/4020481/fix-link-exe-crashes-with-a-fatal-lnk1000-error-when-you-use-wholearch).
+3. If you want to build on Windows, Visual Studio with MSVC toolset, and NVTX are also needed. The exact requirements of those dependencies could be found out [here](https://github.com/pytorch/pytorch#from-source).
 4. Follow the steps described here: https://github.com/pytorch/pytorch#from-source
 
 You can verify the installation as described [above](#windows-verification).
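The verification step referenced in that last context line typically amounts to importing the package and checking that CUDA is visible. A minimal sketch of such a check, assuming a completed install (`check_pytorch` is a hypothetical helper, not part of the PyTorch docs; it guards for environments where `torch` is absent rather than assuming the build succeeded):

```python
# Hypothetical post-install check for a from-source PyTorch build.
import importlib.util


def check_pytorch() -> str:
    """Return a short status string describing the local PyTorch install."""
    # Probe for the module first so this script also runs where torch is missing.
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch  # safe: the module spec was found above
    cuda = "available" if torch.cuda.is_available() else "unavailable"
    return f"torch {torch.__version__}, CUDA {cuda}"


if __name__ == "__main__":
    print(check_pytorch())
```

On a machine without a CUDA-enabled GPU (or without the CUDA toolkit from step 2), a working CPU build still reports a version string, just with CUDA unavailable.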
