
Commit 361d6ca

Generate Python docs from pytorch/pytorch@3c3874d
1 parent ed9a590 commit 361d6ca

File tree

2,270 files changed: +4527 / -3522 lines


docs/main/_images/RReLU.png

291 Bytes

docs/main/_modules/index.html

Lines changed: 2 additions & 1 deletion

@@ -230,7 +230,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+gitb234b94 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+git3c3874d ) &#x25BC</a>
 </div>

@@ -588,6 +588,7 @@ <h1>All modules for which code is available</h1>
 <li><a href="torch/distributed/rpc/functions.html">torch.distributed.rpc.functions</a></li>
 <li><a href="torch/distributed/rpc/options.html">torch.distributed.rpc.options</a></li>
 </ul><li><a href="torch/distributed/tensor/parallel/api.html">torch.distributed.tensor.parallel.api</a></li>
+<li><a href="torch/distributed/tensor/parallel/ddp.html">torch.distributed.tensor.parallel.ddp</a></li>
 <li><a href="torch/distributed/tensor/parallel/fsdp.html">torch.distributed.tensor.parallel.fsdp</a></li>
 <li><a href="torch/distributed/tensor/parallel/style.html">torch.distributed.tensor.parallel.style</a></li>
 </ul><li><a href="torch/distributions/bernoulli.html">torch.distributions.bernoulli</a></li>

docs/main/_modules/torch.html

Lines changed: 29 additions & 9 deletions

@@ -230,7 +230,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+gitb234b94 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+git3c3874d ) &#x25BC</a>
 </div>

@@ -1340,17 +1340,37 @@ <h1>Source code for torch</h1><div class="highlight"><pre>

 <span class="sd"> Supports three settings:</span>

-<span class="sd"> * &quot;highest&quot;, float32 matrix multiplications use the float32 datatype for</span>
-<span class="sd"> internal computations.</span>
-<span class="sd"> * &quot;high&quot;, float32 matrix multiplications use the TensorFloat32 or bfloat16_3x</span>
-<span class="sd"> datatypes for internal computations, if fast matrix multiplication algorithms</span>
-<span class="sd"> using those datatypes internally are available. Otherwise float32</span>
-<span class="sd"> matrix multiplications are computed as if the precision is &quot;highest&quot;.</span>
-<span class="sd"> * &quot;medium&quot;, float32 matrix multiplications use the bfloat16 datatype for</span>
-<span class="sd"> internal computations, if a fast matrix multiplication algorithm</span>
+<span class="sd"> * &quot;highest&quot;, float32 matrix multiplications use the float32 datatype (24 mantissa</span>
+<span class="sd"> bits) for internal computations.</span>
+<span class="sd"> * &quot;high&quot;, float32 matrix multiplications either use the TensorFloat32 datatype (10</span>
+<span class="sd"> mantissa bits) or treat each float32 number as the sum of two bfloat16 numbers</span>
+<span class="sd"> (approximately 16 mantissa bits), if the appropriate fast matrix multiplication</span>
+<span class="sd"> algorithms are available. Otherwise float32 matrix multiplications are computed</span>
+<span class="sd"> as if the precision is &quot;highest&quot;. See below for more information on the bfloat16</span>
+<span class="sd"> approach.</span>
+<span class="sd"> * &quot;medium&quot;, float32 matrix multiplications use the bfloat16 datatype (8 mantissa</span>
+<span class="sd"> bits) for internal computations, if a fast matrix multiplication algorithm</span>
 <span class="sd"> using that datatype internally is available. Otherwise float32</span>
 <span class="sd"> matrix multiplications are computed as if the precision is &quot;high&quot;.</span>

+<span class="sd"> When using &quot;high&quot; precision, float32 multiplications may use a bfloat16-based algorithm</span>
+<span class="sd"> that is more complicated than simply truncating to some smaller number mantissa bits</span>
+<span class="sd"> (e.g. 10 for TensorFloat32, 8 for bfloat16). Refer to [Henry2019]_ for a complete</span>
+<span class="sd"> description of this algorithm. To briefly explain here, the first step is to realize</span>
+<span class="sd"> that we can perfectly encode a single float32 number as the sum of three bfloat16</span>
+<span class="sd"> numbers (because float32 has 24 mantissa bits while bfloat16 has 8, and both have the</span>
+<span class="sd"> same number of exponent bits). This means that the product of two float32 numbers can</span>
+<span class="sd"> be exactly given by the sum of nine products of bfloat16 numbers. We can then trade</span>
+<span class="sd"> accuracy for speed by dropping some of these products. The &quot;high&quot; precision algorithm</span>
+<span class="sd"> specifically keeps only the three most significant products, which conveniently excludes</span>
+<span class="sd"> all of the products involving the last 8 mantissa bits of either input. This means that</span>
+<span class="sd"> we can represent our inputs as the sum of two bfloat16 numbers rather than three.</span>
+<span class="sd"> Because bfloat16 fused-multiply-add (FMA) instructions are typically &gt;10x faster than</span>
+<span class="sd"> float32 ones, it&#39;s faster to do three multiplications and 2 additions with bfloat16</span>
+<span class="sd"> precision than it is to do a single multiplication with float32 precision.</span>
+
+<span class="sd"> .. [Henry2019] http://arxiv.org/abs/1904.06376</span>
+
 <span class="sd"> .. note::</span>

 <span class="sd"> This does not change the output dtype of float32 matrix multiplications,</span>

docs/main/_modules/torch/__config__.html

Lines changed: 1 addition & 1 deletion

@@ -230,7 +230,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+gitb234b94 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+git3c3874d ) &#x25BC</a>
 </div>


docs/main/_modules/torch/_jit_internal.html

Lines changed: 1 addition & 1 deletion

@@ -230,7 +230,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+gitb234b94 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+git3c3874d ) &#x25BC</a>
 </div>


docs/main/_modules/torch/_lobpcg.html

Lines changed: 1 addition & 1 deletion

@@ -230,7 +230,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+gitb234b94 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+git3c3874d ) &#x25BC</a>
 </div>


docs/main/_modules/torch/_logging/_internal.html

Lines changed: 1 addition & 1 deletion

@@ -230,7 +230,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+gitb234b94 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+git3c3874d ) &#x25BC</a>
 </div>


docs/main/_modules/torch/_lowrank.html

Lines changed: 1 addition & 1 deletion

@@ -230,7 +230,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+gitb234b94 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+git3c3874d ) &#x25BC</a>
 </div>


docs/main/_modules/torch/_tensor.html

Lines changed: 1 addition & 1 deletion

@@ -230,7 +230,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+gitb234b94 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+git3c3874d ) &#x25BC</a>
 </div>


docs/main/_modules/torch/_tensor_str.html

Lines changed: 1 addition & 1 deletion

@@ -230,7 +230,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+gitb234b94 ) &#x25BC</a>
+<a href='https://pytorch.org/docs/versions.html'>main (2.1.0a0+git3c3874d ) &#x25BC</a>
 </div>

