
v1.6.0 docs redo for stable (0.7) vision #458


Merged · 1 commit · Aug 19, 2020
2 changes: 1 addition & 1 deletion docs/stable/.buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 93625c989914b71802289037a0f16437
+config: 892a2ec27a03fe01b7f360c8920a2882
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file removed docs/stable/_images/add_histogram.png
Binary file removed docs/stable/_images/add_hparam.png
Binary file removed docs/stable/_images/add_image.png
Binary file removed docs/stable/_images/add_images.png
Binary file removed docs/stable/_images/add_scalar.png
Binary file removed docs/stable/_images/add_scalars.png
1 change: 0 additions & 1 deletion docs/stable/_modules/index.html
@@ -494,7 +494,6 @@ <h1>All modules for which code is available</h1>
<li><a href="torch/utils/data/distributed.html">torch.utils.data.distributed</a></li>
<li><a href="torch/utils/data/sampler.html">torch.utils.data.sampler</a></li>
<li><a href="torch/utils/mobile_optimizer.html">torch.utils.mobile_optimizer</a></li>
<li><a href="torch/utils/tensorboard/writer.html">torch.utils.tensorboard.writer</a></li>
</ul><li><a href="torchvision.html">torchvision</a></li>
<ul><li><a href="torchvision/datasets/celeba.html">torchvision.datasets.celeba</a></li>
<li><a href="torchvision/datasets/cifar.html">torchvision.datasets.cifar</a></li>
4 changes: 2 additions & 2 deletions docs/stable/_modules/torch.html
@@ -838,9 +838,9 @@ <h1>Source code for torch</h1><div class="highlight"><pre>
<span class="k">del</span> <span class="n">_torch_docs</span><span class="p">,</span> <span class="n">_tensor_docs</span><span class="p">,</span> <span class="n">_storage_docs</span>


<div class="viewcode-block" id="compiled_with_cxx11_abi"><a class="viewcode-back" href="../generated/torch.compiled_with_cxx11_abi.html#torch.compiled_with_cxx11_abi">[docs]</a><span class="k">def</span> <span class="nf">compiled_with_cxx11_abi</span><span class="p">():</span>
<span class="k">def</span> <span class="nf">compiled_with_cxx11_abi</span><span class="p">():</span>
<span class="sa">r</span><span class="sd">&quot;&quot;&quot;Returns whether PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1&quot;&quot;&quot;</span>
<span class="k">return</span> <span class="n">_C</span><span class="o">.</span><span class="n">_GLIBCXX_USE_CXX11_ABI</span></div>
<span class="k">return</span> <span class="n">_C</span><span class="o">.</span><span class="n">_GLIBCXX_USE_CXX11_ABI</span>


<span class="c1"># Import the ops &quot;namespace&quot;</span>
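For context on the API touched by this hunk: `torch.compiled_with_cxx11_abi` reports whether the build used the C++11 ABI, which matters when linking custom C++ extensions against libtorch. A minimal usage sketch, assuming a stock PyTorch 1.6 install:

```python
import torch

# True if PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1;
# C++ extensions should be compiled with the matching ABI flag.
print(torch.compiled_with_cxx11_abi())
```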
4 changes: 2 additions & 2 deletions docs/stable/_modules/torch/_jit_internal.html
@@ -710,7 +710,7 @@ <h1>Source code for torch._jit_internal</h1><div class="highlight"><pre>
<span class="n">fn</span><span class="o">.</span><span class="n">_torchscript_modifier</span> <span class="o">=</span> <span class="n">FunctionModifiers</span><span class="o">.</span><span class="n">UNUSED</span>
<span class="k">return</span> <span class="n">fn</span></div>

<span class="k">def</span> <span class="nf">ignore</span><span class="p">(</span><span class="n">drop</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<div class="viewcode-block" id="ignore"><a class="viewcode-back" href="../../generated/torch.jit.ignore.html#torch.jit.ignore">[docs]</a><span class="k">def</span> <span class="nf">ignore</span><span class="p">(</span><span class="n">drop</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="sd">&quot;&quot;&quot;</span>
<span class="sd"> This decorator indicates to the compiler that a function or method should</span>
<span class="sd"> be ignored and left as a Python function. This allows you to leave code in</span>
@@ -801,7 +801,7 @@ <h1>Source code for torch._jit_internal</h1><div class="highlight"><pre>
<span class="k">else</span><span class="p">:</span>
<span class="n">fn</span><span class="o">.</span><span class="n">_torchscript_modifier</span> <span class="o">=</span> <span class="n">FunctionModifiers</span><span class="o">.</span><span class="n">IGNORE</span>
<span class="k">return</span> <span class="n">fn</span>
<span class="k">return</span> <span class="n">decorator</span>
<span class="k">return</span> <span class="n">decorator</span></div>


<span class="k">def</span> <span class="nf">_copy_to_script_wrapper</span><span class="p">(</span><span class="n">fn</span><span class="p">):</span>
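The `torch.jit.ignore` decorator that gains a viewcode anchor here tells the TorchScript compiler to leave a function or method as plain Python. A minimal sketch of its use (module and method names are illustrative):

```python
import torch

class MyModule(torch.nn.Module):
    @torch.jit.ignore
    def debug_log(self, x: torch.Tensor) -> torch.Tensor:
        # Arbitrary Python: the compiler skips this body and
        # dispatches to the Python interpreter at runtime.
        print("shape:", x.shape)
        return x

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.debug_log(x) + 1

scripted = torch.jit.script(MyModule())
print(scripted(torch.ones(2)))  # tensor([2., 2.])
```

Note that a scripted module that calls ignored code can run, but cannot be saved with `torch.jit.save`.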
4 changes: 2 additions & 2 deletions docs/stable/_modules/torch/_lowrank.html
@@ -419,7 +419,7 @@ <h1>Source code for torch._lowrank</h1><div class="highlight"><pre>
<span class="k">return</span> <span class="n">Q</span>


<span class="k">def</span> <span class="nf">svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="mi">6</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
<div class="viewcode-block" id="svd_lowrank"><a class="viewcode-back" href="../../generated/torch.svd_lowrank.html#torch.svd_lowrank">[docs]</a><span class="k">def</span> <span class="nf">svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="mi">6</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
<span class="c1"># type: (Tensor, Optional[int], Optional[int], Optional[Tensor]) -&gt; Tuple[Tensor, Tensor, Tensor]</span>
<span class="sa">r</span><span class="sd">&quot;&quot;&quot;Return the singular value decomposition ``(U, S, V)`` of a matrix,</span>
<span class="sd"> batches of matrices, or a sparse matrix :math:`A` such that</span>
@@ -464,7 +464,7 @@ <h1>Source code for torch._lowrank</h1><div class="highlight"><pre>
<span class="n">tensor_ops</span> <span class="o">=</span> <span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">M</span><span class="p">)</span>
<span class="k">if</span> <span class="p">(</span><span class="ow">not</span> <span class="nb">set</span><span class="p">(</span><span class="nb">map</span><span class="p">(</span><span class="nb">type</span><span class="p">,</span> <span class="n">tensor_ops</span><span class="p">))</span><span class="o">.</span><span class="n">issubset</span><span class="p">((</span><span class="n">torch</span><span class="o">.</span><span class="n">Tensor</span><span class="p">,</span> <span class="nb">type</span><span class="p">(</span><span class="kc">None</span><span class="p">)))</span> <span class="ow">and</span> <span class="n">has_torch_function</span><span class="p">(</span><span class="n">tensor_ops</span><span class="p">)):</span>
<span class="k">return</span> <span class="n">handle_torch_function</span><span class="p">(</span><span class="n">svd_lowrank</span><span class="p">,</span> <span class="n">tensor_ops</span><span class="p">,</span> <span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="n">q</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="n">niter</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="n">M</span><span class="p">)</span>
<span class="k">return</span> <span class="n">_svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="n">q</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="n">niter</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="n">M</span><span class="p">)</span>
<span class="k">return</span> <span class="n">_svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="n">q</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="n">niter</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="n">M</span><span class="p">)</span></div>


<span class="k">def</span> <span class="nf">_svd_lowrank</span><span class="p">(</span><span class="n">A</span><span class="p">,</span> <span class="n">q</span><span class="o">=</span><span class="mi">6</span><span class="p">,</span> <span class="n">niter</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">M</span><span class="o">=</span><span class="kc">None</span><span class="p">):</span>
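`torch.svd_lowrank`, whose rendered source changes here, computes a randomized low-rank SVD approximation. A minimal sketch with the default `q=6, niter=2` (the input shape is illustrative):

```python
import torch

torch.manual_seed(0)
A = torch.randn(100, 40)

# Rank-q approximation: A ~= U @ torch.diag(S) @ V.t()
U, S, V = torch.svd_lowrank(A, q=6, niter=2)
print(U.shape, S.shape, V.shape)  # (100, 6), (6,), (40, 6)
```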
10 changes: 5 additions & 5 deletions docs/stable/_modules/torch/autograd/grad_mode.html
@@ -369,7 +369,7 @@ <h1>Source code for torch.autograd.grad_mode</h1><div class="highlight"><pre>
<span class="k">return</span> <span class="n">generator_context</span>


<span class="k">class</span> <span class="nc">no_grad</span><span class="p">(</span><span class="n">_DecoratorContextManager</span><span class="p">):</span>
<div class="viewcode-block" id="no_grad"><a class="viewcode-back" href="../../../autograd.html#torch.autograd.no_grad">[docs]</a><span class="k">class</span> <span class="nc">no_grad</span><span class="p">(</span><span class="n">_DecoratorContextManager</span><span class="p">):</span>
<span class="sa">r</span><span class="sd">&quot;&quot;&quot;Context-manager that disabled gradient calculation.</span>

<span class="sd"> Disabling gradient calculation is useful for inference, when you are sure</span>
@@ -406,10 +406,10 @@ <h1>Source code for torch.autograd.grad_mode</h1><div class="highlight"><pre>
<span class="n">torch</span><span class="o">.</span><span class="n">_C</span><span class="o">.</span><span class="n">set_grad_enabled</span><span class="p">(</span><span class="kc">False</span><span class="p">)</span>

<span class="k">def</span> <span class="fm">__exit__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="o">*</span><span class="n">args</span><span class="p">):</span>
<span class="n">torch</span><span class="o">.</span><span class="n">set_grad_enabled</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">prev</span><span class="p">)</span>
<span class="n">torch</span><span class="o">.</span><span class="n">set_grad_enabled</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">prev</span><span class="p">)</span></div>


<div class="viewcode-block" id="enable_grad"><a class="viewcode-back" href="../../../generated/torch.enable_grad.html#torch.enable_grad">[docs]</a><span class="k">class</span> <span class="nc">enable_grad</span><span class="p">(</span><span class="n">_DecoratorContextManager</span><span class="p">):</span>
<div class="viewcode-block" id="enable_grad"><a class="viewcode-back" href="../../../autograd.html#torch.autograd.enable_grad">[docs]</a><span class="k">class</span> <span class="nc">enable_grad</span><span class="p">(</span><span class="n">_DecoratorContextManager</span><span class="p">):</span>
<span class="sa">r</span><span class="sd">&quot;&quot;&quot;Context-manager that enables gradient calculation.</span>

<span class="sd"> Enables gradient calculation, if it has been disabled via :class:`~no_grad`</span>
@@ -448,7 +448,7 @@ <h1>Source code for torch.autograd.grad_mode</h1><div class="highlight"><pre>
<span class="n">torch</span><span class="o">.</span><span class="n">set_grad_enabled</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">prev</span><span class="p">)</span></div>


<span class="k">class</span> <span class="nc">set_grad_enabled</span><span class="p">(</span><span class="nb">object</span><span class="p">):</span>
<div class="viewcode-block" id="set_grad_enabled"><a class="viewcode-back" href="../../../autograd.html#torch.autograd.set_grad_enabled">[docs]</a><span class="k">class</span> <span class="nc">set_grad_enabled</span><span class="p">(</span><span class="nb">object</span><span class="p">):</span>
<span class="sa">r</span><span class="sd">&quot;&quot;&quot;Context-manager that sets gradient calculation to on or off.</span>

<span class="sd"> ``set_grad_enabled`` will enable or disable grads based on its argument :attr:`mode`.</span>
@@ -493,7 +493,7 @@ <h1>Source code for torch.autograd.grad_mode</h1><div class="highlight"><pre>
<span class="k">pass</span>

<span class="k">def</span> <span class="fm">__exit__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="o">*</span><span class="n">args</span><span class="p">):</span>
<span class="n">torch</span><span class="o">.</span><span class="n">set_grad_enabled</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">prev</span><span class="p">)</span>
<span class="n">torch</span><span class="o">.</span><span class="n">set_grad_enabled</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">prev</span><span class="p">)</span></div>
</pre></div>

</article>
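The three context managers whose viewcode anchors change in this file (`no_grad`, `enable_grad`, `set_grad_enabled`) control whether autograd records operations. A minimal sketch of how they interact:

```python
import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():          # disable gradient tracking
    y = x * 2
print(y.requires_grad)         # False

with torch.no_grad():
    with torch.enable_grad():  # re-enable inside a no_grad block
        z = x * 2
print(z.requires_grad)         # True

torch.set_grad_enabled(False)  # also usable as a plain function
w = x * 2
print(w.requires_grad)         # False
torch.set_grad_enabled(True)
```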