@@ -192,7 +192,7 @@
 <div class="pytorch-left-menu-search">

 <div class="version">
-<a href='https://pytorch.org/docs/versions.html'>master (1.9.0a0+gitc375b9a ) ▼</a>
+<a href='https://pytorch.org/docs/versions.html'>master (1.9.0a0+git6b0e363 ) ▼</a>
 </div>


@@ -764,6 +764,8 @@ <h1>Source code for torch</h1><div class="highlight"><pre>
 <span class="sd"> tensor</span>
 <span class="sd"> * :func:`torch.Tensor.put_` with ``accumulate=True`` when called on a CPU</span>
 <span class="sd"> tensor</span>
+<span class="sd"> * :func:`torch.Tensor.scatter_add_` when ``input`` dimension is one and called</span>
+<span class="sd"> on a CUDA tensor</span>
 <span class="sd"> * :func:`torch.gather` when ``input`` dimension is one and called</span>
 <span class="sd"> on a CUDA tensor that requires grad</span>
 <span class="sd"> * :func:`torch.index_add` when called on a CUDA tensor</span>
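The deterministic 1-D path documented above can be exercised directly. A minimal sketch, assuming a machine with a CUDA device (shapes and values are illustrative, not taken from the commit):

```python
import torch

torch.use_deterministic_algorithms(True)

# 1-D input on CUDA: scatter_add_ now has a deterministic implementation,
# so this call succeeds under deterministic mode.
out = torch.zeros(5, device="cuda")
index = torch.tensor([0, 1, 1, 3], device="cuda")
src = torch.ones(4, device="cuda")
out.scatter_add_(0, index, src)
print(out)  # tensor([1., 2., 0., 1., 0.], device='cuda:0'), identical on every run
```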
@@ -798,15 +800,16 @@ <h1>Source code for torch</h1><div class="highlight"><pre>
 <span class="sd"> * :class:`torch.nn.CTCLoss` when attempting to differentiate a CUDA tensor</span>
 <span class="sd"> * :class:`torch.nn.EmbeddingBag` when attempting to differentiate a CUDA tensor when</span>
 <span class="sd"> ``mode='max'``</span>
-<span class="sd"> * :func:`torch.Tensor.scatter_add_` when called on a CUDA tensor</span>
+<span class="sd"> * :func:`torch.Tensor.scatter_add_` when ``input`` dimension is larger than one</span>
+<span class="sd"> and called on a CUDA tensor</span>
+<span class="sd"> * :func:`torch.gather` when ``input`` dimension is larger than one</span>
+<span class="sd"> and called on a CUDA tensor that requires grad</span>
 <span class="sd"> * :func:`torch.Tensor.put_` when ``accumulate=False``</span>
 <span class="sd"> * :func:`torch.Tensor.put_` when ``accumulate=True`` and called on a CUDA tensor</span>
 <span class="sd"> * :func:`torch.histc` when called on a CUDA tensor</span>
 <span class="sd"> * :func:`torch.bincount` when called on a CUDA tensor</span>
 <span class="sd"> * :func:`torch.kthvalue` when called on a CUDA tensor</span>
 <span class="sd"> * :func:`torch.median` with indices output when called on a CUDA tensor</span>
-<span class="sd"> * :func:`torch.gather` when ``input`` dimension is larger than one</span>
-<span class="sd"> and called on a CUDA tensor that requires grad</span>
 <span class="sd"> * :func:`torch.nn.functional.grid_sample` when attempting to differentiate a CUDA tensor</span>

 <span class="sd"> A handful of CUDA operations are nondeterministic if the CUDA version is</span>
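For an ``input`` with more than one dimension there is still no deterministic CUDA kernel, so the same call raises instead of running nondeterministically. A sketch under the same assumptions:

```python
import torch

torch.use_deterministic_algorithms(True)

# input dimension larger than one: no deterministic CUDA implementation
# exists, so PyTorch raises rather than silently giving varying results.
out = torch.zeros(3, 4, device="cuda")
index = torch.zeros(3, 4, dtype=torch.int64, device="cuda")
src = torch.ones(3, 4, device="cuda")
try:
    out.scatter_add_(0, index, src)
except RuntimeError as err:
    print(err)  # explains that the op has no deterministic implementation
```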
@@ -1058,6 +1061,7 @@ <h1>Source code for torch</h1><div class="highlight"><pre>
 <span class="c1"># side effect of adding to the imported module's members for other users.</span>

 <span class="kn">from</span> <span class="nn">torch</span> <span class="kn">import</span> <span class="n">cuda</span> <span class="k">as</span> <span class="n">cuda</span>
+<span class="kn">from</span> <span class="nn">torch</span> <span class="kn">import</span> <span class="n">cpu</span> <span class="k">as</span> <span class="n">cpu</span>
 <span class="kn">from</span> <span class="nn">torch</span> <span class="kn">import</span> <span class="n">autograd</span> <span class="k">as</span> <span class="n">autograd</span>
 <span class="kn">from</span> <span class="nn">torch.autograd</span> <span class="kn">import</span> <span class="p">(</span>
 <span class="n">no_grad</span> <span class="k">as</span> <span class="n">no_grad</span><span class="p">,</span>
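A note on the redundant-looking alias in the last hunk: `from torch import cpu as cpu` follows the explicit re-export convention, where an `import X as X` alias signals to type checkers such as mypy that the name is intentionally public, matching the neighboring `cuda` and `autograd` imports. The visible effect is that the submodule becomes a first-class attribute of `torch`:

```python
import torch

# After this change torch.cpu is an explicitly re-exported public
# submodule, on par with torch.cuda and torch.autograd.
print(torch.cpu)   # <module 'torch.cpu' ...>
print(torch.cuda)  # <module 'torch.cuda' ...>
```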