
Commit af0aa36

Merge branch 'site' into site
2 parents: 1c48f80 + f45f98d

File tree

2,518 files changed: +8,565 −5,755 lines


docs/2.2/amp.html (+2 −2)

@@ -630,7 +630,7 @@
 <p><code>autocast(enabled=False)</code> subregions can be nested in autocast-enabled regions.
 Locally disabling autocast can be useful, for example, if you want to force a subregion
 to run in a particular <code>dtype</code>. Disabling autocast gives you explicit control over
-the execution type. In the subregion, inputs from the surrounding region
+the execution type. In the subregion, inputs from the surrounding region
 should be cast to <code>dtype</code> before use:</p>
 <pre># Creates some tensors in default dtype (here assumed to be float32)
 a_float32 = torch.rand((8, 8), device="cuda")

@@ -1734,4 +1734,4 @@
 })
 </script>
 </body>
-</html>
+</html>
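
For context, the pattern the amended sentence describes looks like this (a
minimal sketch adapted from the surrounding torch.amp docs example; assumes a
CUDA device is available):

    import torch

    # Tensors created in the default dtype (here assumed to be float32)
    a_float32 = torch.rand((8, 8), device="cuda")
    b_float32 = torch.rand((8, 8), device="cuda")

    with torch.autocast(device_type="cuda", dtype=torch.float16):
        e_float16 = torch.mm(a_float32, b_float32)
        with torch.autocast(device_type="cuda", enabled=False):
            # Inputs from the surrounding autocast region are cast to the
            # execution dtype (float32 here) before use, as the docs advise.
            f_float32 = torch.mm(a_float32, e_float16.float())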

docs/2.3/_images/RReLU.png (158 Bytes)

docs/2.3/_modules/torch/distributed/device_mesh.html (+35 −67) — large diff not rendered by default

docs/2.3/_modules/torch/fx/node.html (+3)

@@ -508,12 +508,15 @@ Source code for torch.fx.node
     torch.amp._exit_autocast,
 }

+# TODO: Either refactor this into 2 functions 1 dce for functional graphs and 1 dce for all graphs,
+# or add logic to correctly mark all inplace ops as side effectful.
 _side_effectful_functions: Set[Callable] = {
     torch._assert,
     torch._assert_async,
     _ops.aten._assert_async.msg,
     _ops.aten._assert_scalar.default,
     _ops.aten.copy_.default,
+    _ops.aten.index_put_.default,
     _ops.aten.sym_constrain_range.default,
     _ops.aten.sym_constrain_range_for_size.default,
     _ops.profiler._record_function_enter,

docs/2.3/_modules/torch/nested.html (+2 −5)

@@ -639,11 +639,8 @@ Source code for torch.nested
                            requires_grad=requires_grad,
                            pin_memory=pin_memory)
     elif layout == torch.jagged:
-        # Need to:
-        # * Detach tensors to discard autograd history
-        # * Wrap lists of scalars as tensors
-        list_of_tensors = [t.detach() if isinstance(t, Tensor) else torch.as_tensor(t)
-                           for t in tensor_list]
+        # Need to wrap lists of scalars as tensors
+        list_of_tensors = [t if isinstance(t, Tensor) else torch.as_tensor(t) for t in tensor_list]

         from torch.nested._internal.nested_tensor import jagged_from_list
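
A minimal usage sketch of the jagged-layout constructor path this hunk touches
(hypothetical shapes; assumes torch 2.3):

    import torch

    # Components with a varying first dimension -> jagged layout
    nt = torch.nested.nested_tensor(
        [torch.randn(3, 8), torch.randn(5, 8)],
        layout=torch.jagged,
    )
    print(nt.is_nested, nt.layout)  # True torch.jagged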

docs/2.3/_modules/torch/nn/modules/module.html (+7 −2)

@@ -482,6 +482,7 @@ Source code for torch.nn.modules.module
 from typing import Union, Tuple, Any, Callable, Iterator, Set, Optional, overload, TypeVar, Mapping, Dict, List
 from typing_extensions import Self
 from ...utils.hooks import RemovableHandle
+from torch.utils._python_dispatch import is_traceable_wrapper_subclass

 __all__ = ['register_module_forward_pre_hook', 'register_module_forward_hook',
            'register_module_full_backward_pre_hook', 'register_module_backward_hook',

@@ -1271,8 +1272,12 @@
             with torch.no_grad():
                 param_applied = fn(param)
             p_should_use_set_data = compute_should_use_set_data(param, param_applied)
+
+            # subclasses may have multiple child tensors so we need to use swap_tensors
+            p_should_use_swap_tensors = should_use_swap_tensors or is_traceable_wrapper_subclass(param_applied)
+
             param_grad = param.grad
-            if should_use_swap_tensors:
+            if p_should_use_swap_tensors:
                 try:
                     if param_grad is not None:
                         # Accessing param.grad makes its at::Tensor's use_count 2, which will prevent swapping.

@@ -1298,7 +1303,7 @@
             with torch.no_grad():
                 grad_applied = fn(param_grad)
             g_should_use_set_data = compute_should_use_set_data(param_grad, grad_applied)
-            if should_use_swap_tensors:
+            if p_should_use_swap_tensors:
                 grad_applied.requires_grad_(param_grad.requires_grad)
                 try:
                     torch.utils.swap_tensors(param_grad, grad_applied)

docs/2.3/_modules/torch/nn/modules/rnn.html (+1)

@@ -682,6 +682,7 @@ Source code for torch.nn.modules.rnn
                 self.batch_first, bool(self.bidirectional))

     def _apply(self, fn, recurse=True):
+        self._flat_weight_refs = []
         ret = super()._apply(fn, recurse)

         # Resets _flat_weights

docs/2.3/_sources/generated/exportdb/index.rst.txt (+11 −11)

@@ -19,8 +19,8 @@ support in export please create an issue in the pytorch/pytorch repo wih a modul
    :caption: Tags

    torch.escape-hatch
-   torch.cond
    torch.dynamic-shape
+   torch.cond
    python.closure
    torch.dynamic-value
    python.data-structure

@@ -203,7 +203,7 @@ cond_branch_class_method

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -284,7 +284,7 @@ cond_branch_nested_function

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -363,7 +363,7 @@ cond_branch_nonlocal_variables

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -528,7 +528,7 @@ cond_operands

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -602,7 +602,7 @@ cond_predicate

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -666,7 +666,7 @@ constrain_as_size_example

 .. note::

-    Tags: :doc:`torch.dynamic-value <torch.dynamic-value>`, :doc:`torch.escape-hatch <torch.escape-hatch>`
+    Tags: :doc:`torch.escape-hatch <torch.escape-hatch>`, :doc:`torch.dynamic-value <torch.dynamic-value>`

 Support Level: SUPPORTED

@@ -726,7 +726,7 @@ constrain_as_value_example

 .. note::

-    Tags: :doc:`torch.dynamic-value <torch.dynamic-value>`, :doc:`torch.escape-hatch <torch.escape-hatch>`
+    Tags: :doc:`torch.escape-hatch <torch.escape-hatch>`, :doc:`torch.dynamic-value <torch.dynamic-value>`

 Support Level: SUPPORTED

@@ -1240,7 +1240,7 @@ list_contains

 .. note::

-    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.data-structure <python.data-structure>`, :doc:`python.assert <python.assert>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`

 Support Level: SUPPORTED

@@ -1286,7 +1286,7 @@ list_unpack

 .. note::

-    Tags: :doc:`python.data-structure <python.data-structure>`, :doc:`python.control-flow <python.control-flow>`
+    Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`python.data-structure <python.data-structure>`

 Support Level: SUPPORTED

@@ -2005,6 +2005,6 @@ Result:

 .. code-block::

-    Unsupported: torch.* op returned non-Tensor int call_function <function sym_min at 0x7f268479fd30>
+    Unsupported: torch.* op returned non-Tensor int call_function <function sym_min at 0x7f4d9cf5cd30>

docs/2.3/_sources/generated/exportdb/python.assert.rst.txt (+1 −1)

@@ -51,7 +51,7 @@ list_contains

 .. note::

-    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.data-structure <python.data-structure>`, :doc:`python.assert <python.assert>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`

 Support Level: SUPPORTED

docs/2.3/_sources/generated/exportdb/python.control-flow.rst.txt (+1 −1)

@@ -51,7 +51,7 @@ list_unpack

 .. note::

-    Tags: :doc:`python.data-structure <python.data-structure>`, :doc:`python.control-flow <python.control-flow>`
+    Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`python.data-structure <python.data-structure>`

 Support Level: SUPPORTED
5757

docs/2.3/_sources/generated/exportdb/python.data-structure.rst.txt (+2 −2)

@@ -116,7 +116,7 @@ list_contains

 .. note::

-    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.data-structure <python.data-structure>`, :doc:`python.assert <python.assert>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`

 Support Level: SUPPORTED

@@ -162,7 +162,7 @@ list_unpack

 .. note::

-    Tags: :doc:`python.data-structure <python.data-structure>`, :doc:`python.control-flow <python.control-flow>`
+    Tags: :doc:`python.control-flow <python.control-flow>`, :doc:`python.data-structure <python.data-structure>`

 Support Level: SUPPORTED

docs/2.3/_sources/generated/exportdb/torch.cond.rst.txt (+5 −5)

@@ -5,7 +5,7 @@ cond_branch_class_method

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -86,7 +86,7 @@ cond_branch_nested_function

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -165,7 +165,7 @@ cond_branch_nonlocal_variables

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -330,7 +330,7 @@ cond_operands

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -404,7 +404,7 @@ cond_predicate

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

docs/2.3/_sources/generated/exportdb/torch.dynamic-shape.rst.txt (+6 −6)

@@ -5,7 +5,7 @@ cond_branch_class_method

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -86,7 +86,7 @@ cond_branch_nested_function

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -165,7 +165,7 @@ cond_branch_nonlocal_variables

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -269,7 +269,7 @@ cond_operands

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -343,7 +343,7 @@ cond_predicate

 .. note::

-    Tags: :doc:`torch.cond <torch.cond>`, :doc:`torch.dynamic-shape <torch.dynamic-shape>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`torch.cond <torch.cond>`

 Support Level: SUPPORTED

@@ -686,7 +686,7 @@ list_contains

 .. note::

-    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.data-structure <python.data-structure>`, :doc:`python.assert <python.assert>`
+    Tags: :doc:`torch.dynamic-shape <torch.dynamic-shape>`, :doc:`python.assert <python.assert>`, :doc:`python.data-structure <python.data-structure>`

 Support Level: SUPPORTED
