
Conversation

renovate-bot
Contributor

@renovate-bot renovate-bot commented Jun 4, 2025

This PR contains the following updates:

| Package | Change |
| --- | --- |
| torch (source) | `==2.4.0` -> `==2.6.0` |
| torch (source) | `==2.2.2` -> `==2.8.0` |

GitHub Vulnerability Alerts

CVE-2025-32434

Description

I found a Remote Command Execution (RCE) vulnerability in PyTorch. Even when a model is loaded with torch.load and weights_only=True, RCE can still be achieved.

Background knowledge

https://github.com/pytorch/pytorch/security
As you can see, the official PyTorch documentation considers using torch.load() with weights_only=True to be safe.
Since everyone knows that weights_only=False is unsafe, users rely on weights_only=True to mitigate the security issue.
However, I have demonstrated that RCE can still be achieved even when weights_only=True is used.

Credit

This vulnerability was found by Ji'an Zhou.

CVE-2025-2953

A vulnerability classified as problematic has been found in PyTorch 2.6.0+cu124. Affected by this issue is the function torch.mkldnn_max_pool2d. The manipulation leads to denial of service. The attack must be carried out locally. The exploit has been disclosed to the public and may be used.

CVE-2025-3730

A vulnerability classified as problematic was found in PyTorch 2.6.0. Affected is the function torch.nn.functional.ctc_loss in the file aten/src/ATen/native/LossCTC.cpp. The manipulation leads to denial of service. The attack must be carried out locally. The exploit has been disclosed to the public and may be used. The patch is commit 46fc5d8e360127361211cb237d5f9eef0223e567. It is recommended to apply the patch to fix this issue.


Release Notes

pytorch/pytorch (torch)

v2.6.0: PyTorch 2.6.0 Release

Compare Source

  • Highlights
  • Tracked Regressions
  • Backwards Incompatible Change
  • Deprecations
  • New Features
  • Improvements
  • Bug fixes
  • Performance
  • Documentation
  • Developers

Highlights

We are excited to announce the release of PyTorch® 2.6 (release notes)! This release features multiple improvements for PT2: torch.compile can now be used with Python 3.13; new performance-related knob torch.compiler.set_stance; several AOTInductor enhancements. Besides the PT2 improvements, another highlight is FP16 support on X86 CPUs.

NOTE: Starting with this release we are not going to publish on Conda, please see [Announcement] Deprecating PyTorch’s official Anaconda channel for the details.

For this release the experimental Linux binaries shipped with CUDA 12.6.3 (as well as Linux Aarch64, Linux ROCm 6.2.4, and Linux XPU binaries) are built with CXX11_ABI=1 and are using the Manylinux 2.28 build platform. If you build PyTorch extensions with custom C++ or CUDA code, please update these builds to use CXX11_ABI=1 as well and report any issues you are seeing. For the next PyTorch 2.7 release we plan to switch all Linux builds to Manylinux 2.28 and CXX11_ABI=1; please see [RFC] PyTorch next wheel build platform: manylinux-2.28 for the details and discussion.

Also in this release as an important security improvement measure we have changed the default value for weights_only parameter of torch.load. This is a backward compatibility-breaking change, please see this forum post for more details.

This release is composed of 3892 commits from 520 contributors since PyTorch 2.5. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve PyTorch. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page.

| Beta | Prototype |
| --- | --- |
| torch.compiler.set_stance | Improved PyTorch user experience on Intel GPUs |
| torch.library.triton_op | FlexAttention support on X86 CPU for LLMs |
| torch.compile support for Python 3.13 | Dim.AUTO |
| New packaging APIs for AOTInductor | CUTLASS and CK GEMM/CONV Backends for AOTInductor |
| AOTInductor: minifier | |
| AOTInductor: ABI-compatible mode code generation | |
| FP16 support for X86 CPUs | |

*To see a full list of public feature submissions click here.

BETA FEATURES
[Beta] torch.compiler.set_stance

This feature enables the user to specify different behaviors (“stances”) that torch.compile can take between different invocations of compiled functions. One of the stances, for example, is “eager_on_recompile”, which instructs PyTorch to run code eagerly when a recompile is necessary, reusing cached compiled code when possible.

For more information please refer to the set_stance documentation and the Dynamic Compilation Control with torch.compiler.set_stance tutorial.
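
For illustration, a minimal sketch of switching stances around compiled calls (the toy function and shapes are placeholders, not from the release notes):

```py
import torch

@torch.compile
def step(x):
    return x * 2 + 1

step(torch.randn(4))  # first call compiles as usual

# Fall back to eager execution instead of recompiling when a new input
# shape would otherwise trigger a recompile.
torch.compiler.set_stance("eager_on_recompile")
step(torch.randn(8))  # runs eagerly rather than recompiling

torch.compiler.set_stance("default")  # restore normal behavior
```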

[Beta] torch.library.triton_op

torch.library.triton_op offers a standard way of creating custom operators that are backed by user-defined triton kernels.

When users turn user-defined triton kernels into custom operators, torch.library.triton_op allows torch.compile to peek into the implementation, enabling torch.compile to optimize the triton kernel inside it.

For more information please refer to the triton_op documentation and the Using User-Defined Triton Kernels with torch.compile tutorial.
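
A rough sketch of the pattern from the triton_op documentation (requires Triton and a GPU; the `mylib::add` name and kernel are illustrative):

```py
import torch
import triton
import triton.language as tl
from torch.library import triton_op, wrap_triton

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

# Registering the wrapper as a custom op lets torch.compile see the kernel.
@triton_op("mylib::add", mutates_args={})
def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    wrap_triton(add_kernel)[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

compiled_add = torch.compile(lambda a, b: add(a, b))
```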

[Beta] torch.compile support for Python 3.13

torch.compile previously only supported Python up to version 3.12. Users can now optimize models with torch.compile in Python 3.13.

[Beta] New packaging APIs for AOTInductor

A new package format, “PT2 archive”, has been introduced. This essentially contains a zipfile of all the files that need to be used by AOTInductor, and allows users to send everything needed to other environments. There is also functionality to package multiple models into one artifact, and to store additional metadata inside of the package.

For more details please see the updated torch.export AOTInductor Tutorial for Python runtime.
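
A minimal sketch of the packaging flow, assuming the torch._inductor.aoti_compile_and_package / aoti_load_package APIs referenced in the Export section below (the module and file name are illustrative):

```py
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

ep = torch.export.export(M(), (torch.randn(8),))

# Compile and bundle everything AOTInductor needs into a single PT2 archive.
pkg = torch._inductor.aoti_compile_and_package(ep, package_path="m.pt2")

# Later (possibly in another environment), load and run the packaged model.
runner = torch._inductor.aoti_load_package(pkg)
out = runner(torch.randn(8))
```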

[Beta] AOTInductor: minifier

If a user encounters an error while using AOTInductor APIs, AOTInductor Minifier allows creation of a minimal nn.Module that reproduces the error.

For more information please see the AOTInductor Minifier documentation.

[Beta] AOTInductor: ABI-compatible mode code generation

AOTInductor-generated model code depends on PyTorch C++ libraries. Since PyTorch evolves quickly, it is important that models previously compiled with AOTInductor continue to run on newer PyTorch versions, i.e. that AOTInductor is backward compatible.

In order to guarantee application binary interface (ABI) backward compatibility, we have carefully defined a set of stable C interfaces in libtorch and ensured that AOTInductor generates code that refers only to this specific set of APIs and nothing else in libtorch. We will keep this set of C APIs stable across PyTorch versions and thus provide backward compatibility guarantees for AOTInductor-compiled models.

[Beta] FP16 support for X86 CPUs (both eager and Inductor modes)

Float16 datatype is commonly used for reduced memory usage and faster computation in AI inference and training. CPUs such as the recently launched Intel® Xeon® 6 with P-Cores support the Float16 datatype with the native AMX accelerator. Float16 support on X86 CPUs was introduced in PyTorch 2.5 as a prototype feature, and it has now been further improved for both eager mode and torch.compile + Inductor mode, making it a Beta-level feature with both functionality and performance verified on a broad range of workloads.
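
A minimal sketch of FP16 inference on CPU, in eager mode and under torch.compile (the toy model is illustrative; competitive performance assumes hardware with native FP16/AMX support):

```py
import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
x = torch.randn(8, 64)

# Eager mode with CPU autocast in float16.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.float16):
    y_eager = model(x)

# torch.compile + Inductor with the same autocast context.
compiled = torch.compile(model)
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.float16):
    y_compiled = compiled(x)
```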

PROTOTYPE FEATURES

[Prototype] Improved PyTorch user experience on Intel GPUs

PyTorch user experience on Intel GPUs is further improved with simplified installation steps, Windows release binary distribution, and expanded coverage of supported GPU models, including the latest Intel® Arc™ B-Series discrete graphics. Application developers and researchers seeking to fine-tune, run inference, and develop with PyTorch models on Intel® Core™ Ultra AI PCs and Intel® Arc™ discrete graphics can now install PyTorch directly with binary releases for Windows, Linux, and Windows Subsystem for Linux 2.

  • Simplified Intel GPU software stack setup to enable one-click installation of the torch-xpu PIP wheels to run deep learning workloads in an out of the box fashion, eliminating the complexity of installing and activating Intel GPU development software bundles.
  • Windows binary releases for torch core, torchvision and torchaudio have been made available for Intel GPUs, and the supported GPU models have been expanded from Intel® Core™ Ultra Processors with Intel® Arc™ Graphics, Intel® Core™ Ultra Series 2 with Intel® Arc™ Graphics and Intel® Arc™ A-Series Graphics to the latest GPU hardware Intel® Arc™ B-Series graphics.
  • Further enhanced coverage of Aten operators on Intel GPUs with SYCL* kernels for smooth eager mode execution, as well as bug fixes and performance optimizations for torch.compile on Intel GPUs.

For more information regarding Intel GPU support, please refer to Getting Started Guide.
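
A minimal smoke test of the XPU backend, assuming the binary wheels described above are installed on a machine with a supported Intel GPU:

```py
import torch

if torch.xpu.is_available():
    x = torch.randn(1024, 1024, device="xpu")
    y = (x @ x).relu().sum()
    torch.xpu.synchronize()
    print(y.item())
else:
    print("No XPU device detected")
```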

[Prototype] FlexAttention support on X86 CPU for LLMs

FlexAttention was initially introduced in PyTorch 2.5 to provide optimized implementations for attention variants with a flexible API. In PyTorch 2.6, X86 CPU support for FlexAttention was added through the TorchInductor CPP backend. This new feature leverages and extends the current CPP template abilities to support a broad range of attention variants (e.g. PagedAttention, which is critical for LLM inference) on top of the existing FlexAttention API, and brings optimized performance on x86 CPUs. With this feature, it's easy to use the FlexAttention API to compose attention solutions on CPU platforms and achieve good performance.
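
For illustration, a small causal-attention sketch using the FlexAttention API under torch.compile (shapes are arbitrary; the score_mod pattern follows the FlexAttention docs):

```py
import torch
from torch.nn.attention.flex_attention import flex_attention

def causal(score, b, h, q_idx, kv_idx):
    # Mask out positions where the key index is ahead of the query index.
    return torch.where(q_idx >= kv_idx, score, float("-inf"))

q = k = v = torch.randn(1, 2, 128, 64)  # (batch, heads, seq_len, head_dim)
compiled_flex = torch.compile(flex_attention)
out = compiled_flex(q, k, v, score_mod=causal)
```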

[Prototype] Dim.AUTO

Dim.AUTO allows usage of automatic dynamic shapes with torch.export. Users can export with Dim.AUTO and “discover” the dynamic behavior of their models, with min/max ranges, relations between dimensions, and static/dynamic behavior being automatically inferred.

This is a more user-friendly experience compared to the existing named-Dims approach for specifying dynamic shapes, which requires the user to fully understand the dynamic behavior of their models at export time. Dim.AUTO allows users to write generic code that isn’t model-dependent, increasing ease-of-use for exporting with dynamic shapes.

Please see torch.export tutorial for more information.
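
A minimal sketch of exporting with automatic dynamic shapes (the module and shapes are illustrative):

```py
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x):
        return x.sum(dim=-1)

# Let export infer which dimensions are dynamic (and their ranges) instead of
# naming each Dim explicitly.
ep = export(M(), (torch.randn(4, 8),),
            dynamic_shapes={"x": (Dim.AUTO, Dim.AUTO)})
print(ep)
```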

[Prototype] CUTLASS and CK GEMM/CONV Backends for AOTInductor

The CUTLASS and CK backends add kernel choices for GEMM autotuning in Inductor. They are now also available in AOTInductor, which can run in C++ runtime environments. Major improvements to the two backends include faster compile times, achieved by eliminating redundant kernel binary compilations, and dynamic shapes support.

Tracked Regressions

torch.device(0) makes CUDA init fail in subprocess

There is a known regression (#144152), present since PyTorch 2.5.0, where torch.device(0) makes CUDA initialization fail in a subprocess.
An attempt was made to fix the regression, but it caused complications and was reverted.

An easy workaround is to use torch.device('cuda') or torch.device('cuda:0') instead.
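
In code, the workaround simply replaces the integer index with an explicit device string (allocating on it requires a CUDA build; the tensor here is illustrative):

```py
import torch

# Avoid torch.device(0): it can break CUDA init in a subprocess (see #144152).
device = torch.device("cuda:0")
x = torch.zeros(4, device=device) if torch.cuda.is_available() else torch.zeros(4)
```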

Regression in the compilation of the torch.all operation with out= variant

A regression (#145220) was reported for PyTorch 2.6.0 in the compilation of the out= variant of the torch.all operator. This should be a rare use case; as a workaround, rewrite the model code to avoid the out= variant.
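
A small sketch of the affected pattern and its workaround (tensors are illustrative):

```py
import torch

x = torch.randn(8) > 0

def affected(x):
    out = torch.empty((), dtype=torch.bool)
    torch.all(x, out=out)   # out= variant: hits the 2.6.0 compile regression
    return out

def workaround(x):
    return torch.all(x)     # plain variant compiles fine

compiled = torch.compile(workaround)
print(compiled(x))
```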

Backwards Incompatible changes

Flip default torch.load to weights_only (#​137602, #​138225, #​138866, #​139221, #​140304, #​138936, #​139541, #​140738, #​142153, #​139433)

We are closing the loop on the deprecation that started in 2.4 and flipped torch.load to use weights_only=True by default.

When this flag is set, instead of using the usual pickle module, torch.load uses a custom unpickler constrained to call only functions and classes needed for loading state dictionaries and basic types.

While this change is disruptive for users serializing more than basic types, we expect that the increased security by default is a worthwhile tradeoff. Note that, even though this default is safer, we still recommend loading only trusted checkpoints, and relying on more constrained (and even safer) formats like safetensors for untrusted checkpoints.

For full details, please refer to this dev-discuss post.
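
In practice the change looks like this (file names are illustrative; add_safe_globals is the existing allowlisting hook in torch.serialization):

```py
import torch

# Default behavior in 2.6: only tensors, state_dicts and basic types load.
state = torch.load("checkpoint.pt")            # weights_only=True implied

# For a checkpoint you fully trust that pickles arbitrary Python objects,
# opt out explicitly:
obj = torch.load("trusted.pt", weights_only=False)

# Or allowlist the specific globals a trusted checkpoint needs:
# torch.serialization.add_safe_globals([MyConfig])
```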

Anaconda deprecation in CD. Remove anaconda dependency in Magma builds (#​141024) (#​141281) (#​140157) (#​139888) (#​140141) (#​139924) (#​140158) (#​142019) (#​142276) (#​142277) (#​142282)

PyTorch will stop publishing Anaconda packages that depend on Anaconda’s default packages. We are directing users to utilize our official wheel packages from download.pytorch.org or PyPI, or switch to utilizing conda-forge (pytorch) packages if they would like to continue to use conda. For more details refer to this announcement

Added Manylinux 2.28 prototype support and CXX11_ABI=1 for following binaries: Linux CUDA 12.6, Linux aarch64 CPU, Linux aarch64 GPU CUDA 12.6, ROCm 6.2.4, Linux XPU (#​139894) (#​139631) (#​139636) (#​140743) (#​137696) (#​141565) (#​140681) (#​141609) (#​141704) (#​141423) (#​141609)

The PyTorch binaries shipped with CUDA 12.6.3 are built with CXX11_ABI=1 and are using the Manylinux 2.28 build platform. If you are building PyTorch extensions with custom C++ or CUDA code, please update these builds to use CXX11_ABI=1 as well and report any issues you are seeing. For the next PyTorch 2.7 release we plan to switch all Linux builds to Manylinux 2.28 and CXX11_ABI=1; please see [RFC] PyTorch next wheel build platform: manylinux-2.28 for the details and discussion.

ONNX
torch.onnx.export(..., dynamo=True) now creates ONNX models using IR version 10 (#​141207)

ONNX ir_version=10 is used to add support for UINT4, INT4 data types and include metadata in GraphProto and NodeProto. Make sure model consumers are able to accept IR version 10 ONNX models. You may read more about IRv10 on https://github.com/onnx/onnx/releases/tag/v1.16.0.

Several obsolete APIs are removed (#​133825, #​136279, #​137789, #​137790)

Some logging APIs, torch.onnx.ExportTypes, and torch.onnx.export_to_pretty_string have been removed. Users should remove usage of these APIs.

torch.onnx.ONNXProgram has been reimplemented and improved (#​136281)

All ONNX "dynamo" APIs will return the new ONNXProgram class. Some notable methods available are save(), optimize(). It can also be directly applied on PyTorch tensors to leverage ONNX Runtime to verify the ONNX graph. Some legacy methods are no longer available.

Deprecations

Releng
Removed CUDA 12.1 support in CI/CD (#​141271) (#​142177)

The full release compatibility matrix can be found in release.md.

Deprecated c10d::onCompletionHook (#​142390)
  • In PT 2.5 and before, users could do:

        pg = dist.init_process_group()

        def hook(work_info: torch._C._distributed_c10d.WorkInfo):
            # do something
            ...

        pg._register_on_completion_hook(hook)

        # The hook is triggered after the collective completes.
        pg.broadcast([tensor]).wait()

  • Starting from PT 2.6, this code produces the warning “ProcessGroupNCCL OnCompletion hook will be deprecated in favor of Flight Recorder”.

##### Inductor
##### Deprecate TORCHINDUCTOR_STACK_ALLOCATION ([#​139147](https://redirect.github.com/pytorch/pytorch/pull/139147))
Instead of setting TORCHINDUCTOR_STACK_ALLOCATION, update your torch.compile call: `torch.compile(options={"aot_inductor.allow_stack_allocation": True})(foo)`.

#### **New features**
##### Python Frontend

* Introduce a device-agnostic runtime API design ([#​132204](https://redirect.github.com/pytorch/pytorch/pull/132204))
* Add validation for ambiguous behavior in `Tensor.dim_order()` ([#​141632](https://redirect.github.com/pytorch/pytorch/pull/141632))
* Add type check for `ord` argument for `torch.linalg.{vector,matrix}_norm()` ([#​137463](https://redirect.github.com/pytorch/pytorch/pull/137463))
* FlexAttention support for NJT ([#​136792](https://redirect.github.com/pytorch/pytorch/pull/136792), [#​140723](https://redirect.github.com/pytorch/pytorch/pull/140723))

##### Miscellaneous

* Enable forward AD in `functional.affine_grid` ([#​135494](https://redirect.github.com/pytorch/pytorch/pull/135494))
* Added SVE support for ARM CPUs ([#​119571](https://redirect.github.com/pytorch/pytorch/pull/119571))
* User buffer registration via MemPool API ([#​133603](https://redirect.github.com/pytorch/pytorch/pull/133603))
* Add in_order flag for data loader, allowing out-of-order dataloading ([#​141833](https://redirect.github.com/pytorch/pytorch/pull/141833))
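
For example, a sketch of the new DataLoader flag from the last item above (the dataset and sizes are illustrative):

```py
import torch
from torch.utils.data import DataLoader

dataset = list(range(1000))

# in_order=False lets worker processes hand back batches as soon as they are
# ready instead of strictly preserving input order.
loader = DataLoader(dataset, batch_size=32, num_workers=2, in_order=False)

for batch in loader:
    pass
```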

##### Optim

* Add Support for Tracking Parameter Names (named_parameters) in Optimizer State Dict ([#​134107](https://redirect.github.com/pytorch/pytorch/pull/134107))
* Support tensor betas in Adam and AdamW ([#​134171](https://redirect.github.com/pytorch/pytorch/pull/134171))

##### Distributed

* c10d
  * Made ProcessGroup initialization non-blocking when `device_id` is given [#​138527](https://redirect.github.com/pytorch/pytorch/pull/138527))
  * Allowed sub group to be eagerly inited even if default one is not ([#​138665](https://redirect.github.com/pytorch/pytorch/pull/138665))
  * Supported `group_dst`/`group_src` in c10d collectives ([#​140460](https://redirect.github.com/pytorch/pytorch/pull/140460), [#​139677](https://redirect.github.com/pytorch/pytorch/pull/139677), [#​140827](https://redirect.github.com/pytorch/pytorch/pull/140827), [#​140843](https://redirect.github.com/pytorch/pytorch/pull/140843), [#​140847](https://redirect.github.com/pytorch/pytorch/pull/140847))
  * Enabled Flight Recorder buffer for all users ([#​142260](https://redirect.github.com/pytorch/pytorch/pull/142260))
  * Registered Intel distributed Backend (`XCCL`) in PyTorch distributed package ([#​141856](https://redirect.github.com/pytorch/pytorch/pull/141856))
* Pipeline
  * Performed shape inference at runtime using user-provided real tensors ([#​136912](https://redirect.github.com/pytorch/pytorch/pull/136912))
  * Added ZBV schedule ([#​142084](https://redirect.github.com/pytorch/pytorch/pull/142084))
* FSDP2
  * Moved FSDP2 to public ([#​141868](https://redirect.github.com/pytorch/pytorch/pull/141868))

##### Dynamo

* Add `torch.compiler.set_stance` to dynamically change `torch.compile` behavior without needing to re-apply `torch.compile`. ([#​137504](https://redirect.github.com/pytorch/pytorch/pull/137504))
* Profile guided optimization for `automatic_dynamic` - automatically save and load automatic dynamic decisions to reuse on future runs ([#​139001](https://redirect.github.com/pytorch/pytorch/pull/139001))
* `skip_guard_eval_unsafe` compiler stance option for power users - skip guard checks when it is known to be safe to do so ([#​140251](https://redirect.github.com/pytorch/pytorch/pull/140251))

##### Releng

* Added support for CUDA 12.6 in CI/CD ([#​142335](https://redirect.github.com/pytorch/pytorch/pull/142335)) ([#​136321](https://redirect.github.com/pytorch/pytorch/pull/136321)) ([#​138417](https://redirect.github.com/pytorch/pytorch/pull/138417)) ([#​138563](https://redirect.github.com/pytorch/pytorch/pull/138563)) ([#​138562](https://redirect.github.com/pytorch/pytorch/pull/138562))  ([#​139909](https://redirect.github.com/pytorch/pytorch/pull/139909)) ([#​138899](https://redirect.github.com/pytorch/pytorch/pull/138899)) ([#​141365](https://redirect.github.com/pytorch/pytorch/pull/141365)) ([#​141433](https://redirect.github.com/pytorch/pytorch/pull/141433))  ([#​141805](https://redirect.github.com/pytorch/pytorch/pull/141805)) ([#​141976](https://redirect.github.com/pytorch/pytorch/pull/141976)) ([#​139988](https://redirect.github.com/pytorch/pytorch/pull/139988))  ([#​140143](https://redirect.github.com/pytorch/pytorch/pull/140143)) ([#​141377](https://redirect.github.com/pytorch/pytorch/pull/141377)) ([#​142064](https://redirect.github.com/pytorch/pytorch/pull/142064))
* Intel GPU enablement in CI/CD. Upgrade XPU support packages to Intel® Deep Learning Essentials 2025.0. Add prototype Linux and Windows binary builds with XPU runtime pypi packages dependencies. ([#​138189](https://redirect.github.com/pytorch/pytorch/pull/138189)) ([#​139050](https://redirect.github.com/pytorch/pytorch/pull/139050)) ([#​139604](https://redirect.github.com/pytorch/pytorch/pull/139604)) ([#​139775](https://redirect.github.com/pytorch/pytorch/pull/139775)) ([#​140373](https://redirect.github.com/pytorch/pytorch/pull/140373)) ([#​141546](https://redirect.github.com/pytorch/pytorch/pull/141546)) ([#​141775](https://redirect.github.com/pytorch/pytorch/pull/141775)) ([#​141135](https://redirect.github.com/pytorch/pytorch/pull/141135)) ([#​142210](https://redirect.github.com/pytorch/pytorch/pull/142210)) ([#​135638](https://redirect.github.com/pytorch/pytorch/pull/135638)) ([#​142298](https://redirect.github.com/pytorch/pytorch/pull/142298))
* Added Python 3.13 in CI/CD support and prototype support for Python 3.13t in CD (Only Linux and Linux aarch64 torch binaries)  ([#​136001](https://redirect.github.com/pytorch/pytorch/pull/136001)) ([#​137396](https://redirect.github.com/pytorch/pytorch/pull/137396)) ([#​138037](https://redirect.github.com/pytorch/pytorch/pull/138037)) ([#​138629](https://redirect.github.com/pytorch/pytorch/pull/138629)) ([#​140137](https://redirect.github.com/pytorch/pytorch/pull/140137)) ([#​138095](https://redirect.github.com/pytorch/pytorch/pull/138095)) ([#​141572](https://redirect.github.com/pytorch/pytorch/pull/141572)) ([#​140733](https://redirect.github.com/pytorch/pytorch/pull/140733)) ([#​141264](https://redirect.github.com/pytorch/pytorch/pull/141264)) ([#​142294](https://redirect.github.com/pytorch/pytorch/pull/142294)) ([#​137142](https://redirect.github.com/pytorch/pytorch/pull/137142)) ([#​137127](https://redirect.github.com/pytorch/pytorch/pull/137127)) ([#​139533](https://redirect.github.com/pytorch/pytorch/pull/139533)) ([#​140733](https://redirect.github.com/pytorch/pytorch/pull/140733))

##### ROCM

* Added AMDSMI support for UUID input ([#​129741](https://redirect.github.com/pytorch/pytorch/pull/129741))
* Added faster HW support for packed bfloat16 and fp16 for MI300 ([#​135770](https://redirect.github.com/pytorch/pytorch/pull/135770))
* Improved performance of reductions on 1D and 2D tensors. ([#​137737](https://redirect.github.com/pytorch/pytorch/pull/137737))

##### XPU

* Add `torch.xpu.mem_get_info` API: Introduces a new API to retrieve memory information for XPU devices (see the sketch after this list). ([#​141230](https://redirect.github.com/pytorch/pytorch/pull/141230))
* Add architecture property to XPU device: Adds new properties to XPU devices to query architecture details. ([#​138186](https://redirect.github.com/pytorch/pytorch/pull/138186))
* Add `elapsed_time` method for XPU events: Introduces a method to measure elapsed time between XPU events. ([#​140865](https://redirect.github.com/pytorch/pytorch/pull/140865))
* Add `torch.xpu.get_arch_list` and `torch.xpu.get_gencode_flags`: Introduces new APIs to retrieve architecture lists and code generation flags for XPU. ([#​137773](https://redirect.github.com/pytorch/pytorch/pull/137773))
* Add quantized convolution support for XPU backend ([#​133080](https://redirect.github.com/pytorch/pytorch/pull/133080))
* Enable XPU device support for LSTMCell operators ([#​140246](https://redirect.github.com/pytorch/pytorch/pull/140246))
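
A quick sketch exercising several of the new XPU introspection APIs listed above (requires an XPU-enabled build and device; output is illustrative):

```py
import torch

if torch.xpu.is_available():
    free, total = torch.xpu.mem_get_info()      # new memory-info API
    print(f"free={free} total={total}")
    print(torch.xpu.get_device_properties(0))   # includes new architecture fields
    print(torch.xpu.get_arch_list())            # supported architectures
    print(torch.xpu.get_gencode_flags())        # codegen flags
```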

##### Profiler

* Hide ProfilerStep Alignment behind Experimental Config ([#​137668](https://redirect.github.com/pytorch/pytorch/pull/137668))
* Add functionality to call dump function of NCCL profiler plugin ([#​137523](https://redirect.github.com/pytorch/pytorch/pull/137523))

##### Export

* Add `torch.export.export_for_training()` API to perform export that can run training. Note that this replaces the non-documented `capture_pre_autograd_graph` feature ([#​135374](https://redirect.github.com/pytorch/pytorch/pull/135374), [#​135918](https://redirect.github.com/pytorch/pytorch/pull/135918), [#​135549](https://redirect.github.com/pytorch/pytorch/pull/135549), [#​143224](https://redirect.github.com/pytorch/pytorch/pull/143224))
* New packaging APIs for AOTInductor `torch._inductor.aoti_compile_and_package` 
  * Previously, AOTInductor (through `torch._export.aot_compile`) would return a path to a .so. However, this does not provide a great user experience, as other files are used along with the .so, for example .cubin files and serialized extern kernels. So, we introduce a new package format, “[PT2 archive](https://docs.google.com/document/d/1RQ4cmywilnFUT1VE-4oTGxwXdc8vowCSZsrRgo3wFA8/edit#heading=h.v2y2jgnwc56a)”, which is what we intend to have AOTInductor return. This essentially contains a zipfile of all the files that need to be used by AOTInductor, and allows users to send everything needed to other environments. There is also functionality to package multiple models into one artifact, and to store additional metadata inside of the package.
* [AOTInductor Minifier](https://pytorch.org/docs/main/torch.compiler_aot_inductor_minifier.html). If you encounter an error while using AOT Inductor APIs such as `torch._inductor.aoti_compile_and_package` or `torch._inductor.aoti_load_package`, or while running the model loaded by aoti_load_package on some inputs, you can use the AOTInductor Minifier to create a minimal nn.Module that reproduces the error. ([#​139351](https://redirect.github.com/pytorch/pytorch/pull/139351), [#​140999](https://redirect.github.com/pytorch/pytorch/pull/140999), [#​141159](https://redirect.github.com/pytorch/pytorch/pull/141159), [#​141156](https://redirect.github.com/pytorch/pytorch/pull/141156))
* AOTInductor: ABI-compatible mode code generation. In order to guarantee ABI backward compatibility, we have carefully defined a set of stable C interfaces in libtorch and make sure AOTInductor generates code that only refers to the specific set of APIs and nothing else in libtorch. We will keep the set of C APIs stable across Pytorch versions and thus provide BC guarantees for AOTInductor-compiled models.
* `export.export_for_inference` and `export.exported_program.core_aten_decompositions` API. `export_for_inference` returns a functional, post-dispatch ATen IR. ([#​135912](https://redirect.github.com/pytorch/pytorch/pull/135912)).

##### Inductor

* Move stack allocation related configs in AOTI ([#​139093](https://redirect.github.com/pytorch/pytorch/pull/139093)). All stack allocation related configs now have a aot_inductor prefix, so `torch.compile(options={"use_minimal_arrayref_interface": True})(foo)` is now `torch.compile(options={"aot_inductor.use_minimal_arrayref_interface": True})(foo)` and `torch.compile(options={"allow_stack_allocation": True})(foo)` is now `torch.compile(options={"aot_inductor.allow_stack_allocation": True})(foo)`.
* Move `torch._utils.is_compiling` to `torch.compiler.is_compiling` ([#​127690](https://redirect.github.com/pytorch/pytorch/pull/127690)) Rewrite `torch._utils.is_compiling()` to `torch.compiler.is_compiling()`.
* Added option `​​autotune_num_choices_displayed` to control number of kernel options displayed ([#​138788](https://redirect.github.com/pytorch/pytorch/pull/138788))
* Added option `force_pointwise_cat` concat support through inductor using pointwise kernels ([#​141966](https://redirect.github.com/pytorch/pytorch/pull/141966)). This forces concat to be generated as a pointwise op with masked loads.
* New config option `annotate_training` that adds Inductor annotations to NVTX.  ([#​130429](https://redirect.github.com/pytorch/pytorch/pull/130429))
* Introduces an option `triton_kernel_default_layout_constraint` to tweak stride settings for user-defined Triton kernels, enhancing customization and flexibility ([#​135530](https://redirect.github.com/pytorch/pytorch/pull/135530)).
* User can patch inductor config to enable strict custom kernel layout constraints by changing `torch.compile(options={"triton_kernel_default_layout_constraint": "needs_fixed_stride_order"})(foo)` ([#​135581](https://redirect.github.com/pytorch/pytorch/pull/135581)). 
* External callable registration API `register_external_matmul` for Matmul tuning candidates in Inductor ([#​130774](https://redirect.github.com/pytorch/pytorch/pull/130774)).
* Adds support for Windows Arm64 to enhance platform compatibility ([#​133088](https://redirect.github.com/pytorch/pytorch/pull/133088)).
* Integrates support for AMD triton stream pipeliner in ROCm to enhance performance ([#​139881](https://redirect.github.com/pytorch/pytorch/pull/139881)).
* Adds support for TRITON_INTERPRET in Inductor ([#​140841](https://redirect.github.com/pytorch/pytorch/pull/140841)).
* Adds update_constant_buffer pybind support in AOTInductor ([#​140755](https://redirect.github.com/pytorch/pytorch/pull/140755)).
* Provides an option `package_constants_in_so` to exclude weights from .so files in AOTInductor ([#​141997](https://redirect.github.com/pytorch/pytorch/pull/141997)).
* Adds `load_constants` to the package API ([#​142246](https://redirect.github.com/pytorch/pytorch/pull/142246)).
* Enables auto functionalize v2 by default ([#​136685](https://redirect.github.com/pytorch/pytorch/pull/136685)).
* Adds raise_error_on_ignored_optimization to the aoti config ([#​138035](https://redirect.github.com/pytorch/pytorch/pull/138035)).
* Adds stats summary (mean/min/max, etc) for jit inductor tensor value printing ([#​135887](https://redirect.github.com/pytorch/pytorch/pull/135887)).

##### ONNX

* Models using `torch.cond` are supported ([#​137428](https://redirect.github.com/pytorch/pytorch/pull/137428))

`torch.cond` is the recommended way to introduce control flows that can be converted to an ONNX model.

* Users can provide a `custom_translation_table` to provide custom implementations for converting operators to ONNX ([#​135403](https://redirect.github.com/pytorch/pytorch/pull/135403))

This is useful when you need to override an implementation or provide one that is not currently implemented. Refer to the tutorials for a more complete description of the operator registration mechanism. 

```py
# Define the translation using ONNX Script
from onnxscript import opset18 as op

def sym_not_onnx(input):
    return op.Not(input)

torch.onnx.export(
    ...,
    dynamo=True,
    custom_translation_table={  # Then provide it here
        torch.sym_not: sym_not_onnx,
    },
)
```
  • ONNXProgram has a new optimize() method (#​137667)

Users can run optimize() to flatten nested structures in the ONNX graph, perform constant folding and remove redundancies in the ONNX model. Calling optimize() after exporting to ONNX is recommended.

onnx_program = torch.onnx.export(..., dynamo=True) 
onnx_program.optimize()  # Optimize the graph before saving is recommended 
onnx_program.save(...) 
  • Users can now use complex constants in their models and export to ONNX (#​138279)

Improvements

Python Frontend
  • Add support for fp16 and bf16 to torch.special.i1 (#​137899)
  • Add option to disable checksum computation in torch.save (#​137735)
  • Speed up fp16 tensors printing (#​141927)
  • Add support for fp16 for torch.adaptive_pool3d on cpu (#​136091)
  • Add support for fp8* to torch.masked_select (#​141928)
  • Add support for complex fp16 to fill_empty_deterministic_ (#​137488)
  • Remove dependency on numpy for serialization for XLA/open registration devices without numpy (#​137444, #​137600)
  • Fix torch.{linalg.}norm complex half support (#​133661)
NN Frontend
  • Allow global module hook to accept keyword arguments (#​137403)
  • Add APIs to separate norm calculation and gradient scaling in nn.utils.clip_grad_norm_ (#​139662)
  • Add Half support for reflection and replication padding on CPU (#​135931)
  • Add weight argument to MSELoss, HuberLoss and L1Loss (#​132049)
  • Gaussian nll loss scalar variance support (#​138931)
  • Added validation for input types for torch.nn.Linear and torch.nn.Bilinear (#​135596)
Optim
  • Improve ReduceLROnPlateau and Optimizer.add_param_group interaction by auto-updating min_lrs (#​137637)
  • Allow SequentialLR to include ChainedScheduler (#​133450)
Composability
Decompositions, FakeTensor and meta tensors

Operator decompositions, FakeTensors and meta tensors are used to trace out a graph in torch.compile and torch.export. They received several improvements:

  • Several operator decomps received improvements/bugfixes:
  • New decompositions for a few pytorch operators:
  • Several meta implementations of operators received improvements/bugfixes:
  • New meta tensor implementations for a few pytorch operators:

Dynamic shapes

We made many improvements and bugfixes to dynamic shapes in torch.compile

  • Minor error message improvements (#​136671, #​138310)
  • Make native_layer_norm_backward work with unbacked SymInts (#​136798)
  • Make masked_fill work with unbacked SymInts (#​137060)
  • Improve tracing speed of torch.cat with large numbers of symbolic variables (#​139653)
  • Improve performance of canonicalize_bool_expr (#​135621)
  • Improve performance of sympy_generic_le (#​135622)
  • Simplify expr before getting implications in _maybe_evaluate_static (#​135499)
  • use a fast expand algorithm (#​135999, #​136163)
  • Fix calling Add._from_args and Mul._from_args (#​136143)
  • Dynamic shape logging improvements in tlparse (#​136508, #​141068, #​140867)
  • Avoid some quadratic behavior of dynamic shapes involving aliasing + mutation of graph inputs (#​136857)
  • Tensorify compute on Python scalars (#​136674)
  • Delay mul/pow expansion for _SympyT to enable more folding (#​138235)
  • Fix bug in unbacked_bindings for a*u0 (#​138136)
  • Remove parallel_and and parallel_or (#​138135)
  • Explicitly avoid recording when should_record_events is false in record_shapeenv_event (#​138965)
  • Better support for dynamic shapes with tensor subclasses (#​125941)
  • support symfloats in translation validation (#​139457)
  • Add trunc to z3 validator (#​140886)
  • Refactor ShapeGuardPrinter for future C++ addition (#​140968)
  • Fix another item memo loss location + bool specialization bug (#​139587)
  • Optimize increment summations (#​140822)
  • Only compute new_untracked_symbols and new_unbacked_bindings if needed. (#​140083)
  • Use has_free_unbacked_symbols instead of bool(free_unbacked_symbols) (#​140027)
  • Try to simplify FloorDiv axioms implications when needed during evaluations. (#​141267)
  • Fix AttributeError: 'int' object has no attribute 'node' due to constant prop (#​141250)
  • Update tensorify pass to specialize symfloats we didn't tensorify away (#​139564)
  • Add TORCHDYNAMO_EXTENDED_ADVICE (#​137159) (#​137196)
  • Do not try to optimize new implications in get_implications (#​139738)

Custom operators

We improved the existing torch.library APIs and added new ones.

  • Add new torch.library.triton_op API (#​141880)
  • Fix partitioner behavior on user triton kernels (#​136878)
  • Add links to new Custom Ops Landing Page (#​137933, #​139634)
  • Fix torch.library.register_vmap to work with nested vmap (#​137306)
  • No-op torch.library.custom_op APIs on torch.deploy (#​139509)
  • Optimize mutable torch.library.custom_op overhead (#​139513)
  • Improve torch.library.opcheck and register_autograd docs (#​141883)
Distributed
  • c10d
    • Added FP8 support to NaN checker (#​135891, #​135961, #​136115)
    • Added support for cuStreamWriteValue32 (#​136488)
    • Improved the detection robustness in CudaDMAConnectivityDetector (#​137530)
    • Simplified barrier implementation and further decouple CPU/GPU synchronization (#​137516)
    • Threw value error if passing world_size=0 to TCPStore (#​137792)
    • Performed retry connection timeout failures in socket (#​138003)
    • Added an API to get the future result (success or failure) of a collective and customized error handling (#​137799)
    • Disabled watchdog thread in blockingWait mode (#​138001)
    • Added default value for nccl_nonblocking_timeout (#​138374)
    • Ensured nccl comm is ready before all accesses (#​138384)
    • Used a promise to delay watchdog shutdown (#​138828)
    • Supported optional backend if device_id provided (#​140963)
    • Supported group ranks in P2POp and batch_isend_irecv (#​141054)
    • Enabled CudaEventCache by default and add multi device support (#​140975)
    • Added an API to retrieve default distributed backend from device (#​140536)
    • Supported rank, world size, group name/desc overrides for PyProcessGroup (#​141529)
    • Added detection of the accelerator type when the backend is not specified (#​142216)
    • Used task submitter TLS in gloo working threads (#​142184)
    • Added _reduce_scatter_base to c10d::ProcessGroupUCC (#​138021)
  • DDP
    • Made DDPOptimizer work with HOPs (#​138787)
    • Made DDP Quantization hooks backend Agnostic (#​138816)
    • Used device-agnostic runtime API in DDP/FSDP instead of cuda device specific. (#​137678)
  • FSDP
    • Updates real device in FSDP state_dict_utils (#​134994)
    • Generalized of FSDP common for non-cuda execution (#​133209)
  • FSDP2
    • Added _set_unshard_async_op (#​135523)
    • Added module, mp policy to fsdp_pre_all_gather (#​136129)
    • Added check for contiguous parameters (#​137000)
    • Relaxed even sharding requirement for all-gather extensions (#​137005)
    • Used stream and event based on device (#​136843)
    • Added shard_placement_fn arg (#​137496)
    • Added set_unshard_in_backward(bool) (#​137922)
    • Made module-to-state mapping use weakrefs (#​139650)
    • Removed CUDA-like device check in fsdp2. (#​139539)
  • DTensor
    • Allowed user to manual_seed different seed on device mesh and only synced RNG state in WORLD when manual_seed has not been called (#​141223)
    • Supported matmul in inference_mode (#​142197)
  • Pipeline
    • Made PipelineStage support meta initialization (#​136243)
    • Allowed non-0 stages to accept kwargs (#​136416)
    • added schedule simulator and chrometrace dump (#​138134)
    • Supported separate dI / dW and V-schedules (#​131762)
    • Updated schedules to use I, B actions. (#​138886)
    • Added type checking to _backward functions (#​140019)
    • Allowed multiple backward grads (#​140981)
    • Improved schedule csv loading (#​142009)
  • TorchElastic
    • Added TryExcept when decoding healthcheck port (#​136574)
    • Skipped store barrier and store get in host assign (#​136865)
  • Checkpoint
    • Throw an error when state_dict and saved tensors are different sizes (#​141571)
Profiler
  • Create Auto-Trace Frontend for Trace ID (#​139310)
  • Add skip_first_wait to profiler.schedule (#​141512)
  • Add CUDA Overhead to Auto-trace (#​142271)
Nested Tensor
Functorch
  • Add vmap support for torch.scatter_reduce (#​135547)
  • Add vmap support for native_dropout_backward (#​140140)
  • Allow optional positional arguments for torch.func.functional_call (#​134643)
Quantization
  • Add uint16 support for observer (#​136238)
  • change flatten recipe for X86InductorQuantizer (#​136298)
  • Update choose_qparams_per_token op to output correct shape for scales and zp (#​136807)
  • Make QAT Fu

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Never, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate-bot renovate-bot requested review from a team as code owners June 4, 2025 23:09
@trusted-contributions-gcf trusted-contributions-gcf bot added kokoro:force-run Add this label to force Kokoro to re-run the tests. owlbot:run Add this label to trigger the Owlbot post processor. labels Jun 4, 2025
@product-auto-label product-auto-label bot added the samples Issues that are directly related to samples. label Jun 4, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Hello @renovate-bot, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team, gemini-code-assist here to provide a summary of this pull request. This PR, opened by renovate-bot, aims to update the torch dependency in the dataflow/run-inference/requirements.txt file. The primary motivation for this update is to address recently disclosed security vulnerabilities in older versions of torch, specifically CVE-2025-32434 (a Remote Command Execution vulnerability) and CVE-2025-2953 (a Denial of Service vulnerability). The PR updates the version from 2.2.2 to 2.7.1 to incorporate the necessary security fixes.

Highlights

  • Security Fixes: This update is crucial as it includes fixes for significant security vulnerabilities (CVE-2025-32434 and CVE-2025-2953) found in previous versions of torch.

Changelog

  • dataflow/run-inference/requirements.txt
    • Updated the torch dependency from version 2.2.2 to 2.7.1.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


Dependency update,
Fixing bugs, making code safe,
Security patch.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 4, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request updates the torch dependency from version 2.2.2 to 2.7.1. This is an important update as it addresses two security vulnerabilities: CVE-2025-32434 (Remote Command Execution in torch.load) and CVE-2025-2953 (Denial of Service in torch.mkldnn_max_pool2d).

The change is confined to the requirements.txt file and correctly updates the version string. The Renovate bot's compatibility checks indicate that this new version should be compatible with the existing dependencies, which is reassuring.

Given the security implications of the older torch version, this update is highly recommended. The code in main.py (via Apache Beam's PytorchModelHandlerTensor) and download_model.py involves model loading and saving, so ensuring PyTorch is up-to-date with security patches is crucial.

Overall, this is a beneficial and necessary update.

Summary of Findings

  • Security Vulnerability Remediation: The primary purpose of this PR is to update torch to v2.7.1, which addresses critical security vulnerabilities (CVE-2025-32434 and CVE-2025-2953) present in the previous version (v2.2.2). This is a crucial improvement for the security posture of the application.
  • Dependency Update: The torch dependency has been updated from 2.2.2 to 2.7.1 in dataflow/run-inference/requirements.txt. This change is correctly implemented.

Merge Readiness

This pull request directly addresses known security vulnerabilities by updating the torch library. The change is minimal and appears to be compatible according to automated checks. I recommend merging this PR to enhance the security of the project. As I am an AI assistant, I am not authorized to approve pull requests; please ensure it undergoes any further necessary human review and testing procedures before merging.

@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from 8976e2a to 54dff54 Compare June 5, 2025 00:27
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 5, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 5, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from 54dff54 to c3c3982 Compare June 11, 2025 04:44
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 11, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 11, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from c3c3982 to 6fd487d Compare June 11, 2025 21:19
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 11, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 11, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from 6fd487d to 5010020 Compare June 12, 2025 04:41
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 12, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 12, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from 5010020 to c062b4a Compare June 12, 2025 14:44
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 12, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 12, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from c062b4a to 9e87ef8 Compare June 13, 2025 00:13
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 13, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 13, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from 9e87ef8 to 289f877 Compare June 13, 2025 08:53
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 13, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Jun 13, 2025
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 1, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 1, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from da898aa to 864a61e Compare September 2, 2025 02:26
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 2, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 2, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from 864a61e to a5c6a93 Compare September 2, 2025 10:44
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 2, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 2, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from a5c6a93 to 6332ab4 Compare September 2, 2025 18:17
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 2, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 2, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from 6332ab4 to b8a21aa Compare September 3, 2025 02:24
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 3, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 3, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from b8a21aa to d77c93c Compare September 3, 2025 10:26
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 3, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 3, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from d77c93c to ba02696 Compare September 3, 2025 18:11
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 3, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 3, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from ba02696 to f896db9 Compare September 4, 2025 06:06
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 4, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 4, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from f896db9 to 12c842c Compare September 4, 2025 15:58
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 4, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 4, 2025
@renovate-bot renovate-bot force-pushed the renovate/pypi-torch-vulnerability branch from 12c842c to ea02eeb Compare September 4, 2025 20:05
@trusted-contributions-gcf trusted-contributions-gcf bot added the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 4, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Add this label to force Kokoro to re-run the tests. label Sep 4, 2025