Labels: module: functionalization, module: functorch, module: pt2-dispatcher, oncall: pt2, triaged
Description
🚀 The feature, motivation and pitch
torch.compile, torch.func, and friends do not work well with mutating operations. However, in-place assignment is sometimes the most concise way to implement an operation.
JAX has an immutable indexed-assignment API, jax.numpy.ndarray.at (https://docs.jax.dev/en/latest/_autosummary/jax.numpy.ndarray.at.html). Could a similar feature be added to PyTorch tensors?
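For reference, JAX spells this as `y = x.at[idx].set(value)`, which returns a new array and leaves `x` untouched. A minimal sketch of what an analogous functional helper could look like in today's PyTorch is below; the `at_set` name is hypothetical, not an existing PyTorch API, and the clone-plus-mutate pattern is just one possible way to express it functionally:

```python
import torch

def at_set(x: torch.Tensor, index, value) -> torch.Tensor:
    """Return a copy of `x` with `x[index]` replaced by `value`; `x` is unchanged.

    Hypothetical helper mirroring JAX's `x.at[index].set(value)`.
    """
    out = x.clone()
    out[index] = value  # mutation only touches the fresh copy
    return out

x = torch.zeros(5)
y = at_set(x, 2, 1.0)
print(x)  # tensor([0., 0., 0., 0., 0.])
print(y)  # tensor([0., 0., 1., 0., 0.])
```

A built-in version could presumably avoid the extra copy in compiled code, since functionalization already rewrites clone-then-mutate patterns into out-of-place ops.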
Alternatives
No response
Additional context
No response
cc @bdhirsh @ezyang @chauhang @penguinwu @zou3519 @Chillee @samdow @kshitij12345 @albanD