Commit 0060c63

Update 2021-6-15-pytorch-1.9-released.md
1 parent abde932 commit 0060c63

1 file changed: +1 −1 lines changed


_posts/2021-6-15-pytorch-1.9-released.md

+1 −1
@@ -162,7 +162,7 @@ Inference Mode API allows significant speed-up for inference workloads while rem
 
 *torch.package* is a new way to package PyTorch models in a self-contained, stable format. A package will include both the model’s data (e.g. parameters, buffers) and its code (model architecture). Packaging a model with its full set of Python dependencies, combined with a description of a conda environment with pinned versions, can be used to easily reproduce training. Representing a model in a self-contained artifact will also allow it to be published and transferred throughout a production ML pipeline while retaining the flexibility of a pure-Python representation. For more details, refer to [the documentation](https://pytorch.org/docs/1.9.0/package.html).
 
-### (Prototype) prepare_for_inference (for TorchScript Modules)
+### (Prototype) prepare_for_inference
 
 prepare_for_inference is a new prototype feature that takes in a module and performs graph-level optimizations to improve inference performance, depending on the device. It is meant to be a PyTorch-native option that requires minimal changes to users’ workflows. For more details, see the documentation for the TorchScript version [here](https://github.com/pytorch/pytorch/blob/master/torch/jit/_freeze.py#L168) or the FX version [here](https://github.com/pytorch/pytorch/blob/master/torch/fx/experimental/optimization.py#L234).
 
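For context on the *torch.package* workflow described in the diff above, here is a minimal sketch of exporting a model and loading it back. It assumes a simple picklable module whose only code dependency is torch itself; the archive name, the `"models"`/`"model.pkl"` resource names, and the extern rule are illustrative choices, not part of the post.

```python
import torch
from torch.package import PackageExporter, PackageImporter

# A small model to package. Its only code dependency is torch itself.
model = torch.nn.Linear(4, 2)

# Export: bundle the model's pickled data together with a record of its
# code dependencies into a single self-contained archive.
with PackageExporter("linear_package.pt") as exporter:
    # Treat torch as an external dependency rather than packaging its source.
    exporter.extern(["torch", "torch.**"])
    exporter.save_pickle("models", "model.pkl", model)

# Import: reload the model from the archive alone, without relying on the
# original source tree being importable.
importer = PackageImporter("linear_package.pt")
loaded = importer.load_pickle("models", "model.pkl")
print(loaded(torch.randn(1, 4)).shape)
```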

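And for the prepare_for_inference prototype itself, a rough sketch of the TorchScript-side flow under stated assumptions: `torch.jit.script` and `torch.jit.freeze` are existing entry points, and the prototype pass is assumed here to be exposed as `torch.jit.optimize_for_inference` (the function the `_freeze.py` link points to); since the feature is a prototype, the exact name and location may differ.

```python
import torch

class ConvBN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3)
        self.bn = torch.nn.BatchNorm2d(8)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))

# Freezing targets inference only, so the module must be in eval mode.
model = ConvBN().eval()

# Script the module, then freeze it so parameters and attributes are inlined
# into the graph and graph-level passes (e.g. conv/bn folding) can run.
scripted = torch.jit.script(model)
frozen = torch.jit.freeze(scripted)

# Prototype device-dependent optimization pass described in the post
# (assumed entry point: torch.jit.optimize_for_inference).
optimized = torch.jit.optimize_for_inference(frozen)

out = optimized(torch.randn(1, 3, 32, 32))
print(out.shape)
```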
Comments (0)