Testing ONNX branch CI #70586
Conversation
CI failures summary: as of commit 21a41e1, there are no failures (per Dr. CI).
…ytorch#67640)

ghstack-source-id: 60320d8
Pull Request resolved: pytorch#68489
…NNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (pytorch#67460)

The use of ATen as a fallback operator during ONNX conversion is important for increasing operator coverage, or even for providing more efficient implementations than some ONNX ops. Currently this feature is available through `OperatorExportTypes.ONNX_ATEN_FALLBACK`, but it also performs changes to the graph that are runnable by Caffe2 only. This PR restricts the Caffe2-specific graph transformations of the `ONNX_ATEN_FALLBACK` operator export type to builds where PyTorch has Caffe2 support (i.e. BUILD_CAFFE2=1 at build time).

The first version of this PR introduced a new operator export type, `ONNX_ATEN__STRICT_FALLBACK`, which is essentially the same as `ONNX_ATEN_FALLBACK` but without the Caffe2 transformations. We preferred not to introduce a new operator export type and instead refined the existing ATen fallback one.

## BC-breaking note

### The global constant `torch.onnx.PYTORCH_ONNX_CAFFE2_BUNDLE` is removed in favor of a less visible `torch.onnx._CAFFE2_ATEN_FALLBACK`.

`PYTORCH_ONNX_CAFFE2_BUNDLE` is really a dead-code flag that is always set to False. One alternative would be fixing it, but pytorch#66658 disables the Caffe2 build by default. Making a Caffe2 feature a private one seems to make more sense for future deprecation.

### The method `torch.onnx.export` now defaults to ONNX when `operator_export_type` is not specified.

Previously `torch.onnx.export`'s `operator_export_type` was intended to default to `ONNX_ATEN_FALLBACK` when `PYTORCH_ONNX_CAFFE2_BUNDLE` was set, but that could never happen because `PYTORCH_ONNX_CAFFE2_BUNDLE` is always undefined.

ghstack-source-id: 8ae8ac0
Pull Request resolved: pytorch#68490
* Allows implementing symbolic functions for domains other than `aten`, for example `prim`, in symbolic_opset#.py.
* Allows a symbolic function to access extra context if needed, through `SymbolicFunctionState`.
* In particular, the `prim::PythonOp` special case can access the node without needing to pass the node through inputs. Updates will be made downstream, and in a follow-up PR we will remove the previous workaround in the exporter.
* `prim::Loop`, `prim::If`, etc. are now moved out of `_run_symbolic_function` in utils.py and into symbolic_opset9.py.

Motivation for this change:

- Better maintainability and reduced complexity. It becomes easier to add symbolic functions for operators, both simple and complex ones (that need additional context), without the former needing to know about the existence of the latter.
- The design idea was long outdated. prim ops are no longer rare special cases, and they shouldn't all be handled inside `_run_symbolic_function`; as a result that function had become too clumsy. There were also prim op symbolics added in symbolic_opset#.py with the signature `prim_[opname]`, creating separation and confusion.

Co-authored-by: BowenBao <bowbao@microsoft.com>
ghstack-source-id: 577a7e6
Pull Request resolved: pytorch#68491
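The domain-aware dispatch described above can be sketched as a plain registry keyed by `domain::op`. All names here are illustrative, not the exporter's real internals:

```python
# Registry of symbolic functions keyed by "domain::op", so that non-aten
# domains such as "prim" can register symbolics just like "aten" does.
_symbolic_registry = {}

def register_symbolic(domain, op):
    """Decorator registering a symbolic function under domain::op."""
    def deco(fn):
        _symbolic_registry[f"{domain}::{op}"] = fn
        return fn
    return deco

def run_symbolic_function(domain, op, *args):
    """Look up and invoke the symbolic for domain::op, if one exists."""
    fn = _symbolic_registry.get(f"{domain}::{op}")
    if fn is None:
        raise KeyError(f"no symbolic registered for {domain}::{op}")
    return fn(*args)

@register_symbolic("prim", "ConstantChunk")
def prim_constant_chunk(x):
    # Placeholder body; a real symbolic would emit ONNX Split nodes.
    return ("Split", x)
```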
* Initial commit
* Fix flake issue
* Add test tags

ghstack-source-id: b277f18
Pull Request resolved: pytorch#68492
Fixes pytorch#66786. `index_select` only supports an `index` that is a 1-D tensor, while `ONNX::Gather` allows `index` to have rank `q`. Abort constant folding of `ONNX::Gather` if the `index` rank is larger than 1.

Co-authored-by: BowenBao <bowbao@microsoft.com>
ghstack-source-id: 9b53061
Pull Request resolved: pytorch#68493
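The rank restriction can be illustrated with a pure-Python sketch of the two ops' indexing semantics on axis 0 (this is an illustration only, not the actual constant-folding code):

```python
def index_select_rows(data, indices):
    # Like torch.index_select on dim 0: `indices` must be flat (1-D).
    assert all(isinstance(i, int) for i in indices), "index must be 1-D"
    return [data[i] for i in indices]

def onnx_gather_rows(data, indices):
    # Like ONNX Gather on axis 0: `indices` may be arbitrarily nested
    # (rank q); every integer leaf becomes the corresponding row of `data`.
    if isinstance(indices, int):
        return data[indices]
    return [onnx_gather_rows(data, i) for i in indices]

data = [[0, 0], [1, 1], [2, 2]]
# Rank-1 index: both ops agree, so folding via index_select is safe.
flat = onnx_gather_rows(data, [2, 0])
# Rank-2 index: Gather nests the output; index_select cannot express
# this, which is why folding must abort when the index rank exceeds 1.
nested = onnx_gather_rows(data, [[0, 1], [2, 0]])
```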
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
ghstack-source-id: a662f60
Pull Request resolved: pytorch#69544
The arg is not used and was previously deprecated. Also remove torch.onnx._export_to_pretty_string; it's redundant with the public version.

ghstack-source-id: 0d451e0
Pull Request resolved: pytorch#69546
ScriptModule export introduces duplicated ONNX initializers for shared weights, unnecessarily increasing the ONNX model size. This PR de-duplicates ONNX initializers for models exported in eval mode, by checking whether the underlying tensors share the same `data_ptr`, `strides` and `sizes`.

Co-authored-by: BowenBao <bowbao@microsoft.com>
ghstack-source-id: d17dfa4
Pull Request resolved: pytorch#69547
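The de-duplication check can be sketched as keying each initializer on a (data_ptr, strides, sizes) tuple. The tuple representation and function name are illustrative, not the exporter's actual data model:

```python
def dedup_initializers(initializers):
    """initializers: list of (name, key) pairs, where key is an illustrative
    (data_ptr, strides, sizes) tuple describing the underlying storage.
    Returns the names to keep and a remapping for the duplicates."""
    seen = {}   # storage key -> first initializer name seen with it
    keep = []
    remap = {}  # duplicate name -> kept name
    for name, key in initializers:
        if key in seen:
            remap[name] = seen[key]
        else:
            seen[key] = name
            keep.append(name)
    return keep, remap

# Two weights sharing data_ptr, strides and sizes are duplicates.
inits = [
    ("fc1.weight", (0x1000, (4, 1), (4, 4))),
    ("fc2.weight", (0x2000, (4, 1), (4, 4))),
    ("tied.weight", (0x1000, (4, 1), (4, 4))),  # shares fc1's storage
]
keep, remap = dedup_initializers(inits)
```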
* Add Concat to the scalar type analysis pass. By using scalar type analysis for Concat, the exported model can do automatic type promotion for Concat nodes, for example for mixed fp16 and fp32 inputs. Unit tests are based on the original PR pytorch#24378.
* Fix UTs

ghstack-source-id: 4e796b1
Pull Request resolved: pytorch#69548
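The promotion for Concat can be sketched as picking the widest dtype among the inputs and marking the others for an inserted Cast. The widening order below is a simplified illustration; the real pass follows PyTorch's full type-promotion rules:

```python
# Illustrative widening order for a few float dtypes (sketch only).
_RANK = {"float16": 0, "float32": 1, "float64": 2}

def promote_concat_dtypes(input_dtypes):
    """Pick the widest dtype among Concat inputs and report which inputs
    would need a Cast node inserted before the Concat."""
    widest = max(input_dtypes, key=lambda d: _RANK[d])
    casts_needed = [d != widest for d in input_dtypes]
    return widest, casts_needed
```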
[ONNX] Minor clarifications of docstrings

1. Make the description of ONNX_ATEN_FALLBACK more accurate (after pytorch#67460).
2. Specify the minimum and maximum values for opset_version. This is pretty important information and we should not make users dig through source code to find it.

ghstack-source-id: 6572ba2
Pull Request resolved: pytorch#69549
Fix the wiki URL. Also minor reorganization in onnx.rst.

ghstack-source-id: e59c7f1
Pull Request resolved: pytorch#69550

[ONNX] Restore documentation of public functions (pytorch#69623). The build-docs check requires all public functions to be documented. These should really not be public, but we'll fix that later.
This PR adds a new attribute, overload_name, to the ATen node so that third-party applications can implement calls to libtorch without using PyTorch source code. This is necessary because torch's torch::jit::findOperatorFor(fullname) requires a full name, including the operator and overload names. The ATen op was originally created for Caffe2, which leveraged the availability of the PyTorch yaml files to create calls to the aten operators directly, not relying on torch::jit::findOperatorFor.

The first part of the PR refactors all symbolics that create ATen ops, so that there is a single helper for this operator. Next, all symbolics are updated to pass in the relevant overload name, using an empty string when no overload applies.
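Why the overload name matters can be sketched by assembling the fully qualified name that torch::jit::findOperatorFor expects. The helper below is hypothetical, written only to illustrate the naming scheme:

```python
def full_aten_name(op_name, overload_name=""):
    """Build a fully qualified schema name such as "aten::add.Tensor".
    An empty overload_name (the "not applicable" case) is simply omitted."""
    name = f"aten::{op_name}"
    return f"{name}.{overload_name}" if overload_name else name
```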
Extend shape inference support for `Expand`, when value of argument `shape` is unknown. Infer the rank of the output of `Expand`, and set shape to dynamic, if shape of argument `shape` is known. Without this, shape inference aborts, and falls back to the static shape provided by tracer, which is incorrect in many cases. Co-authored-by: BowenBao <bowbao@microsoft.com>
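The rank-only inference for `Expand` can be sketched as follows. The function name and the `None`-as-dynamic convention are illustrative, not the actual shape-inference code:

```python
def infer_expand_output(input_dims, shape_rank):
    """Sketch of rank-only inference for ONNX Expand when the *values* of the
    `shape` input are unknown but its length (i.e. the output rank) is known.
    Returns a dim list where None marks a dynamic dimension."""
    # Expand broadcasts, so the output rank is the larger of the input rank
    # and the number of entries in `shape`.
    out_rank = max(len(input_dims), shape_rank)
    # The concrete dim values depend on the unknown shape values, so every
    # dimension is marked dynamic instead of falling back to a static guess.
    return [None] * out_rank
```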
…ear (pytorch#69232) Co-authored-by: David Fan <jiafa@microsoft.com>
* Add module name as a pythonOp attr
* Move to trace_post_record
* Add tests
* Code compactness
Enable tests that are fixed by ORT 1.10
Force-pushed from bf001ea to 21a41e1.
All checks passed, closing.
No description provided.