
[ONNX] Improve error message for supported model input types in ONNX export API#50119

Merged
BowenBao merged 1 commit into pytorch:onnx_ms_1 from spandantiwari:spandantiwari/improve_error_msg1 on Jan 6, 2021

Conversation

@spandantiwari

This PR updates the error message that is generated when an unsupported input type is passed to the torch.onnx.export API.
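A descriptive error of this kind typically names both the offending type and the accepted types. The following is a hypothetical sketch of such a check, not the actual torch.onnx implementation; the function name, the supported-type list, and the message wording are all assumptions for illustration:

```python
# Hypothetical sketch (NOT the actual torch.onnx.export validation code):
# reject unsupported input types with a message that names the bad type
# and lists the types the export API accepts.
SUPPORTED_INPUT_TYPES = ("Tensor", "tuple", "list", "dict")  # assumed set

def check_onnx_export_input(arg):
    """Raise a descriptive ValueError for an unsupported export input type."""
    type_name = type(arg).__name__
    # A name-based check stands in for the real Tensor/container isinstance checks.
    if type_name not in SUPPORTED_INPUT_TYPES:
        raise ValueError(
            f"Unsupported input type: {type_name!r}. "
            f"Inputs to the export API must be one of: "
            f"{', '.join(SUPPORTED_INPUT_TYPES)}."
        )

try:
    check_onnx_export_input(3.14)  # a bare float is not a supported input type
except ValueError as err:
    print(err)
```

The point of the improvement is that the user sees which argument type was rejected and what to pass instead, rather than a generic failure deeper in the export pipeline.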

@facebook-github-bot added the cla signed and oncall: jit (Add this issue/PR to JIT oncall triage queue) labels Jan 5, 2021
@facebook-github-bot
Contributor

facebook-github-bot commented Jan 5, 2021

💊 CI failures summary and remediations

As of commit 0aed6b3 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/2)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Jan 05 23:32:14 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Jan 05 23:32:14 processing existing schema:  alltoall(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, Tensor[] _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jan 05 23:32:14 processing existing schema:  send(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jan 05 23:32:14 processing existing schema:  recv(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2, int _3) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jan 05 23:32:14 processing existing schema:  recv_anysource(__torch__.torch.classes.dist_c10d.ProcessGroup _0, Tensor[] _1, int _2) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jan 05 23:32:14 processing existing schema:  barrier(__torch__.torch.classes.dist_c10d.ProcessGroup _0) -> (__torch__.torch.classes.dist_c10d.Work _0)
Jan 05 23:32:14 processing existing schema:  __init__(__torch__.torch.classes.dist_c10d.frontend _0) -> (None _0)
Jan 05 23:32:14 processing existing schema:  new_process_group_helper(__torch__.torch.classes.dist_c10d.frontend _0, int _1, int _2, int[] _3, str _4, __torch__.torch.classes.dist_c10d.Store _5, str? _6, int _7) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
Jan 05 23:32:14 processing existing schema:  get_process_group_by_name(__torch__.torch.classes.dist_c10d.frontend _0, str _1) -> (__torch__.torch.classes.dist_c10d.ProcessGroup _0)
Jan 05 23:32:14 processing existing schema:  get_name_of_process_group(__torch__.torch.classes.dist_c10d.frontend _0, __torch__.torch.classes.dist_c10d.ProcessGroup _1) -> (str _0)
Jan 05 23:32:14 processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (None _0)
Jan 05 23:32:14 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
Jan 05 23:32:14 
Jan 05 23:32:14 Broken ops: [
Jan 05 23:32:14 	aten::_test_ambiguous_defaults.a(Tensor dummy, int a=1, int b=1) -> (Tensor)
Jan 05 23:32:14 	aten::_test_ambiguous_defaults.b(Tensor dummy, int a=2, str b="2") -> (Tensor)
Jan 05 23:32:14 ]
Jan 05 23:32:14 + cleanup
Jan 05 23:32:14 + retcode=1
Jan 05 23:32:14 + set +x
Jan 05 23:32:14 =================== sccache compilation log ===================
Jan 05 23:32:14 =========== If your build fails, please take a look at the log above for possible reasons ===========
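The backward-compatibility job above compares the operator schemas registered by the current build against a stored list of existing schemas and reports any that changed. A minimal sketch of that idea (hypothetical, not the actual PyTorch check_backward_compatibility tooling) might look like:

```python
# Hypothetical sketch of a schema backward-compatibility check, in the spirit
# of the CI job above (NOT the actual PyTorch BC-check implementation).
def is_backward_compatible(old_schema: str, new_schema: str) -> bool:
    """An existing schema must still be present verbatim; any change
    (e.g. a different default value) is treated as breaking."""
    return old_schema == new_schema

existing = [
    "aten::_test_ambiguous_defaults.a(Tensor dummy, int a=1, int b=1) -> (Tensor)",
]
current = [
    # The default for `b` changed, so this op is reported as broken.
    "aten::_test_ambiguous_defaults.a(Tensor dummy, int a=1, int b=2) -> (Tensor)",
]
broken = [old for old, new in zip(existing, current)
          if not is_backward_compatible(old, new)]
print("Broken ops:", broken)
```

The real check is more permissive (some schema changes, such as adding a new defaulted argument, can be allowed), which is why the CI message asks the author to confirm with the PyTorch team whether the breakage is intended.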

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_test (2/2)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

Jan 06 00:06:39 what(): boxed_kernel_func_ == nullptr INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h":220, please report a bug to PyTorch. Tried to set a manually boxed kernel for a kernel that already has a boxed kernel set.
Jan 06 00:06:36 + '[' /tmp/pytorch_py_test.log '!=' '' ']'
Jan 06 00:06:36 + run_all_tests
Jan 06 00:06:36 + tee /tmp/pytorch_py_test.log
Jan 06 00:06:36 + run_dynamic python3 /var/lib/jenkins/workspace/xla/test/../../test/test_view_ops.py -v TestViewOpsXLA
Jan 06 00:06:36 + echo 'Running in DynamicShape mode: python3' /var/lib/jenkins/workspace/xla/test/../../test/test_view_ops.py -v TestViewOpsXLA
Jan 06 00:06:36 Running in DynamicShape mode: python3 /var/lib/jenkins/workspace/xla/test/../../test/test_view_ops.py -v TestViewOpsXLA
Jan 06 00:06:36 + XLA_EXPERIMENTAL=nonzero:masked_select
Jan 06 00:06:36 + run_test python3 /var/lib/jenkins/workspace/xla/test/../../test/test_view_ops.py -v TestViewOpsXLA
Jan 06 00:06:36 + python3 /var/lib/jenkins/workspace/xla/test/../../test/test_view_ops.py -v TestViewOpsXLA
Jan 06 00:06:39 terminate called after throwing an instance of 'c10::Error'
Jan 06 00:06:39   what():  boxed_kernel_func_ == nullptr INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h":220, please report a bug to PyTorch. Tried to set a manually boxed kernel for a kernel that already has a boxed kernel set.
Jan 06 00:06:39 Exception raised from setManuallyBoxedKernel_ at /var/lib/jenkins/workspace/aten/src/ATen/core/boxing/KernelFunction_impl.h:220 (most recent call first):
Jan 06 00:06:39 frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x7d (0x7fe0bcbaf31d in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
Jan 06 00:06:39 frame #1: <unknown function> + 0xf41ee0 (0x7fe0bdd15ee0 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
Jan 06 00:06:39 frame #2: c10::impl::OperatorEntry::registerKernel(c10::Dispatcher const&, c10::optional<c10::DispatchKey>, c10::KernelFunction, c10::optional<c10::impl::CppSignature>, std::unique_ptr<c10::FunctionSchema, std::default_delete<c10::FunctionSchema> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x5eb (0x7fe0bdd12ddb in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
Jan 06 00:06:39 frame #3: c10::Dispatcher::registerImpl(c10::OperatorName, c10::optional<c10::DispatchKey>, c10::KernelFunction, c10::optional<c10::impl::CppSignature>, std::unique_ptr<c10::FunctionSchema, std::default_delete<c10::FunctionSchema> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x134 (0x7fe0bdd09f54 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
Jan 06 00:06:39 frame #4: torch::Library::_impl(char const*, torch::CppFunction&&) & + 0x439 (0x7fe0bdd40739 in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_cpu.so)
Jan 06 00:06:39 frame #5: torch::Library& torch::Library::impl<char const*, at::Tensor (*)(at::Tensor const&, at::Tensor const&)>(char const*, at::Tensor (*&&)(at::Tensor const&, at::Tensor const&)) & + 0x64 (0x7fe0af8e6e24 in /opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/_XLAC.cpython-36m-x86_64-linux-gnu.so)
Jan 06 00:06:39 frame #6: <unknown function> + 0x68e4cf (0x7fe0af8d84cf in /opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/_XLAC.cpython-36m-x86_64-linux-gnu.so)
Jan 06 00:06:39 frame #7: torch::detail::TorchLibraryInit::TorchLibraryInit(torch::Library::Kind, void (*)(torch::Library&), char const*, c10::optional<c10::DispatchKey>, char const*, unsigned int) + 0xdb (0x7fe0af8e558b in /opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/_XLAC.cpython-36m-x86_64-linux-gnu.so)
Jan 06 00:06:39 frame #8: <unknown function> + 0x223571 (0x7fe0af46d571 in /opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/_XLAC.cpython-36m-x86_64-linux-gnu.so)

This comment was automatically generated by Dr. CI.

@BowenBao BowenBao merged commit e9cbf65 into pytorch:onnx_ms_1 Jan 6, 2021
spandantiwari pushed a commit to spandantiwari/pytorch that referenced this pull request Jan 8, 2021
facebook-github-bot pushed a commit that referenced this pull request Jan 13, 2021
Summary:
[ONNX] ONNX dev branch merge 01-06-2021
- [ONNX] Support onnx if/loop sequence output in opset 13 - (#49270)
- Symbolic function for torch.square (#49446)
- [ONNX] Add checks in ONNXSetDynamicInputShape (#49783) …
- [ONNX] Enable export of aten::__derive_index (#49514) …
- [ONNX] Update symbolic for unfold (#49378) …
- [ONNX] Update the sequence of initializers in the exported graph so that it is the same as the inputs. (#49798)
- [ONNX] Enable opset 13 ops (#49612) …
- [ONNX] Improve error message for supported model input types in ONNX export API. (#50119)
- [ONNX] Add a post-pass for If folding (#49410)

Pull Request resolved: #50163

Reviewed By: pbelevich

Differential Revision: D25821059

Pulled By: SplitInfinity

fbshipit-source-id: 9f511a93d9d5812d0ab0a49d61ed0fa5f8066948

Labels

cla signed, oncall: jit, open source


4 participants