Dump operator names of a script module #30467
Conversation
ZolotukhinM
left a comment
I think this doesn't have to be a method of Module, it can be a standalone pass. Please also find a couple of other comments inline.
torch/csrc/jit/script/module.cpp
Outdated
std::vector<std::string> Module::opnames() const {
  std::unordered_set<std::string> names;
  export_opnames(*this, names);
  return std::vector<std::string>(names.begin(), names.end());
Order of elements in the returned vector is non-deterministic since they come from std::unordered_set. That might result in hard-to-debug issues, so please make sure the order is well-specified.
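The determinism concern can be sketched in isolation: switching from std::unordered_set to std::set yields an alphabetically sorted, reproducible result regardless of insertion order. This is a toy standalone example, not the actual torch::jit code:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Toy sketch: collecting names into a std::set (instead of std::unordered_set)
// gives a deterministic, alphabetically sorted, deduplicated result no matter
// what order the names were seen in.
std::vector<std::string> collect_sorted(const std::vector<std::string>& seen) {
  std::set<std::string> names(seen.begin(), seen.end());  // sorted + unique
  return std::vector<std::string>(names.begin(), names.end());
}
```

Note that '_' sorts before lowercase letters in ASCII, which is why names like aten::_convolution appear first in the sorted output.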
torch/csrc/jit/script/module.h
Outdated
const std::string& filename,
const ExtraFilesMap& extra_files = ExtraFilesMap()) const;

std::vector<std::string> opnames() const;
It doesn't have to be a method of torch::jit::script::Module class. I think it can and should live separately, and you actually already implemented it this way.
torch/csrc/jit/script/module.cpp
Outdated
auto methods = m.get_methods();
for (const auto& method : methods) {
  const auto& func = method.function();
  torch::jit::Code code(func.graph());
Why do we need to construct the Code object? Will traversing all nodes in the graph not be enough for some reason?
I think Code would contain the final ops for the interpreter (when we bypass the optimizations on mobile). I'm not sure whether emitting nodes would introduce more ops in the future. Another reason is that I'm not sure how to tell whether a node is an operator node.
All exporters use nodes directly and hence this function should use that too. Also, Code doesn't do any optimizations, it just wraps some nodes with Instructions - it's an internal detail of how interpreter works and should not affect exporting.
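As a rough illustration of the suggested approach — walking graph nodes directly, including nested sub-blocks, rather than constructing an interpreter Code object — here is a toy standalone sketch. The types below are hypothetical stand-ins, not the real torch::jit API:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Toy stand-in for a graph node: a node may carry an operator name and may
// own nested sub-blocks (e.g. the bodies of if/loop nodes).
struct ToyNode {
  std::string op_name;  // e.g. "aten::relu"; empty for non-operator nodes
  std::vector<std::vector<ToyNode>> blocks;  // nested sub-blocks
};

// Recursively visit every node, record operator names, and descend into
// sub-blocks so ops inside control flow are not missed.
void collect_opnames(const std::vector<ToyNode>& nodes,
                     std::set<std::string>& out) {
  for (const ToyNode& n : nodes) {
    if (!n.op_name.empty()) {
      out.insert(n.op_name);
    }
    for (const auto& block : n.blocks) {
      collect_opnames(block, out);  // recurse into the sub-block's nodes
    }
  }
}
```

The key design point is the recursion into blocks: a flat iteration over top-level nodes would miss operators nested inside control-flow constructs.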
torch/csrc/jit/script/module.cpp
Outdated
namespace {
void export_opnames(const script::Module& m, std::unordered_set<std::string>& opnames) {
  auto methods = m.get_methods();
  for (const auto& method : methods) {
Nit: write for (const auto& method : m.get_methods()) {; there is no need for the separate methods variable.
Introduce function script.module.opnames(), which returns a list of all operator names used in that module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, reducing binary size.
Example:
import torch
m = torch.jit.load("example.pt")
print(m.opnames())
Outputs:
['aten::append.Tensor', 'aten::addmm', 'aten::tanh', 'aten::add_.Tensor', 'aten::matmul', 'aten::relu', 'aten::cat', 'prim::TupleUnpack', 'prim::ListConstruct', 'aten::t', 'aten::_convolution', 'aten::mul.Tensor', 'aten::permute', 'aten::add.Tensor', 'aten::max.dim', 'aten::dropout', 'aten::embedding', 'prim::TupleConstruct']
Introduce function jit.opnames(module), which returns a list of all operator names used in the module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, reducing binary size.
Example:
import torch
m = torch.jit.load("example.pt")
print(torch.jit.opnames(m))
The outputs are in alphabetical order:
['aten::_convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']
torch/csrc/jit/export.cpp
Outdated
}
} // namespace

std::vector<std::string> opnames(const script::Module& m) {
Nit: It's better to use a verb for the function name (e.g. export_opnames).
ZolotukhinM
left a comment
Thanks, it looks better! I think we should still switch to using nodes directly (without going through Code; please see the inline comment), and we also need some tests for this.
Updated to use nodes in the graph directly. @ZolotukhinM
Introduce function jit.export_opnames(module), which returns a list of all operator names used in the module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, reducing binary size.
Example:
import torch
m = torch.jit.load("example.pt")
print(torch.jit.export_opnames(m))
The outputs are in alphabetical order:
['aten::_convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']
ZolotukhinM
left a comment
This is good to go once you add tests, thanks!
torch/csrc/jit/export.cpp
Outdated
void export_opnames(const script::Module& m, std::set<std::string>& opnames) {
  for (const auto& method : m.get_methods()) {
    const auto& func = method.function();
    torch::jit::Code code(func.graph());
This is unused now.
torch/csrc/jit/export.cpp
Outdated
if (op) {
  auto opname = node->schema().operator_name();
  std::string namestr = opname.name;
  if (!opname.overload_name.empty())
Nit: use curly braces even for one-line ifs.
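The overload-name formatting discussed in this hunk can be illustrated standalone: the overload name, when non-empty, is appended after a dot (e.g. "aten::add" plus overload "Tensor" becomes "aten::add.Tensor"). ToyOperatorName below is a stand-in struct for illustration, not the real c10 type:

```cpp
#include <cassert>
#include <string>

// Stand-in for an operator name with an optional overload component.
struct ToyOperatorName {
  std::string name;           // e.g. "aten::add"
  std::string overload_name;  // e.g. "Tensor"; may be empty
};

// Build the qualified name: append ".overload" only when an overload exists.
std::string qualified_name(const ToyOperatorName& op) {
  std::string namestr = op.name;
  if (!op.overload_name.empty()) {
    namestr += "." + op.overload_name;
  }
  return namestr;
}
```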
Summary: Pull Request resolved: pytorch#30467

Introduce function jit.export_opnames(module), which returns a list of all operator names used in the module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, reducing binary size.

Example:
import torch
m = torch.jit.load("example.pt")
print(torch.jit.export_opnames(m))

The outputs are in alphabetical order:
['aten::_convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']

Test Plan: Imported from OSS
Differential Revision: D18801619
Pulled By: iseeyuan
fbshipit-source-id: f9b198d3e82b095daf704ee595d8026ad889bb13
Stack from ghstack:
Introduce function jit.export_opnames(module), which returns a list of all operator names used in the module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, reducing binary size.
Example:
import torch
m = torch.jit.load("example.pt")
print(torch.jit.export_opnames(m))
The outputs are in alphabetical order:
['aten::_convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']
Differential Revision: D18801619