@iseeyuan iseeyuan commented Nov 26, 2019

Stack from ghstack:

Introduce the function jit.export_opnames(module), which returns a list of all operator names used in the module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, reducing the mobile binary size.

Example:
import torch
m = torch.jit.load("example.pt")
print(torch.jit.export_opnames(m))

The output is in alphabetical order:
['aten::convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']
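As a hedged illustration of the custom-build use case, the returned list can be deduplicated, sorted, and written to a file for a build script to consume. The hard-coded op list and the file name used_ops.txt are assumptions for illustration; in practice the list would come from torch.jit.export_opnames(m):

```python
# Illustrative sketch: persist the exported op names for a custom build.
# The op list below is hard-coded for the example; in a real workflow it
# would come from torch.jit.export_opnames(m). "used_ops.txt" is a
# made-up file name, not part of this PR.
op_names = ["aten::add.Tensor", "aten::relu", "aten::add.Tensor"]

unique_ops = sorted(set(op_names))  # deduplicated, alphabetical order
with open("used_ops.txt", "w") as f:
    f.write("\n".join(unique_ops))
```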

Differential Revision: D18801619

@iseeyuan iseeyuan requested a review from apaszke as a code owner November 26, 2019 17:45
@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Nov 26, 2019
iseeyuan pushed a commit that referenced this pull request Nov 26, 2019
ghstack-source-id: c5dad13
Pull Request resolved: #30467

@ZolotukhinM ZolotukhinM left a comment

I think this doesn't have to be a method of Module, it can be a standalone pass. Please also find a couple of other comments inline.

std::vector<std::string> Module::opnames() const {
  std::unordered_set<std::string> names;
  export_opnames(*this, names);
  return std::vector<std::string>(names.begin(), names.end());
}

The order of elements in the returned vector is non-deterministic, since they come from std::unordered_set. That could lead to hard-to-debug issues, so please make sure the order is well-specified.
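The reviewer's point can be sketched in Python: iterating a hash set yields an unspecified order, so sorting before returning gives a well-specified (alphabetical) result. The op names below are arbitrary examples; this mirrors switching from std::unordered_set iteration order to a sorted result in the C++ pass.

```python
# Sketch of the fix for non-deterministic ordering: collect names into a
# set (for deduplication), then sort before returning, so the output
# order is well-specified regardless of hash-set iteration order.
names = {"aten::relu", "aten::addmm", "prim::ListConstruct", "aten::cat"}
stable = sorted(names)  # alphabetical, independent of insertion/hash order
```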

const std::string& filename,
const ExtraFilesMap& extra_files = ExtraFilesMap()) const;

std::vector<std::string> opnames() const;

It doesn't have to be a method of the torch::jit::script::Module class. I think it can and should live separately, and you have actually already implemented it that way.

auto methods = m.get_methods();
for (const auto& method : methods) {
const auto& func = method.function();
torch::jit::Code code(func.graph());

Why do we need to construct the Code object? Will traversing all nodes in the graph not be enough for some reason?

Contributor Author

I think Code would contain the final ops for the interpreter (when we bypass the optimizations on mobile). I'm not sure whether emitting nodes would introduce more ops in the future. Another reason is that I'm not sure how to tell whether a node is an operator node.

All exporters use nodes directly and hence this function should use that too. Also, Code doesn't do any optimizations, it just wraps some nodes with Instructions - it's an internal detail of how interpreter works and should not affect exporting.
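The node-based traversal the reviewer suggests might be sketched as follows. Node and Block here are invented stand-ins for the torch::jit IR (assumptions for illustration, not the real API), but the shape of the recursion matches what the reviewer describes: visit each node, record its schema name, then recurse into nested blocks such as loop and if bodies.

```python
# Toy sketch of a node-based opname pass. Node and Block are made-up
# stand-ins for the torch::jit IR; the real pass walks Graph nodes in C++.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                   # e.g. "aten::add"
    overload: str = ""          # e.g. "Tensor"
    blocks: list = field(default_factory=list)  # nested control-flow blocks

@dataclass
class Block:
    nodes: list = field(default_factory=list)

def collect_opnames(block, out):
    """Record each node's name, then recurse into its sub-blocks."""
    for node in block.nodes:
        name = node.kind + ("." + node.overload if node.overload else "")
        out.add(name)
        for sub in node.blocks:
            collect_opnames(sub, out)

graph = Block([
    Node("aten::add", "Tensor"),
    Node("prim::Loop", blocks=[Block([Node("aten::relu")])]),
])
ops = set()
collect_opnames(graph, ops)
```

Recursing into sub-blocks matters because ops used only inside loop or if bodies would otherwise be missed from the exported list.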

namespace {
void export_opnames(const script::Module& m, std::unordered_set<std::string>& opnames) {
auto methods = m.get_methods();
for (const auto& method : methods) {

Nit: for (const auto& method : m.get_methods()) { — no need for a separate methods variable.

Introduce the function script.module.opnames(), which returns a list of all operator names used in the module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, reducing the mobile binary size.

Example: 
import torch
m = torch.jit.load("example.pt")
print(m.opnames())

Outputs:
['aten::append.Tensor', 'aten::addmm', 'aten::tanh', 'aten::add_.Tensor', 'aten::matmul', 'aten::relu', 'aten::cat', 'prim::TupleUnpack', 'prim::ListConstruct', 'aten::t', 'aten::_convolution', 'aten::mul.Tensor', 'aten::permute', 'aten::add.Tensor', 'aten::max.dim', 'aten::dropout', 'aten::embedding', 'prim::TupleConstruct']
iseeyuan pushed a commit that referenced this pull request Nov 27, 2019
ghstack-source-id: bc4ffaa
Pull Request resolved: #30467
@iseeyuan iseeyuan requested a review from ZolotukhinM November 27, 2019 19:43
Introduce the function jit.opnames(module), which returns a list of all operator names used in the module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, reducing the mobile binary size.

Example: 
import torch
m = torch.jit.load("example.pt")
print(torch.jit.opnames(m))

The output is in alphabetical order:
['aten::_convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']
iseeyuan pushed a commit that referenced this pull request Nov 27, 2019
ghstack-source-id: 1911b10
Pull Request resolved: #30467
@suo suo removed their request for review November 28, 2019 00:19
}
} // namespace

std::vector<std::string> opnames(const script::Module& m) {

Nit: It's better to use a verb for the function name (e.g. export_opnames).


@ZolotukhinM ZolotukhinM left a comment

Thanks, it looks better! I think we should still switch to using nodes directly (without going through Code; please see the inline comment), and we also need some tests for this.

iseeyuan pushed a commit that referenced this pull request Dec 3, 2019
ghstack-source-id: e9a13fd
Pull Request resolved: #30467
iseeyuan pushed a commit that referenced this pull request Dec 3, 2019
ghstack-source-id: 45a7cec
Pull Request resolved: #30467
@iseeyuan iseeyuan requested a review from ZolotukhinM December 3, 2019 15:53
Contributor Author

iseeyuan commented Dec 3, 2019

Updated to use nodes in graph directly. @ZolotukhinM

Introduce the function jit.export_opnames(module), which returns a list of all operator names used in the module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, reducing the mobile binary size.

Example: 
import torch
m = torch.jit.load("example.pt")
print(torch.jit.export_opnames(m))

The output is in alphabetical order:
['aten::_convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']

@ZolotukhinM ZolotukhinM left a comment

This is good to go once you add tests, thanks!

void export_opnames(const script::Module& m, std::set<std::string>& opnames) {
for (const auto& method : m.get_methods()) {
const auto& func = method.function();
torch::jit::Code code(func.graph());

This is unused now.

if (op) {
auto opname = node->schema().operator_name();
std::string namestr = opname.name;
if (!opname.overload_name.empty())

Nit: use curly braces even for one-line ifs.
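For reference, the overload-name formatting in the snippet above can be sketched in Python as a direct transliteration (format_opname is a made-up helper name for illustration): append the overload only when it is non-empty.

```python
def format_opname(name, overload_name):
    # Append ".<overload>" only when an overload name is present,
    # e.g. ("aten::add", "Tensor") -> "aten::add.Tensor".
    if overload_name:
        return name + "." + overload_name
    return name
```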

iseeyuan pushed a commit that referenced this pull request Dec 4, 2019
ghstack-source-id: ac72910
Pull Request resolved: #30467
@facebook-github-bot facebook-github-bot deleted the gh/iseeyuan/37/head branch December 10, 2019 15:19
wuhuikx pushed a commit to wuhuikx/pytorch that referenced this pull request Jan 30, 2020
Summary:
Pull Request resolved: pytorch#30467

Introduce the function jit.export_opnames(module), which returns a list of all operator names used in the module and its submodules. One use case is a mobile custom build that links only the operators in the returned list, reducing the mobile binary size.

Example:
import torch
m = torch.jit.load("example.pt")
print(torch.jit.export_opnames(m))

The output is in alphabetical order:
['aten::_convolution', 'aten::add.Tensor', 'aten::add_.Tensor', 'aten::addmm', 'aten::append.Tensor', 'aten::cat', 'aten::dropout', 'aten::embedding', 'aten::matmul', 'aten::max.dim', 'aten::mul.Tensor', 'aten::permute', 'aten::relu', 'aten::t', 'aten::tanh', 'prim::ListConstruct', 'prim::TupleConstruct', 'prim::TupleUnpack']

Test Plan: Imported from OSS

Differential Revision: D18801619

Pulled By: iseeyuan

fbshipit-source-id: f9b198d3e82b095daf704ee595d8026ad889bb13