ONNX Export Adaptive Pooling #17412
Conversation
I believe this will fix #17377

@maximegregoire yes, as long as the output size is a factor of the input size (which is the case in your example).
houseroad
left a comment
I think later we should add AdaptivePooling to ONNX, since not all cases can be covered by a regular Pooling node.
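To see why a regular Pooling node cannot cover every adaptive case, here is a minimal sketch (not the exporter's code) of the region boundaries PyTorch's adaptive pooling uses, `start = floor(i*in/out)`, `end = ceil((i+1)*in/out)`. When the output size divides the input size the regions all have equal width and a fixed kernel/stride works; otherwise the widths differ and no single kernel/stride pair reproduces them:

```python
import math

def adaptive_regions(in_size, out_size):
    # Region i spans [floor(i*in/out), ceil((i+1)*in/out)),
    # mirroring PyTorch's adaptive pooling index formula.
    return [(i * in_size // out_size,
             math.ceil((i + 1) * in_size / out_size))
            for i in range(out_size)]

# Factor case: equal widths, equivalent to kernel=2, stride=2.
print(adaptive_regions(6, 3))   # [(0, 2), (2, 4), (4, 6)]

# Non-factor case: widths 2, 3, 2 -- no fixed kernel/stride matches.
print(adaptive_regions(5, 3))   # [(0, 2), (1, 4), (3, 5)]
```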
```python
# (input is not CompleteTensorType or output size not factor of input size)
# then we call GlobalAveragePool and return None for the indices
if output_size == [1] * len(output_size) and type == "AveragePool":
    return g.op("GlobalAveragePool", input)
```
why not put "GlobalAveragePool" and "GlobalMaxPool" both under this branch?
GlobalMaxPool does not return indices information, so we will have to return None for the indices when it is used.
That's why GlobalMaxPool is only called when max_poolxd_with_indices cannot be called (i.e., when the input is not a CompleteTensorType or the output size is not a factor of the input size).
Let me know if the comment is not clear and I'll try reformulating :)
Yes, I understand that. My point is that you can reorganize the code a bit.
```python
return g.op("GlobalAveragePool", input)
if output_size == [1] * len(output_size):
    if type == "AveragePool":
        return g.op("GlobalAveragePool", input)
    if type == "MaxPool":
        return g.op("GlobalMaxPool", input), None
if input.type().kind() != "CompleteTensorType":
    return _unimplemented(name, 'input size not accessible')
```
Yes, that would be clearer and easier to read, but I still think we would be missing support for one of the cases.
The difference occurs with the following example:
AdaptiveMaxPool1d((1), return_indices=True), with any input shape.
In the proposed code, since output_size == [1] * len(output_size) and type == "MaxPool", 'GlobalMaxPool' will be called and None will be returned for the indices.
So exporting the indices will fail.
But by calling 'max_pool1d_with_indices' instead of 'GlobalMaxPool', we can get the indices output and return it successfully; that's why 'GlobalMaxPool' is not called right away.
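The distinction above can be sketched in pure Python (illustrative stand-ins, not the real ONNX or ATen operators): a global max pool produces only values, while a with-indices pool whose kernel covers the whole input produces both the value and its argmax, which is what `return_indices=True` needs:

```python
def global_max_pool(xs):
    # Mirrors ONNX GlobalMaxPool: pooled value only, no indices.
    return max(xs)

def max_pool_with_indices(xs, kernel):
    # Mirrors max_pool1d_with_indices with kernel == len(xs):
    # a single window covering the whole input, plus its argmax.
    assert kernel == len(xs)
    best = max(range(len(xs)), key=lambda i: xs[i])
    return xs[best], best

x = [3.0, 7.0, 1.0]
print(global_max_pool(x))            # 7.0 -- indices unavailable
print(max_pool_with_indices(x, 3))   # (7.0, 1) -- value and index
```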
```python
if mod != [0] * len(mod):
    if output_size == [1] * len(output_size):
        return g.op("GlobalMaxPool", input), None
    return _unimplemented(name, 'output size that are not factor of input size')
```
Let's make this error message clearer: if the output size is not a factor of the input size, exporting is not supported.
_unimplemented already decorates the string as follows: "ONNX export failed on " + op + " because " + msg + " not supported".
Maybe something like "exporting when the output size is not a factor of input size is" would be clearer?
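For reference, here is a sketch of how that message format composes (the function name is a stand-in; the real _unimplemented helper lives inside PyTorch's ONNX export code and also raises):

```python
def unimplemented_message(op, msg):
    # The format quoted above: _unimplemented wraps msg in this template.
    return "ONNX export failed on " + op + " because " + msg + " not supported"

# With the proposed wording, the composed message reads naturally:
print(unimplemented_message(
    "adaptive_max_pool1d",
    "exporting when the output size is not a factor of input size is"))
```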
Sounds good.
houseroad
left a comment
Looks good, thanks.
facebook-github-bot
left a comment
@houseroad is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.