Conversation

@lara-hdr
Contributor

No description provided.

@maximegregoire

I believe this will fix #17377

@lara-hdr
Contributor Author

@maximegregoire yes, as long as the output size is a factor of the input size (which is the case in your example)
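For reference, a minimal sketch of the constraint (the module and shapes below are only illustrative, not taken from this PR):

import torch
import torch.nn as nn

# 14 is divisible by 7, so AdaptiveAvgPool2d((7, 7)) on a 14x14 input can be
# lowered to a regular AveragePool with kernel size 2 and stride 2 on export.
model = nn.AdaptiveAvgPool2d((7, 7))
x = torch.randn(1, 3, 14, 14)
torch.onnx.export(model, x, "adaptive_avg_pool.onnx")

# With a 15x15 input, 15 % 7 != 0, so no fixed kernel/stride reproduces the
# adaptive behavior and the export cannot use a plain Pooling node.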

@houseroad houseroad self-requested a review February 23, 2019 00:08
Member

@houseroad houseroad left a comment

I think later we should add AdaptivePooling to ONNX, since not all cases can be covered by a regular Pooling node.

# (input is not CompleteTensorType or output size not factor of input size)
# then we call GlobalAveragePool and return None for the indices
if output_size == [1] * len(output_size) and type == "AveragePool":
    return g.op("GlobalAveragePool", input)
Member

Why not put "GlobalAveragePool" and "GlobalMaxPool" both under this branch?

Contributor Author

GlobalMaxPool does not return indices information, so we would have to return None for the indices when it is used.
That's why GlobalMaxPool is only called when max_poolxd_with_indices cannot be called (when the input is not CompleteTensorType or the output size is not a factor of the input size).
Let me know if the comment is not clear and I'll try reformulating :)

Member

Yes, I understand that. My point is that you can reorganize the code a bit.

Suggested change
return g.op("GlobalAveragePool", input)
if output_size == [1] * len(output_size):
    if type == "AveragePool":
        return g.op("GlobalAveragePool", input)
    if type == "MaxPool":
        return g.op("GlobalMaxPool", input), None
if input.type().kind() != "CompleteTensorType":
    return _unimplemented(name, 'input size not accessible')

Contributor Author

Yes, that would be clearer and easier to read, but I still think we would be missing support for one of the cases.

The difference occurs with the following example:
AdaptiveMaxPool1d((1), return_indices=True), with any input shape.
In the proposed code, since output_size == [1] * len(output_size) and type == "MaxPool", 'GlobalMaxPool' will be called and None will be returned for the indices, so exporting the indices will fail.
But by calling 'max_pool1d_with_indices' instead of 'GlobalMaxPool', we can get the indices output and return it successfully; that's why 'GlobalMaxPool' is not called right away.
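A small sketch of the case described above (shapes are illustrative):

import torch
import torch.nn as nn

# Output size 1 with return_indices=True: the module itself returns indices.
pool = nn.AdaptiveMaxPool1d(1, return_indices=True)
x = torch.randn(2, 4, 8)   # 8 is a multiple of 1, so the factor condition holds
out, idx = pool(x)         # idx must be preserved by the export

# Lowering to GlobalMaxPool would force idx to be None, while lowering to
# max_pool1d_with_indices (kernel size 8 here) keeps the indices output.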

if mod != [0] * len(mod):
    if output_size == [1] * len(output_size):
        return g.op("GlobalMaxPool", input), None
    return _unimplemented(name, 'output size that are not factor of input size')
Member

Let's make this error message clearer: if the output size is not a factor of the input size, exporting is not supported.

Contributor Author

_unimplemented already decorates the string as follows: "ONNX export failed on " + op + " because " + msg + " not supported".
Maybe something like "exporting when the output size is not a factor of input size is" would be clearer?
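For illustration, how the assembled message would read with that wording, based on the decoration quoted above (the op name here is just an example):

op = "adaptive_max_pool1d"
msg = "exporting when the output size is not a factor of input size is"
print("ONNX export failed on " + op + " because " + msg + " not supported")
# ONNX export failed on adaptive_max_pool1d because exporting when the
# output size is not a factor of input size is not supported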

Member

Sounds good.

Member

@houseroad houseroad left a comment

Looks good, thanks.

Contributor

@facebook-github-bot facebook-github-bot left a comment

@houseroad is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
