@neginraoof (Contributor) commented Oct 17, 2019

This PR contains:
1. Pad updates for the opset 11 symbolic
2. avg_pool updates for opset 11
3. TopK updates for opset 11
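
For context on the spec changes behind these updates: opset 10 moved TopK's k from an attribute to an input, opset 11 added TopK's largest/sorted attributes, and opset 11 moved Pad's pads and constant value from attributes to inputs. A hedged sketch of the opset 11 TopK node form as emitted through the symbolic g.op API (illustrative values, not the merged code):

# k must be a 1-D int64 tensor input from opset 10 on;
# largest/sorted are attributes new in TopK-11.
k = g.op("Constant", value_t=torch.tensor([3], dtype=torch.int64))
values, indices = g.op("TopK", input, k, axis_i=-1,
                       largest_i=1, sorted_i=1, outputs=2)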

@neginraoof requested a review from apaszke as a code owner on October 17, 2019, 16:12
@neginraoof (Contributor Author) commented:

Looking into fixing test_topk in test_onnx_opset.

_unimplemented("TopK", "Out parameter is not supported")
if not _is_value(k):
    k = g.op("Constant", value_t=torch.tensor(k, dtype=torch.int64))
k = g.op("Reshape", k, g.op("Constant", value_t=torch.tensor([1])))
A collaborator commented on the diff above:
nit: Reshape can be put in an else branch; in the if branch, construct the Constant with torch.tensor([k], dtype=torch.int64).

@neginraoof (Contributor Author) replied:
That won't work for opset 10 if k is a value. I updated this with a minor change.
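
For reference, the suggested restructuring would look roughly like this (a hedged sketch of the reviewer's proposal, not the merged code, which keeps the Reshape unconditional):

if not _is_value(k):
    # k is a Python int: emit it directly as a 1-D int64 constant
    k = g.op("Constant", value_t=torch.tensor([k], dtype=torch.int64))
else:
    # k is already a graph value: reshape it to a 1-D tensor
    k = g.op("Reshape", k, g.op("Constant", value_t=torch.tensor([1])))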

@neginraoof (Contributor Author):

@pytorchbot retest this please

@spandantiwari left a comment:

Overall LGTM. Added one comment that needs attention.

                 mode_s='constant',
                 value_f=0.)
else:
    input = g.op("Pad", input,


I am not sure about this pattern of capturing branched implementations for different opsets in a single helper function in symbolic_helper.py. I think it is better to have the symbolic implementations for different opsets in their own files, e.g., in this case, symbolic_opset10.py and symbolic_opset11.py.

@neginraoof (Contributor Author) replied Oct 23, 2019:

Thanks Spandan. I agree with you about scenarios like this, with different branches for different opset versions. In this case, though, I want to avoid adding the whole symbolic function for avg_pool to the opset 11 file, since the diff between the opset 10 and 11 symbolic functions is just this one line.
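
For illustration, the single-helper branching pattern under discussion might look something like this (a hedged sketch with simplified, hypothetical names, not the exact PR code):

import torch
from torch.onnx import symbolic_helper as sym_help

def _pad_before_avg_pool(g, input, padding):
    # Hypothetical helper: emit the constant Pad that precedes AveragePool
    # when count_include_pad is requested, branching on the export opset.
    if sym_help._export_onnx_opset_version < 11:
        # opset <= 10: pads and value are node attributes
        return g.op("Pad", input, pads_i=padding, mode_s="constant", value_f=0.)
    # opset 11: pads is an int64 tensor input
    pads = g.op("Constant", value_t=torch.tensor(padding, dtype=torch.int64))
    return g.op("Pad", input, pads, mode_s="constant")

The alternative being suggested is to keep two full avg_pool symbolics, one in symbolic_opset10.py and one in symbolic_opset11.py, at the cost of duplicating everything except this Pad call.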

@neginraoof (Contributor Author):

cc @houseroad please review


# extension = 2 * dim - len(pad): the number of trailing zeros needed so
# that paddings covers begin/end pairs for all input dimensions.
extension = g.op("Sub",
                 g.op("Mul",
                      g.op("Constant", value_t=torch.tensor(dim, dtype=torch.int64)),
                      g.op("Constant", value_t=torch.tensor(2, dtype=torch.int64))),
                 pad_len)
# Concat pad with extension: paddings = [dim_n_begin, dim_n_end, dim_n-1_begin, dim_n-1_end, 0, 0, ...]
paddings = g.op("Concat", pad, g.op("ConstantOfShape", extension, value_t=torch.tensor([0])), axis_i=0)
@dashesy (Contributor) commented Nov 1, 2019:

I get this error from this line:

Type parameter (T) bound to different types (tensor(int32) and tensor(int64) in node (__Concat_213).

I changed my code to pass pad.long() to nn.functional.pad instead of pad.int() as a workaround; maybe the code here should convert the types?

@neginraoof (Contributor Author) replied:

ONNX only supports int64 pads for now. I added the cast, but you cannot have int32 pads in ONNX at the moment.
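
The added cast presumably amounts to something along these lines (a hedged sketch, not the exact merged line): ONNX Concat requires all of its inputs to share one element type, and Pad's pads input must be int64, so the pad value is cast before concatenation.

# sym_help is torch.onnx.symbolic_helper; cast a possibly-int32 pad value
# to int64 before it is concatenated with the int64 extension above.
pad = g.op("Cast", pad, to_i=sym_help.cast_pytorch_to_onnx["Long"])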

@facebook-github-bot left a comment:

@houseroad has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@neginraoof (Contributor Author):

@houseroad Could you please let me know if there are any related failures?

@houseroad (Member) left a comment:

LGTM

@facebook-github-bot:

@houseroad merged this pull request in ebc216a.
