@mrsalehi

Summary:
Adds torch::nn::Flatten module support for the C++ API.

Issue: #25883

Reviewer: @yf225

@yf225 (Contributor) left a comment

@mrsalehi Thanks a lot for the great work! I left some comments.

}

Tensor FlattenImpl::forward(const Tensor& input) {
  return input.flatten();
@yf225 (Contributor):
In the Python version, torch.nn.Flatten takes start_dim and end_dim as optional constructor arguments, and we'd need to do the same for the C++ version.
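
A minimal sketch of what such an options struct could look like, assuming the usual TORCH_ARG pattern and defaults matching Python's torch.nn.Flatten(start_dim=1, end_dim=-1); the exact form is of course up to this PR:

#include <torch/torch.h>  // pulls in TORCH_ARG/TORCH_API; a real header would include only what it needs

/// Sketch only: options for the `Flatten` module, with both arguments
/// optional and defaulted to match the Python version.
struct TORCH_API FlattenOptions {
  /// First dimension to flatten. Default: 1
  TORCH_ARG(int64_t, start_dim) = 1;
  /// Last dimension to flatten. Default: -1
  TORCH_ARG(int64_t, end_dim) = -1;
};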

@yf225 (Contributor):

We would also need to add tests for the optional constructor arguments start_dim and end_dim. Thanks a lot for your help!
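
Purely as an illustration, such a test could look roughly like the sketch below, using GoogleTest (which the test_api binary is built on) and the builder-style options sketched above; the test name is hypothetical and the PR's actual tests may differ:

#include <gtest/gtest.h>
#include <torch/torch.h>

// Sketch: exercises the optional start_dim/end_dim constructor arguments.
TEST(FlattenSketchTest, OptionalStartAndEndDim) {
  auto input = torch::randn({2, 3, 4, 5});

  // Default construction: flatten everything after the batch dimension.
  torch::nn::Flatten flatten_default;
  ASSERT_EQ(flatten_default(input).sizes().vec(),
            std::vector<int64_t>({2, 60}));

  // Non-default start_dim/end_dim passed through FlattenOptions.
  torch::nn::Flatten flatten_partial(
      torch::nn::FlattenOptions().start_dim(2).end_dim(3));
  ASSERT_EQ(flatten_partial(input).sizes().vec(),
            std::vector<int64_t>({2, 3, 20}));
}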

@mrsalehi (Author) commented Oct 24, 2019

@yf225 Sorry for the delay in making the changes.
I think the implementation should be alright, though I am not quite sure whether I have run the tests properly. Could you please guide me through running the tests for a C++ module?

@yf225 (Contributor) left a comment

@mrsalehi Thanks a lot for the update! I left some comments. To run the tests for a C++ module, we can run ./build/bin/test_api --gtest_filter=ModulesTest* --gtest_stack_trace_depth=10 --gmock_verbose=info from the PyTorch root folder after building PyTorch locally :D


/// Options for the `Flatten` module.
struct TORCH_API FlattenOptions {
  FlattenOptions(int64_t start_dim, int64_t end_dim);
@yf225 (Contributor):
I think for options that have two or more optional arguments, we usually don't provide the non-default constructor (e.g. CosineEmbeddingLossOptions), so we should likely remove the constructor here.

}

Tensor FlattenImpl::forward(const Tensor& input) {
  return torch::flatten(input, options.start_dim(), options.end_dim());
@yf225 (Contributor):

We might be able to call

Suggested change
-  return torch::flatten(input, options.start_dim(), options.end_dim());
+  return input.flatten(options.start_dim(), options.end_dim());

to match the Python version even better :D

/// A placeholder for Flatten operator
class TORCH_API FlattenImpl : public Cloneable<FlattenImpl> {
 public:
  FlattenImpl(int64_t start_dim, int64_t end_dim)
      : FlattenImpl(FlattenOptions(start_dim, end_dim)) {}
@yf225 (Contributor):
We can remove this constructor and only keep explicit FlattenImpl(const FlattenOptions& options_);
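
Put together with the options change above, the trimmed-down declaration might then look roughly like this (a sketch with includes and namespaces omitted, not necessarily the PR's final code; reset() is required by Cloneable, and pretty_print is optional):

/// Sketch only: `Flatten` module implementation, assuming the constructor-free
/// FlattenOptions above (all arguments optional with defaults).
class TORCH_API FlattenImpl : public Cloneable<FlattenImpl> {
 public:
  explicit FlattenImpl(const FlattenOptions& options_ = {});

  void reset() override;

  /// Pretty prints the `Flatten` module into the given `stream`.
  void pretty_print(std::ostream& stream) const override;

  Tensor forward(const Tensor& input);

  /// The options with which this module was constructed.
  FlattenOptions options;
};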

@mrsalehi (Author) commented Oct 29, 2019

Hi @yf225!
I fixed the issues and ran the tests, and everything worked fine. But I still have some questions.
It is mentioned that the implementation of C++ modules must exactly match their Python counterparts. I doubt this is true for CosineEmbeddingLoss: all of the arguments of this module are optional in Python, yet there is no options={} in the signature of the constructor in the C++ version. Moreover, margin is printed to the stream in the C++ implementation while it is not printed in the Python implementation.
I would appreciate it if you could clarify this for me!

@yf225 (Contributor) commented Oct 29, 2019

> there is no options={} in the signature of the constructor in the C++ version

I believe this should be what we are looking for:

struct TORCH_API CosineEmbeddingLossImpl : public Cloneable<CosineEmbeddingLossImpl> {
  explicit CosineEmbeddingLossImpl(
      const CosineEmbeddingLossOptions& options_ = {});

> margin is printed to the stream in the C++ implementation while it is not printed in the Python implementation

Thanks a lot for catching the issue! Yes, I think the pretty print is not consistent right now, and we are thinking about whether we should print the full set of options or strictly follow the Python version (which does not print the full set of options in all cases). For now, I think we can print torch::nn::Flatten() to match the Python version.
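
For illustration only, a minimal pretty_print override along those lines could look like this, assuming the FlattenImpl declaration sketched earlier:

#include <ostream>

// Sketch: print the module the way Python's repr does, without listing
// start_dim/end_dim.
void FlattenImpl::pretty_print(std::ostream& stream) const {
  stream << "torch::nn::Flatten()";
}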

@yf225 (Contributor) left a comment

@mrsalehi Thanks so much for the awesome work! :D

@facebook-github-bot (Contributor) left a comment

@yf225 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@yf225 merged this pull request in dfe7b25.
