
Conversation

@anjali411 (Contributor)

No description provided.

anjali411 and others added 30 commits September 16, 2019 17:35
…ruct a module which has no default constructor

ASSERT_TRUE(torch::allclose(output, expected));
}
Contributor:

Thanks for fixing the extra spaces :D

if (options.elementwise_affine()) {
torch::nn::init::ones_(weight);
torch::nn::init::zeros_(bias);
}
Contributor:

This code block

  if (options.elementwise_affine()) {
    torch::nn::init::ones_(weight);
    torch::nn::init::zeros_(bias);
  }

should be at the end of the reset() function, because the expectation is that all initialization logic lives in reset() :D
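
A rough sketch of what that could look like, assuming reset() is also where weight and bias get created and registered (the register_parameter calls and the use of options.normalized_shape() for the shape are illustrative, not the final code):

  void LayerNormImpl::reset() {
    if (options.elementwise_affine()) {
      // Learnable affine parameters, shaped like the normalized dimensions.
      weight = register_parameter("weight", torch::empty(options.normalized_shape()));
      bias = register_parameter("bias", torch::empty(options.normalized_shape()));
    } else {
      // Undefined tensors play the role of Python's None.
      weight = register_parameter("weight", torch::Tensor(), /*requires_grad=*/false);
      bias = register_parameter("bias", torch::Tensor(), /*requires_grad=*/false);
    }
    // Initialization logic comes last, per the comment above.
    if (options.elementwise_affine()) {
      torch::nn::init::ones_(weight);
      torch::nn::init::zeros_(bias);
    }
  }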

anjali411 (Contributor, author):

alright

yf225 (Contributor) commented Oct 18, 2019:

@anjali411 This PR has some conflicts with master, and it also contains changes to third_party/fbgemm which should be removed. We can work on those two items tomorrow if you need any help from me :)

yf225 (Contributor) commented Oct 18, 2019:

@anjali411 The code conflict and submodule issue should be fixed :D

inline Tensor layer_norm(const Tensor& input,
    const LayerNormOptions& options,
    const Tensor& weight,
    const Tensor& bias) {
Contributor:

In the Python version, weight and bias can take None as their default values; in the C++ version I think we should use Tensor() as the default value to match the Python version :D
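
For example, something along these lines (a sketch only — the eps() and normalized_shape() accessors are assumed from the options class, and the final signature may differ):

  inline Tensor layer_norm(const Tensor& input,
      const LayerNormOptions& options,
      const Tensor& weight = Tensor(),  // undefined tensor stands in for Python's None
      const Tensor& bias = Tensor()) {
    return torch::layer_norm(input, options.normalized_shape(),
                             weight, bias, options.eps());
  }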

const auto input = torch::randn({2, 2});
const auto weight = torch::randn({2, 2});
const auto bias = torch::randn({2, 2});
auto y = F::layer_norm(input, LayerNormOptions({2, 2}).eps(2e-5), weight, bias);
Contributor:

In order to test the "no weights and no bias" case for F::layer_norm, let's probably change this to

Suggested change:

  - auto y = F::layer_norm(input, LayerNormOptions({2, 2}).eps(2e-5), weight, bias);
  + auto y = F::layer_norm(input, LayerNormOptions({2, 2}).eps(2e-5));

and remove weight and bias for this test.

const auto weight = torch::randn({2, 2});
const auto bias = torch::randn({2, 2});
auto y = F::layer_norm(input, LayerNormOptions({2, 2}).eps(2e-5), weight, bias);
auto y_exp = torch::layer_norm(input, {2, 2}, weight, bias, 2e-5);
Contributor:

and we can pass Tensor() as weight and bias here.
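
Roughly (illustrative):

  auto y = F::layer_norm(input, LayerNormOptions({2, 2}).eps(2e-5));
  auto y_exp = torch::layer_norm(input, {2, 2}, torch::Tensor(), torch::Tensor(), 2e-5);
  ASSERT_TRUE(torch::allclose(y, y_exp));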

yf225 (Contributor) left a comment:

@anjali411 Thanks a lot for the update! I think we are almost there :D

namespace torch {
namespace nn {

LayerNormImpl::LayerNormImpl(const LayerNormOptions& options_) : options(options_) {
Contributor:

Based on clang-tidy:

{
    path: 'torch/csrc/api/src/nn/modules/normalization.cpp',
    start_line: 15,
    end_line: 15,
    start_column: 30,
    end_column: 30,
    annotation_level: 'failure',
    message: '[[modernize-pass-by-value]] warning: pass by value and use std::move'
  },

We should probably add // NOLINT(modernize-pass-by-value) at the end of this line.
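
That is, the suppression would sit at the end of the flagged constructor line, something like:

  LayerNormImpl::LayerNormImpl(const LayerNormOptions& options_) : options(options_) { // NOLINT(modernize-pass-by-value)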

namespace torch {
namespace nn {

LayerNormOptions::LayerNormOptions(std::vector<int64_t> normalized_shape) : normalized_shape_(normalized_shape) {}
Contributor:

clang-tidy complains:

{
    path: 'torch/csrc/api/src/nn/options/normalization.cpp',
    start_line: 6,
    end_line: 6,
    start_column: 36,
    end_column: 36,
    annotation_level: 'failure',
    message: '[[modernize-pass-by-value]] warning: pass by value and use std::move'
  }

For this one I think we can do normalized_shape_(std::move(normalized_shape)).
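
That is, the constructor becomes:

  LayerNormOptions::LayerNormOptions(std::vector<int64_t> normalized_shape)
      : normalized_shape_(std::move(normalized_shape)) {}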

facebook-github-bot (Contributor) left a comment:

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

yf225 added the "module: cpp" (Related to C++ API) label on Oct 21, 2019.
facebook-github-bot (Contributor):

@anjali411 merged this pull request in 7b59174.

facebook-github-bot pushed a commit that referenced this pull request Oct 23, 2019
Summary:
Now that we have merged #28032 (thanks anjali411!)
Pull Request resolved: #28484

Differential Revision: D18085844

Pulled By: yf225

fbshipit-source-id: 4be972687addea8f57f48dfe9707837196593062
thiagocrepaldi pushed a commit to thiagocrepaldi/pytorch that referenced this pull request Feb 4, 2020
Summary: Pull Request resolved: pytorch#28032

Differential Revision: D18047371

Pulled By: anjali411

fbshipit-source-id: fb61aea52d6622a67ec1d84950e17e85686461ae
thiagocrepaldi pushed a commit to thiagocrepaldi/pytorch that referenced this pull request Feb 4, 2020
Summary:
Now that we have merged pytorch#28032 (thanks anjali411!)
Pull Request resolved: pytorch#28484

Differential Revision: D18085844

Pulled By: yf225

fbshipit-source-id: 4be972687addea8f57f48dfe9707837196593062