Conversation

@goldsborough (Contributor) commented Aug 30, 2018

Linting torch/csrc/ (non-recursive) and torch/csrc/autograd (non-recursive).

Fixed things like:

  • `typedef` vs `using`
  • Use `.empty()` instead of comparing with an empty string or using `.size() == 0`
  • Use range-based for loops instead of old-style loops (`modernize-*`)
  • Remove some `virtual` + `override`
  • Replace `stdint.h` with `cstdint`
  • Replace `return Type(x, y)` with `return {x, y}`
  • Use boolean values (`true`/`false`) instead of numbers (`1`/`0`)
  • More ... (see the sketch below)
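
A minimal before/after sketch of a few of these transformations (hypothetical code written for this summary, not lines taken from the diff):

#include <cstdint>  // instead of <stdint.h>
#include <string>
#include <vector>

// Before: typedef std::vector<std::string> NameList;
using NameList = std::vector<std::string>;

bool has_names(const NameList& names) {
  // Before: return names.size() == 0 ? false : true;
  return !names.empty();
}

int64_t total_length(const NameList& names) {
  int64_t total = 0;
  // Before: for (size_t i = 0; i < names.size(); ++i) total += names[i].size();
  for (const auto& name : names) {
    total += static_cast<int64_t>(name.size());
  }
  return total;
}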

@ezyang @apaszke @cpuhrsch

@goldsborough changed the title from "[codemod] Bag of clang tidy fixes" to "[codemod] Bag of clang tidy fixes for torch/csrc/ and torch/csrc/autograd" on Aug 30, 2018
@ezyang (Contributor) commented Aug 30, 2018

> Replace `return Type(x, y)` with `return {x, y}`

Why this one?

@ezyang (Contributor) left a comment

Sure why not.

@ezyang (Contributor) commented Aug 30, 2018

I'd approve the Phabricator diff but it's not imported yet.

@facebook-github-bot (Contributor) left a comment

goldsborough has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@goldsborough (Contributor, Author) commented

@ezyang the rationale for `return {x, y}` is for code like:

VeryLongToTypeAnnoyingName foo(...) {
  return VeryLongToTypeAnnoyingName(x, y);
}

The return type is already clear from the function signature, so it's neater to write

VeryLongToTypeAnnoyingName foo(...) {
  return {x, y};
}
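
As a quick follow-up illustration (hypothetical `IndexRange` struct, not code from this PR): the braced form copy-list-initializes the return value from the function's declared return type, so it behaves the same as the spelled-out constructor call as long as the selected constructor is not marked `explicit`.

#include <cstddef>

struct IndexRange {
  std::size_t begin;
  std::size_t end;
};

IndexRange make_range(std::size_t begin, std::size_t end) {
  // The target type is taken from the declared return type, so the
  // braces copy-list-initialize an IndexRange here.
  return {begin, end};
}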

@facebook-github-bot (Contributor) left a comment

goldsborough is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 6, 2018
Summary:
Linting `torch/csrc/` (non-recursive) and `torch/csrc/autograd` (non-recursive).

Fixed things like:
- `typedef` vs `using`
- Use `.empty()` instead of comparing with empty string/using `.size() == 0`
- Use range for loops instead of old style loops (`modernize-`)
- Remove some `virtual` + `override`
- Replace `stdint.h` with `cstdint`
- Replace `return Type(x, y)` with `return {x, y}`
- Use boolean values (`true`/`false`) instead of numbers (1/0)
- More ...

ezyang apaszke cpuhrsch
Pull Request resolved: pytorch/pytorch#11050

Differential Revision: D9597505

Pulled By: goldsborough

fbshipit-source-id: cb0fb4793ade885a8dbf4b10484487b84c64c7f2
petrex pushed a commit to petrex/pytorch that referenced this pull request Sep 6, 2018
* upstream/master: (26 commits)
  cudnn 7 upgrade with spatialBN fix (pytorch#11291)
  Ignore FuseGraph Call on Windows (pytorch#11015)
  defer resolution of mkl to a cmake wrapper library (pytorch#11298)
  Cleanup dependency of distributed flags (pytorch#11221)
  Move minimal wrapdim functionality to core, remove THTensor include i… (pytorch#11283)
  Change includes from ATen/Storage.h to ATen/core/Storage.h (pytorch#11217)
  Fix scalar tensor assert in fusion compiler (pytorch#10952)
  Add dead code elimination pass (pytorch#10101)
  Distributed Data Parallel CPU module for C10D (pytorch#11168)
  Back out "[pt1][tensor] Add strides to caffe2::Tensor"
  Fix conv gradient conversion (pytorch#11312)
  Bag of clang tidy fixes for torch/csrc/ and torch/csrc/autograd (pytorch#11050)
  Sparse tensor printing; add NotImplemented autograd fn (pytorch#10181)
  Add convertToCaffe2Proto to python API
  fix doc for functional.dropout* (pytorch#10417)
  typo fix Tranpose2D -> Transpose2D (pytorch#11281)
  Remove THFinalizer
  Forward declarations of needed curand functions (pytorch#10911)
  nomnigraph - simplify core graph API and test (pytorch#11256)
  Small fixes to cppdocs for sync script (pytorch#11300)
  ...
PenghuiCheng pushed a commit to PenghuiCheng/pytorch that referenced this pull request Sep 11, 2018
@ezyang added the merged label on Jun 26, 2019