
Conversation

@bowangbj
Contributor

@bowangbj bowangbj commented Aug 26, 2021

Stack from ghstack:

Summary:
Use __torch_function__ to extend torch.nn.init.uniform_.
Initialization is done in SPMD fashion. Ideally we would aggregate the sharded tensors into a global tensor, initialize it, and reshard; running the init SPMD is fine because uniform_ draws are i.i.d. (independent and identically distributed).
Also enable the unit test in test_linear.py for OSS testing.
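
As a rough illustration of the __torch_function__ approach above, here is a minimal, hedged sketch (not the actual ShardedTensor implementation in torch/distributed/_sharded_tensor): a toy sharded container intercepts torch.nn.init.uniform_ and fills each of its local shards independently, which is exactly why the i.i.d. property makes the SPMD shortcut safe. It assumes a PyTorch build in which torch.nn.init.uniform_ participates in __torch_function__ dispatch, which is what this change enables.

```python
import torch

class ToyShardedTensor:
    """Toy stand-in for ShardedTensor: it only holds this rank's local shard tensors."""

    def __init__(self, local_shards):
        self._local_shards = list(local_shards)

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is torch.nn.init.uniform_:
            st, *rest = args
            # SPMD-style init: every rank fills only the shards it owns. Because
            # uniform_ draws are i.i.d., this matches (in distribution) building
            # the global tensor, initializing it, and resharding.
            for shard in st._local_shards:
                torch.nn.init.uniform_(shard, *rest, **kwargs)
            return st
        return NotImplemented

# Usage: uniform_ sees an argument that overrides __torch_function__ and dispatches to it.
st = ToyShardedTensor([torch.empty(2, 4), torch.empty(3, 4)])
torch.nn.init.uniform_(st, a=0.0, b=1.0)
```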

Test Plan:

a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_linear.py --v (before this change, this command was a no-op)

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: D30563017

cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang

… to mimic [torch.nn.init.normal_, uniform_, kaiming_uniform_]

Summary:

Note that the _sharded_tensor module is a temporary location.
Ideally we want something like torch.nn.init.{func}(ShardedTensor, ...) to keep the UX consistent with Tensor.

To support that, we need either:
a) update torch/nn/init.py with normal_(ShardedTensor, ...), uniform_(ShardedTensor, ...), and kaiming_uniform_(ShardedTensor, ...), or
b) add the torch.nn.init.{funcs} to the __torch_function__ dispatchers (currently __torch_function__ does not handle these functions); a rough sketch of this option follows.
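
To make option (b) concrete, here is a hedged sketch of registering init functions into a dispatch table; the names (_SHARDED_INIT_OPS, _register_init_op) are illustrative and not the actual API introduced by this PR.

```python
import torch

# Illustrative dispatch table: maps a torch.nn.init function to a handler that
# applies it shard-by-shard. A ShardedTensor.__torch_function__ implementation
# could consult this table before falling back to the default behavior.
_SHARDED_INIT_OPS = {}

def _register_init_op(init_fn):
    def decorator(handler):
        _SHARDED_INIT_OPS[init_fn] = handler
        return handler
    return decorator

@_register_init_op(torch.nn.init.uniform_)
def _uniform_(local_shard_tensors, a=0.0, b=1.0):
    # Each rank initializes only the shard tensors it owns (SPMD).
    for t in local_shard_tensors:
        torch.nn.init.uniform_(t, a=a, b=b)

# Inside a hypothetical ShardedTensor.__torch_function__:
#     handler = _SHARDED_INIT_OPS.get(func)
#     if handler is not None:
#         return handler(local_shard_tensors, *args[1:], **kwargs)
```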

Test Plan:

(pytorch) ... $ python test/distributed/_sharded_tensor/test_sharded_tensor.py TestShardedTensorNNInit  --v

Reviewers:

Subscribers:

Tasks:

Tags:

[ghstack-poisoned]
@facebook-github-bot
Contributor

facebook-github-bot commented Aug 26, 2021


💊 CI failures summary and remediations

As of commit a570b13 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@bowangbj
Contributor Author

@bowangbj has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@bowangbj changed the title from "Introduce _sharded_tensor.[normal_, uniform_, kaiming_uniform_] utils to mimic [torch.nn.init.normal_, uniform_, kaiming_uniform_]" to "Introduce _sharded_tensor.[normal_, uniform_, kaiming_uniform_] utils to mimic torch.nn.init.[normal_, uniform_, kaiming_uniform_]" on Aug 26, 2021
…ing_uniform_] utils to mimic torch.nn.init.[normal_, uniform_, kaiming_uniform_]"

Summary:

Note that the _sharded_tensor module is a temporary location.
Ideally we want something like torch.nn.init.{func}(ShardedTensor, ...) to keep the UX consistent with Tensor.

To support that, we need either:
a) update torch/nn/init.py with normal_(ShardedTensor, ...), uniform_(ShardedTensor, ...), and kaiming_uniform_(ShardedTensor, ...), or
b) add the torch.nn.init.{funcs} to the __torch_function__ dispatchers (currently __torch_function__ does not handle these functions)

Test Plan:

(pytorch) ... $ python test/distributed/_sharded_tensor/test_sharded_tensor.py TestShardedTensorNNInit  --v

Reviewers:

Subscribers:

Tasks:

Tags:

tmp

Differential Revision: [D30563017](https://our.internmc.facebook.com/intern/diff/D30563017)

[ghstack-poisoned]
…rm_, kaiming_uniform_] utils to mimic torch.nn.init.[normal_, uniform_, kaiming_uniform_]"

Summary:

Note that the _sharded_tensor module is a temporary location.
Ideally we want something like torch.nn.init.{func}(ShardedTensor, ...) to keep the UX consistent with Tensor.

To support that, we need either:
a) update torch/nn/init.py with normal_(ShardedTensor, ...), uniform_(ShardedTensor, ...), and kaiming_uniform_(ShardedTensor, ...), or
b) add the torch.nn.init.{funcs} to the __torch_function__ dispatchers (currently __torch_function__ does not handle these functions)

Test Plan:

(pytorch) ... $ python test/distributed/_sharded_tensor/test_sharded_tensor.py TestShardedTensorNNInit  --v

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D30563017](https://our.internmc.facebook.com/intern/diff/D30563017)

[ghstack-poisoned]
bowangbj added a commit that referenced this pull request Aug 26, 2021
… to mimic torch.nn.init.[normal_, uniform_, kaiming_uniform_]

Summary:

Note that the _sharded_tensor module is a temporary location.
Ideally we want something like torch.nn.init.{func}(ShardedTensor, ...) to keep the UX consistent with Tensor.

To support that, we need either:
a) update torch/nn/init.py with normal_(ShardedTensor, ...), uniform_(ShardedTensor, ...), and kaiming_uniform_(ShardedTensor, ...), or
b) add the torch.nn.init.{funcs} to the __torch_function__ dispatchers (currently __torch_function__ does not handle these functions)

Test Plan:

(pytorch) ... $ python test/distributed/_sharded_tensor/test_sharded_tensor.py TestShardedTensorNNInit  --v

Reviewers:

Subscribers:

Tasks:

Tags:

ghstack-source-id: c7712e4
Pull Request resolved: #63997
@bowangbj
Contributor Author

@bowangbj has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@pytorch-probot

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/f18df46844bb6d5fd3d5ac8c3f0206e12bcb95ec/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-dynamic ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-py3-clang5-mobile-code-analysis ciflow/all, ciflow/linux, ciflow/mobile 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
puretorch-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

1 similar comment

@bowangbj
Contributor Author

@bowangbj has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@bowangbj
Contributor Author

Thanks Wanchao and Pritam, resolved all the comments, ready to submit. Will follow up on the args/kwargs issue in a follow-up PR.

Summary:
Use __torch_function__ to extend torch.nn.init.uniform_.
Initialization is done in SPMD fashion. Ideally we would aggregate the sharded tensors into a global tensor, initialize it, and reshard; running the init SPMD is fine because uniform_ draws are i.i.d. (independent and identically distributed).
Also enable the unit test in test_linear.py for OSS testing.

Test Plan:

a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit  --v
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_linear.py --v (before this change, this command was a no-op)

or b) Manual run:  Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D30563017](https://our.internmc.facebook.com/intern/diff/D30563017)

[ghstack-poisoned]
@pytorch-probot

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/a570b13a4e887244cec236193fe3252ed8fc5cae/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-dynamic ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-py3-clang5-mobile-code-analysis ciflow/all, ciflow/linux, ciflow/mobile 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
puretorch-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

bowangbj added a commit that referenced this pull request Oct 21, 2021
Summary:
Use __torch_function__ to extend torch.nn.init.uniform_.
Initialization is done in SPMD fashion. Ideally we would aggregate the sharded tensors into a global tensor, initialize it, and reshard; running the init SPMD is fine because uniform_ draws are i.i.d. (independent and identically distributed).
Also enable the unit test in test_linear.py for OSS testing.

Test Plan:

a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit  --v
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_linear.py --v (before this change, this command was a no-op)

or b) Manual run:  Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#

Reviewers:

Subscribers:

Tasks:

Tags:

ghstack-source-id: 99e45f7
Pull Request resolved: #63997

@bowangbj
Contributor Author

@bowangbj has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

This pull request has been merged in b6df043.

bowangbj added a commit that referenced this pull request Oct 21, 2021
…hardedTensor

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.
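
A hedged sketch of the same shard-wise pattern extended to these two ops (plain tensors stand in for the local shards; the real ShardedTensor ops may need to derive fan-in/fan-out for kaiming_uniform_ from the global shape rather than the shard shape, a detail this sketch ignores):

```python
import torch

def sharded_normal_(local_shard_tensors, mean=0.0, std=1.0):
    # normal_ draws are i.i.d., so per-shard init matches global init in distribution.
    for t in local_shard_tensors:
        torch.nn.init.normal_(t, mean=mean, std=std)

def sharded_kaiming_uniform_(local_shard_tensors, a=0, mode="fan_in", nonlinearity="leaky_relu"):
    # Caveat: kaiming_uniform_ computes its bound from the tensor's fan, which here
    # reflects the local shard shape rather than the global shape.
    for t in local_shard_tensors:
        torch.nn.init.kaiming_uniform_(t, a=a, mode=mode, nonlinearity=nonlinearity)
```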

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

[ghstack-poisoned]
bowangbj added a commit that referenced this pull request Oct 21, 2021
…kaiming_uniform_ ops to ShardedTensor"

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D31845654](https://our.internmc.facebook.com/intern/diff/D31845654)

[ghstack-poisoned]
bowangbj added a commit that referenced this pull request Oct 21, 2021
… ops to ShardedTensor"

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D31845654](https://our.internmc.facebook.com/intern/diff/D31845654)

[ghstack-poisoned]
bowangbj added a commit that referenced this pull request Oct 21, 2021
…t.kaiming_uniform_ ops to ShardedTensor"

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D31845654](https://our.internmc.facebook.com/intern/diff/D31845654)

[ghstack-poisoned]
bowangbj added a commit that referenced this pull request Oct 21, 2021
…m_ ops to ShardedTensor"

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D31845654](https://our.internmc.facebook.com/intern/diff/D31845654)

[ghstack-poisoned]
bowangbj added a commit that referenced this pull request Oct 21, 2021
…hardedTensor

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

ghstack-source-id: 012f671
Pull Request resolved: #67057
bowangbj added a commit that referenced this pull request Oct 22, 2021
…torch.nn.init.kaiming_uniform_ ops to ShardedTensor"

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D31845654](https://our.internmc.facebook.com/intern/diff/D31845654)

[ghstack-poisoned]
bowangbj added a commit that referenced this pull request Oct 22, 2021
…iming_uniform_ ops to ShardedTensor"

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D31845654](https://our.internmc.facebook.com/intern/diff/D31845654)

[ghstack-poisoned]
bowangbj added a commit that referenced this pull request Oct 22, 2021
…hardedTensor

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

ghstack-source-id: 54ce4ba
Pull Request resolved: #67057
bowangbj added a commit that referenced this pull request Oct 22, 2021
… and torch.nn.init.kaiming_uniform_ ops to ShardedTensor"

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D31845654](https://our.internmc.facebook.com/intern/diff/D31845654)

[ghstack-poisoned]
bowangbj added a commit that referenced this pull request Oct 22, 2021
…it.kaiming_uniform_ ops to ShardedTensor"

Summary:
Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D31845654](https://our.internmc.facebook.com/intern/diff/D31845654)

[ghstack-poisoned]
facebook-github-bot pushed a commit that referenced this pull request Oct 26, 2021
…hardedTensor (#67057)

Summary:
Pull Request resolved: #67057

Extend ShardedTensor with the torch.nn.init.normal_ and torch.nn.init.kaiming_uniform_ ops.
Follow-up to #63997.

Test Plan:
a) Unit Test
(pytorch) ... $ python test/distributed/_sharded_tensor/ops/test_init.py TestShardedTensorNNInit --v

or b) Manual run: Instruction here: https://docs.google.com/document/d/1_m1Hdo5w51-hhPlZ_F8Y6PIWrN7UgJZqiSpARYvhsaE/edit#
s/uniform_/normal_ or kaiming_uniform_

Imported from OSS

Reviewed By: pritamdamania87

Differential Revision: D31845654

fbshipit-source-id: e7aedc0972539da59f7b84bbbf617caf6b206d52

Labels

cla signed · Merged · oncall: distributed (add this issue/PR to the distributed oncall triage queue)
