
Conversation

@saumishr (Contributor) commented Feb 24, 2025

Summary:
DCP metadata collectives become prohibitively expensive as job scale grows. This PR introduces rank-local checkpointing, which saves and loads checkpoints without any collective communication. The current trade-off is the loss of deduplication and re-sharding; support for both will be added in follow-up work.
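To illustrate the idea (this is a conceptual sketch, not the PR's actual API): in rank-local checkpointing each rank writes and reads only its own shard plus per-rank metadata, so neither the save nor the load path needs a cross-rank gather or broadcast. The file names, metadata layout, and helper functions below are hypothetical.

```python
# Conceptual sketch of rank-local checkpointing (hypothetical helpers, not DCP's API).
# Each rank touches only its own files, so no collective is needed on save or load.
import json
import os
import tempfile


def save_rank_local(state_dict, checkpoint_dir, rank):
    """Write this rank's shard and metadata; no communication with other ranks."""
    os.makedirs(checkpoint_dir, exist_ok=True)
    shard_path = os.path.join(checkpoint_dir, f"__{rank}_0.json")
    with open(shard_path, "w") as f:
        json.dump(state_dict, f)
    # Per-rank metadata replaces the global metadata normally built via collectives.
    meta_path = os.path.join(checkpoint_dir, f".metadata.{rank}")
    with open(meta_path, "w") as f:
        json.dump({"keys": sorted(state_dict)}, f)


def load_rank_local(checkpoint_dir, rank):
    """Read back only this rank's shard; assumes the same sharding as at save time."""
    shard_path = os.path.join(checkpoint_dir, f"__{rank}_0.json")
    with open(shard_path) as f:
        return json.load(f)


if __name__ == "__main__":
    ckpt = tempfile.mkdtemp()
    # Simulate two ranks; in a real job each process handles only its own rank.
    save_rank_local({"w": [1.0, 2.0]}, ckpt, rank=0)
    save_rank_local({"w": [3.0, 4.0]}, ckpt, rank=1)
    assert load_rank_local(ckpt, rank=0) == {"w": [1.0, 2.0]}
    assert load_rank_local(ckpt, rank=1) == {"w": [3.0, 4.0]}
```

Because no global metadata is assembled, duplicate tensors cannot be deduplicated across ranks and a load must use the same sharding as the save, which is exactly the trade-off the summary describes.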

Differential Revision: D70112642

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta @LucasLLC @pradeepfn @kwen2501 @c-p-i-o @MeetVadakkanchery @mhorowitz @ekr0

@pytorch-bot (bot) commented Feb 24, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/147758

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 147fb0e with merge base 8eee08d:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot added the module: distributed_checkpoint and oncall: distributed labels Feb 24, 2025
@meetv18 added the oncall: distributed checkpointing label Feb 25, 2025
gkroiz added a commit to gkroiz/pytorch that referenced this pull request Mar 9, 2025
@saumishr saumishr force-pushed the export-D70112642 branch 2 times, most recently from 36ee9c7 to eb8d1a7 Compare April 2, 2025 13:56
@saumishr saumishr force-pushed the export-D70112642 branch 2 times, most recently from 78dcad0 to d3c31ff Compare April 6, 2025 17:56
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D70112642

@pytorch pytorch deleted a comment from facebook-github-bot Apr 6, 2025
saumishr added a commit to saumishr/tnt that referenced this pull request Aug 13, 2025
Summary:

X-link: pytorch/pytorch#147758


Context:
DCP metadata collectives become prohibitively expensive as job scale grows. This PR introduces rank-local checkpointing, which saves and loads checkpoints without any collective communication. The current trade-off is the loss of deduplication and re-sharding; support for both will be added in follow-up work.

Reviewed By: meetv18

Differential Revision: D70112642
saumishr added a commit to saumishr/pytorch that referenced this pull request Aug 13, 2025
…ch#147758)

Summary:
X-link: meta-pytorch/tnt#991



Context:
DCP metadata collectives become prohibitively expensive as job scale grows. This PR introduces rank-local checkpointing, which saves and loads checkpoints without any collective communication. The current trade-off is the loss of deduplication and re-sharding; support for both will be added in follow-up work.

Test Plan:
E2E UTs

Save and load test with internal DCP components: https://www.internalfb.com/mlhub/pipelines/runs/mast/torchx-textray-pretrain_mlm-lv5d7qcfmnqzkd

Save and load test with OSS DCP components: https://www.internalfb.com/mlhub/pipelines/runs/mast/torchx-textray-pretrain_mlm-z1vz46vkkgtcld
https://www.internalfb.com/mlhub/pipelines/runs/mast/torchx-textray-pretrain_mlm-njvvbn07rv5ckd

Reviewed By: meetv18

Differential Revision: D70112642
@saumishr (Contributor, Author)

@pytorchmergebot merge

@pytorchmergebot (Collaborator)

Merge failed

Reason: This PR has internal changes and must be landed via Phabricator. Please try re-importing/re-exporting the PR.

Raised by a workflow job (details available to the Dev Infra team).

facebook-github-bot pushed a commit to meta-pytorch/tnt that referenced this pull request Aug 13, 2025
Summary:
Pull Request resolved: #991

X-link: pytorch/pytorch#147758

Context:
DCP metadata collectives become prohibitively expensive as job scale grows. This PR introduces rank-local checkpointing, which saves and loads checkpoints without any collective communication. The current trade-off is the loss of deduplication and re-sharding; support for both will be added in follow-up work.

Reviewed By: meetv18

Differential Revision: D70112642

fbshipit-source-id: 5558a1d2440e539f87a9b7b6295b4199fb4b448a
@facebook-github-bot (Contributor)

@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).


chuanhaozhuge pushed a commit that referenced this pull request Aug 14, 2025
Summary:
DCP metadata collectives become prohibitively expensive as job scale grows. This PR introduces rank-local checkpointing, which saves and loads checkpoints without any collective communication. The current trade-off is the loss of deduplication and re-sharding; support for both will be added in follow-up work.

Differential Revision: D70112642

Pull Request resolved: #147758
Approved by: https://github.com/meetv18
can-gaa-hou pushed a commit to can-gaa-hou/pytorch that referenced this pull request Aug 22, 2025
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025

Labels

ciflow/h100-distributed, ciflow/trunk, fb-exported, Merged, oncall: distributed checkpointing, oncall: distributed, release notes: distributed (checkpoint)
