[FSDP2] Fix backward-compatible imports #142419
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/142419
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV - There is 1 currently active SEV. If your PR is affected, please view it below.
✅ You can merge normally! (3 Unrelated Failures) - As of commit 5aabc65 with merge base beeffe7:
BROKEN TRUNK - The following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
UNSTABLE - The following jobs failed but were likely due to flakiness present on trunk and have been marked as unstable.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@awgu has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
```
@@ -1,8 +1,3 @@
from torch.distributed.fsdp import (
```
Internal only: the previous approach meant that `from torch.distributed._composable.fsdp import fully_shard` was importing the submodule `fully_shard.py`, not the function `fully_shard`. For some reason, the resolution order is different from open source.
To fix this, we match the old import as closely as possible. Namely, we import the contents of `fully_shard.py` via `from .fully_shard import ...`. This should force that import to take precedence.
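For illustration, here is a minimal sketch of the shadowing pattern and of why importing from the colliding submodule fixes it. The package `pkg`, the toy `fully_shard` function, and `other_module` are hypothetical stand-ins for this example, not the actual torch source tree:

```python
# Hypothetical layout (illustration only):
#
#   pkg/
#       __init__.py
#       fully_shard.py   # submodule whose name collides with the function it defines

# --- pkg/fully_shard.py ---
def fully_shard(module):
    """Toy stand-in for the real fully_shard API."""
    return module

# --- pkg/__init__.py ---
# Fragile shim: re-exporting the function from some *other* module, e.g.
#     from other_module import fully_shard
# sets the package attribute `pkg.fully_shard` to the function, but if anything
# later runs `import pkg.fully_shard` for the first time, the import system
# loads the submodule and rebinds that same attribute to the module, so a
# subsequent `from pkg import fully_shard` yields the module, not the function.
#
# Robust shim (the approach described above): import the name from the colliding
# submodule itself. This first imports `pkg.fully_shard` (binding the submodule
# as an attribute of `pkg`), then rebinds `fully_shard` to the function, so the
# function takes precedence; later imports of the already-loaded submodule do
# not rebind the attribute.
from .fully_shard import fully_shard  # noqa: F401
```

A quick sanity check that the backward-compatible path resolves to the function rather than the module (a minimal check for illustration, not part of the PR):

```python
import types
from torch.distributed._composable.fsdp import fully_shard

assert callable(fully_shard)
assert not isinstance(fully_shard, types.ModuleType)
```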
I checked that the distributed unit tests still pass locally. I will have to ninja-land this from internal to ensure internal gets this fix ASAP. I will do so after I verify that lint finishes.
Both lintrunner jobs (clang, noclang) have passed. I am going to land.
@pytorchbot merge -i (Initiating merge automatically since Phabricator Diff has merged, merging with -i because oss signals were bypassed internally)
4 similar comments
@pytorchbot merge -f "fb bot is spamming again, landed internally"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use `-f` as a last resort and instead consider `-i/--ignore-current` to continue the merge ignoring current failures. Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
cc @H-Huang @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o
@diff-train-skip-merge
Differential Revision: D66990327