
Conversation

@ngimel (Collaborator) commented Sep 24, 2025

@pytorch-bot bot commented Sep 24, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/163712

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 28e7772 with merge base 8e6b0c7:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot bot added the oncall: distributed and release notes: distributed (c10d) labels Sep 24, 2025
@ezyang (Contributor) commented Sep 24, 2025

I wonder if we have to check the other collectives

@ngimel (Collaborator, Author) commented Sep 24, 2025

> I wonder if we have to check the other collectives

Other collectives that don't do a local copy are fine. Collectives whose inputs are channels-last on some ranks and contiguous on others will produce silently wrong results, but we have no way to check for that.
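
To illustrate (a minimal single-process C++ sketch, not this PR's code; the tensor shape and names are made up): two logically identical tensors whose raw storage differs because one is contiguous and the other channels-last. A collective that ships raw buffers would interleave the two orders and silently corrupt the gathered result.

    #include <ATen/ATen.h>
    #include <iostream>

    int main() {
      auto contig = at::arange(24, at::kFloat).reshape({1, 2, 3, 4});
      // Same logical values, but stored in channels-last order.
      auto chlast = contig.clone(at::MemoryFormat::ChannelsLast);
      std::cout << "logically equal: " << contig.equal(chlast) << '\n';  // 1

      // View each storage in raw memory order -- this is what a byte-wise
      // collective actually exchanges between ranks.
      auto raw_contig =
          at::from_blob(contig.data_ptr(), {contig.numel()}, contig.options());
      auto raw_chlast =
          at::from_blob(chlast.data_ptr(), {chlast.numel()}, chlast.options());
      std::cout << "buffers equal: " << raw_contig.equal(raw_chlast) << '\n';  // 0
    }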

@ngimel (Collaborator, Author) commented Sep 24, 2025

@pytorchbot merge

@pytorch-bot bot added the ciflow/trunk label Sep 24, 2025
@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot (Collaborator)

Merge failed

Reason: 1 job has failed: trunk / inductor-build / build


Review thread on the changed code:

      return at::empty(sizes, t.options());
    } else {
      // memory-dense, but not necessarily contiguous tensor
      std::vector<int64_t> strides{t.numel()};
Collaborator:

This always allocates it at the wrong length; you might want to explicitly reserve based on the final length, emplace_back the first element, then do the insertion. That should prevent the std::vector from being reallocated.
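
A minimal sketch of the suggested pattern (hypothetical helper, not the PR's code; it assumes the remaining stride values are already collected in a vector): reserving the final capacity up front means the later insert cannot reallocate the vector's storage.

    #include <cstdint>
    #include <vector>

    std::vector<int64_t> build_strides(int64_t numel,
                                       const std::vector<int64_t>& rest) {
      std::vector<int64_t> strides;
      strides.reserve(1 + rest.size());  // final size is known up front
      strides.emplace_back(numel);       // first element, e.g. t.numel()
      strides.insert(strides.end(), rest.begin(), rest.end());
      return strides;
    }

The same pattern applies to the sizes vector in the next hunk.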

Review thread on the changed code:

    }
    auto& t = tensors[0];
    at::DeviceGuard gpuGuard(t.device());
    std::vector<int64_t> sizes{static_cast<int64_t>(tensors.size())};
Collaborator:

Same here: reserve, emplace_back the first value, then do the insertion (the sketch above applies equally).

Collaborator Author:

pre-existing condition

@ngimel (Collaborator, Author) commented Sep 24, 2025

@pytorchbot merge -i

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged while ignoring the following 2 checks: trunk / inductor-build / build, trunk / linux-jammy-rocm-py3.10 / test (default, 1, 2, linux.rocm.gpu.gfx942.1)

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@ngimel (Collaborator, Author) commented Sep 26, 2025

@pytorchbot cherry-pick --onto release/2.9 -c critical --fixes #163483

@pytorchbot (Collaborator)

Cherry picking #163712

The cherry pick PR is at #163987 and it is linked with issue #163483. The following tracker issues are updated:


Camyll pushed a commit that referenced this pull request Sep 26, 2025
facebook-github-bot pushed a commit that referenced this pull request Oct 24, 2025
…g results (#166181)

Summary: Per title

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

Reviewed By: kwen2501

Differential Revision: D85457960

Pulled By: ngimel
github-actions bot deleted the ngimel/allgather_format branch October 27, 2025 02:19
pytorchbot pushed a commit that referenced this pull request Nov 1, 2025
atalman pushed a commit that referenced this pull request Nov 4, 2025
…tiguous (#166779)

Reverts #163712 and forces allgather/scatter inputs/outputs to be contiguous (#166181)

Per title

Pull Request resolved: #166181
Approved by: https://github.com/kwen2501

(cherry picked from commit 2efcf3c)

Co-authored-by: Natalia Gimelshein <[email protected]>
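
For context, a minimal sketch of the contiguity guard described by the follow-up (assumed helper name, not the actual code of #166181): hand a collective either the tensor itself, if it is already dense and row-major, or a contiguous copy.

    #include <ATen/ATen.h>

    // Returns a tensor whose raw buffer is safe for a byte-wise collective:
    // the input itself when already contiguous, otherwise a contiguous copy.
    at::Tensor ensure_contiguous_for_collective(const at::Tensor& t) {
      return t.is_contiguous() ? t : t.contiguous();
    }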

Labels

ciflow/trunk, Merged, oncall: distributed, release notes: distributed (c10d)

Development

Successfully merging this pull request may close these issues:

all_gather can change memory ordering of tensor
7 participants