Conversation

@Aidyn-A (Collaborator) commented Aug 30, 2022

There are conflicts between `torch.clear_autocast_cache()` and `cudaMallocAsync` from #82682.
Moreover, the use of autocast caching is not reasonable during training, which is the main target of `make_graphed_callables`.
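To make the interaction concrete, here is a minimal sketch (the module, shapes, and dtype are illustrative assumptions, not taken from this PR's diff) of capturing a graphed callable with the autocast cast cache disabled, so no cached casts need to be freed by `torch.clear_autocast_cache()` behind `cudaMallocAsync`'s back:

```python
import torch

# Illustrative module and input; names and shapes are assumptions,
# not taken from the PR.
model = torch.nn.Linear(64, 64).cuda()
x = torch.randn(8, 64, device="cuda")

# Capture with the autocast cast cache disabled so that no cached
# half-precision weight copies outlive the capture. This removes the
# need for torch.clear_autocast_cache(), whose frees conflict with the
# stream-ordered cudaMallocAsync allocator.
with torch.autocast(device_type="cuda", dtype=torch.float16, cache_enabled=False):
    graphed_model = torch.cuda.make_graphed_callables(model, (x,))
    out = graphed_model(x)

out.float().sum().backward()
```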

cc @eqy @ptrblck

@facebook-github-bot (Contributor) commented Aug 30, 2022

✅ No Failures (0 Pending) as of commit d8b7efd (more details on the Dr. CI page).

💚 💚 Looks good so far! There are no failures yet. 💚 💚

This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.

@soulitzer requested a review from ngimel September 1, 2022 19:30
@soulitzer added the `triaged` label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Sep 1, 2022
@ngimel (Collaborator) commented Sep 1, 2022

Autocast caching has been added for weight reuse in training, but I'm not against removing it.
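For context, a toy illustration of the cache in question (names are illustrative assumptions): under autocast, the low-precision copy of a weight produced for its first use is reused for subsequent uses in the same region, and `cache_enabled` is the knob this PR turns off for graphed callables.

```python
import torch

linear = torch.nn.Linear(64, 64).cuda()
x = torch.randn(8, 64, device="cuda")

# With cache_enabled=True (the default), the fp16 copy of linear.weight
# created for the first call is cached and reused by the second call
# instead of being re-cast from fp32.
with torch.autocast(device_type="cuda", dtype=torch.float16, cache_enabled=True):
    y = linear(x) + linear(x)

# The cached casts live until the cache is cleared (e.g. by
# torch.clear_autocast_cache()), which is the lifetime this PR avoids
# entangling with CUDA graph capture.
```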

@Aidyn-A (Collaborator, Author) commented Sep 1, 2022

Yep, we decided to remove caching because we realized its potential danger in make_graphed_callables.
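The danger mentioned here can be made concrete with a guard-style sketch (hypothetical wording, not necessarily the PR's exact diff): refuse graph capture while the autocast cast cache is active.

```python
import torch

# Hypothetical guard sketch: bail out of graph capture when the
# autocast cast cache is enabled, since cached casts can outlive the
# capture and later be freed out from under cudaMallocAsync.
if torch.is_autocast_enabled() and torch.is_autocast_cache_enabled():
    raise RuntimeError(
        "make_graphed_callables does not support autocast caching; "
        "please set cache_enabled=False."
    )
```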

@pytorchbot merge

@pytorchmergebot (Collaborator) commented

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered without a flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@github-actions bot (Contributor) commented Sep 1, 2022

Hey @Aidyn-A.
You've committed this PR, but it does not have both a 'release notes: ...' label and a 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc.) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc.). The lists of valid labels can be found here for 'release notes: ...' and here for 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Sep 6, 2022
…84289)

Summary:
There are conflicts between `torch.clear_autocast_cache()` and `cudaMallocAsync` from #82682.
Moreover, the use of autocast caching is not reasonable during training, which is the main target of `make_graphed_callables`.

cc eqy ptrblck

Pull Request resolved: #84289
Approved by: https://github.com/ngimel

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/ce1b727e774c75f8e31b28ff5915851385c70dcf

Reviewed By: mehtanirav, izaitsevfb

Differential Revision: D39277326

fbshipit-source-id: aaa15276397f082bdc8d8eab08b653eeeb7e8fb7

Labels

cla signed · Merged · open source · triaged
