
Conversation

@xmfan (Member) commented Oct 17, 2024

We need a way to unify compiled autograd (CA) checks across the context manager and torch.compile APIs. Setting this config is actually a no-op, but it unifies the state.

One edge case: a user applies torch.compile to regions that call .backward() but does not want CA to execute on them, while another thread is inside the context manager. The general solution would be to first make the state thread-local, then have FSDP create threads that carry over the compiled autograd state.
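
As an illustration only, here is a minimal sketch of the save/restore pattern described above, with the flag kept thread-local as suggested for the multi-threaded edge case. The names below (_ca_state, enable_compiled_autograd) are hypothetical stand-ins, not PyTorch APIs, and this is not the PR's implementation:

# Minimal sketch, not the PR's implementation. _ca_state and
# enable_compiled_autograd are illustrative names, not real torch APIs.
import contextlib
import threading

_ca_state = threading.local()  # each thread sees its own compiled-autograd flag

def compiled_autograd_enabled() -> bool:
    return getattr(_ca_state, "enabled", False)

@contextlib.contextmanager
def enable_compiled_autograd(compiler):
    # Save whatever was in effect before entering, mirroring the
    # prior/prior_config pattern in the diff snippet below.
    prior_enabled = compiled_autograd_enabled()
    prior_compiler = getattr(_ca_state, "compiler", None)
    _ca_state.enabled = True
    _ca_state.compiler = compiler
    try:
        yield
    finally:
        # Restore the saved state on exit so other threads and nested uses
        # are unaffected.
        _ca_state.enabled = prior_enabled
        _ca_state.compiler = prior_compiler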

Stack from ghstack (oldest at bottom):

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames @rec @yf225

@pytorch-bot bot commented Oct 17, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/138241

Note: Links to docs will display an error until the docs builds have been completed.

❌ 17 New Failures, 1 Unrelated Failure

As of commit 7250431 with merge base d531bd5:

NEW FAILURES - The following jobs have failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

if not prior:
    compiled_autograd_enabled = False
# Restore the dynamo config value and the autograd compiler saved on entry.
torch._dynamo.config.compiled_autograd = prior_config
torch._C._dynamo.compiled_autograd.set_autograd_compiler(prior)
A contributor commented:

maybe we can rebase on top of #138113 to avoid potential merge conflict?

yf225 added 2 commits October 17, 2024 13:34
@xmfan xmfan closed this Oct 18, 2024
@xmfan (Member, Author) commented Oct 18, 2024

nvm, this doesn't work for reentrant autograd calls
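
For context, "reentrant" here refers to autograd calls that re-enter the engine from inside a backward pass (for example, reentrant activation checkpointing). Below is a hedged, illustrative sketch of such a call; it is not code from this PR:

# Illustrative sketch of a reentrant autograd call: the backward() of a
# custom Function re-enters the autograd engine. Not code from this PR.
import torch

class Reentrant(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, grad_out):
        # Re-enter the autograd engine from inside a backward node,
        # similar to what reentrant activation checkpointing does.
        with torch.enable_grad():
            a = torch.ones(3, requires_grad=True)
            torch.autograd.grad((a * 3).sum(), a)
        return grad_out * 2

x = torch.randn(3, requires_grad=True)
Reentrant.apply(x).sum().backward()  # the outer backward triggers an inner one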

@github-actions github-actions bot deleted the gh/xmfan/115/head branch November 18, 2024 02:10