Add private escape hatches to fall back to pre-swap tensors behavior #126984
Conversation
Tensor types that have been opted in to the `swap_tensors` path

The following table lists the tensor types that are opted in by default to the `swap_tensors` path in either `_apply` or `load_state_dict`. All other types are not opted in by default and will only use `swap_tensors` if `torch.__future__.set_swap_module_params_on_conversion(True)` is set.
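For example, opting in via the existing public API looks like this (illustrative usage, not code from this PR):

```python
import torch
import torch.nn as nn

# Globally opt modules in to the swap_tensors path for conversions.
torch.__future__.set_swap_module_params_on_conversion(True)

m = nn.Linear(4, 4)
# With the flag set, nn.Module._apply uses torch.utils.swap_tensors
# to exchange parameters in place instead of reassigning them.
m = m.to(torch.float64)
```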
This PR adds the escape hatches mentioned in the above table to `torch.__future__`:

- `_{get/set}_swap_overwrite_escape_hatch`: allows tensor types that were opted in to `swap_tensors` but previously assigned new params in `nn.Module._apply` (right now just `src`/`dest` on `xla`/`meta`) to fall back to that behavior.
- `_{get/set}_swap_load_state_dict_escape_hatch`: allows tensor types that were opted in to the `swap_tensors` path in `nn.Module.load_state_dict` (for now just …) to fall back to the old behavior. A sketch of how such flags might look follows this list.
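A minimal sketch of what these private flags might look like, modeled on the existing `torch.__future__` getter/setter pair; the names come from the PR description, but the bodies below are an assumption, not the PR's actual implementation:

```python
# Sketch: module-level booleans, as torch.__future__ does for
# get/set_swap_module_params_on_conversion. Hypothetical implementation.
_swap_overwrite_escape_hatch: bool = False
_swap_load_state_dict_escape_hatch: bool = False

def _set_swap_overwrite_escape_hatch(value: bool) -> None:
    # Fall back to overwriting params in nn.Module._apply instead of swapping.
    global _swap_overwrite_escape_hatch
    _swap_overwrite_escape_hatch = value

def _get_swap_overwrite_escape_hatch() -> bool:
    return _swap_overwrite_escape_hatch

def _set_swap_load_state_dict_escape_hatch(value: bool) -> None:
    # Fall back to the pre-swap copy behavior in nn.Module.load_state_dict.
    global _swap_load_state_dict_escape_hatch
    _swap_load_state_dict_escape_hatch = value

def _get_swap_load_state_dict_escape_hatch() -> bool:
    return _swap_load_state_dict_escape_hatch
```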
This preserves backward compatibility by providing an option to get the old behavior. In the future, we will need one more escape hatch for tensor types that used `tensor.data =` in `nn.Module._apply` (the two behaviors are illustrated in the sketch below).

Can we let these be private for now?
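For context, a rough illustration of the two behaviors being toggled between, using the public `torch.utils.swap_tensors` helper; this is illustrative only, not code from this PR:

```python
import torch
import torch.nn as nn

m = nn.Linear(2, 2)

# Pre-swap behavior: overwrite the data behind the existing Parameter.
# The Parameter object itself is reused; only the tensor it wraps changes.
m.weight.data = torch.randn(2, 2)

# swap_tensors behavior: exchange the two tensor objects wholesale
# (storage, metadata, Python attributes), so every reference to
# m.weight observes the swap.
torch.utils.swap_tensors(m.weight, nn.Parameter(torch.randn(2, 2)))
```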
Stack from ghstack (oldest at bottom):