Description
I propose that we define a thread-local flag that is false (the default) in PyTorch user code, and true when we have called into "PyTorch core" code (defined as the set of code we maintain in this codebase). The most obvious place this flag is flipped is when a user calls an operator we define. Some operators may then behave differently depending on whether or not we are in core code.
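A minimal sketch of what such a thread-local flag could look like, written as Python pseudocode for clarity; all names here (`in_pytorch_core`, `EnterCore`) are illustrative and not an actual PyTorch API:

```python
import threading

# Illustrative sketch only; none of these names are real PyTorch APIs.
_state = threading.local()

def in_pytorch_core() -> bool:
    """True while the current thread is executing "PyTorch core" code."""
    return getattr(_state, "in_core", False)

class EnterCore:
    """Guard flipped on entry to a core operator; restores the prior
    value on exit so nested core-to-core calls behave correctly."""
    def __enter__(self):
        self._prev = in_pytorch_core()
        _state.in_core = True
        return self

    def __exit__(self, *exc):
        _state.in_core = self._prev
        return False
```

In the real implementation this guard would live in C++ (a `thread_local` bool plus an RAII class), entered at the operator boundary.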
Here are some applications of the flag:
- [feature request] Global GPU Flag #7535 is a longstanding, frequently requested feature: a "global GPU flag" that causes all tensors to be allocated in CUDA by default. We have been wary of implementing this, since such a flag would also affect internal allocations in our library, which probably could not handle the change correctly. With a PyTorch core flag, matters are simple: respect the global GPU flag if `!core`, and ignore it otherwise. We can similarly make the "default dtype is double" flag safer this way.
- `empty_like`, `to`, `resize_as_` and `clone` now preserve memory format #23899 wishes to make a major backwards-compatibility-breaking change to the stride handling of some major operations. @VitalyFedyunin proposes that we introduce a flag to let users pick which behavior they want; @gchanan is concerned that, in this regime, it will be too difficult to maintain core code that works in both cases. With a PyTorch core flag, respect the memory-propagation flag if `!core`, and ignore it otherwise.
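Both applications follow the same `!core` dispatch pattern. Here is a hedged sketch of the first one (the global device flag); every name is illustrative, not a real PyTorch API:

```python
import threading

# Illustrative sketch only; none of these names are real PyTorch APIs.
_state = threading.local()
_default_device = "cpu"  # hypothetical user-settable global flag

def in_pytorch_core() -> bool:
    return getattr(_state, "in_core", False)

class EnterCore:
    def __enter__(self):
        self._prev = in_pytorch_core()
        _state.in_core = True
        return self

    def __exit__(self, *exc):
        _state.in_core = self._prev
        return False

def set_default_device(device: str) -> None:
    global _default_device
    _default_device = device

def allocation_device(explicit=None) -> str:
    """Device for a new tensor: an explicit request always wins; the
    global flag applies only in user code, never inside core."""
    if explicit is not None:
        return explicit
    return "cpu" if in_pytorch_core() else _default_device

def some_core_operator() -> str:
    # A core operator's internal scratch allocation ignores the user flag.
    with EnterCore():
        return allocation_device()
```

After `set_default_device("cuda")`, a user-level `allocation_device()` picks `"cuda"`, while the internal allocation inside `some_core_operator()` stays on `"cpu"` because the core flag is set there.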