__torch_function__ overrides for torch.functional and torch.nn.functional #32799
ngoldbaum wants to merge 20 commits into pytorch:master
Conversation
Note that the API of this function changed after I moved it to this file. I'm sorry that makes this a little difficult to review!
I'm now passing in `torch_api` as a `PyObject*`. I'm also passing in the module name to facilitate the code that generates the error message below. Both of these changes facilitate using this function from both `python_torch_functions.cpp` and `python_nn_functions.cpp`.
There should be no other changes to this function.
💊 CircleCI build failures summary and remediations
As of commit 39b204b:
Detailed failure analysis: one may explore the probable reasons each build failed interactively on the Dr. CI website.
🕵️ 5 new failures recognized by patterns. The following build failures do not appear to be due to upstream breakage:
Nifty! Does this still count as WIP? It looks like some merging is in order.
I marked it as WIP last night after seeing the test failures, which I'll work through today. Please feel free to review; I doubt fixing the test failures will change the structure of this code too much.
Adding @bhosmer for codegen changes.
torch/nn/functional.py (Outdated)
For my curiosity; how come in some of the adjusted code, we need to define tens_ops, and in other cases, we don't? Is this just a code reduction measure?
If there are multiple tensor operands I create the tuple before calling `type` because I need to create the tuple no matter what. When there's only one tensor operand I don't create the tuple, because it's only needed if `type(input)` ends up not being `Tensor`, so we don't pay the cost of creating the tuple unless we're passed non-tensor operands. I figure optimizing for tensor operands is what you'd prefer :)
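The single- vs. multi-operand shapes described above can be sketched in plain Python. Everything below (`Tensor`, `MyTensor`, `handle_torch_function`, and the op bodies) is a simplified stand-in for illustration, not the actual generated bindings:

```python
# Simplified sketch of the two dispatch-check shapes; names are stand-ins.

class Tensor:
    pass

class MyTensor(Tensor):
    # a subclass that wants to intercept torch-level functions
    def __torch_function__(self, func, args):
        return "overridden"

def handle_torch_function(func, relevant_args, *args):
    # delegate to the first operand providing an override
    for arg in relevant_args:
        if hasattr(type(arg), "__torch_function__"):
            return arg.__torch_function__(func, args)
    raise TypeError("no __torch_function__ implementation found")

def add(input, other):
    # multiple tensor operands: the tuple is needed either way,
    # so build it up front
    tens_ops = (input, other)
    if any(type(t) is not Tensor for t in tens_ops):
        return handle_torch_function(add, tens_ops, input, other)
    return "plain add"

def relu(input):
    # single tensor operand: only build a tuple on the slow path
    if type(input) is not Tensor:
        return handle_torch_function(relu, (input,), input)
    return "plain relu"
```

In this sketch `relu` pays no tuple-allocation cost when handed a plain `Tensor`, which is the optimization being described.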
Can you post the modified generated code, before and after? (A diff works well too.)
This all looks very reasonable. It'll need a rebase and some test fixing though.
Codegen changes LGTM, mod echoing @ezyang's codegen diff request :)
Also, apologies for the rebase in your future: a refactor of `gen_python_functions.py` just landed. The rebase should be conceptually simple but nontrivial. Definitely hit me up here if anything is unclear.
BTW, I notice that the existing setup doesn't generate the check into no-args bindings, was that intentional?
I pinged @ezyang about this in the quansight channel on the pytorch slack but I figure someone else might chime in here. It looks like most of the test failures are because torchscript doesn't like the code I've added to check for `__torch_function__` overrides. I've been playing around with rephrasing that code in a way that appeases torchscript but I don't see it. Is there a way I can disable torchscript for a block of code? Or does this mean I'll need to touch the jit code?
@ngoldbaum tangentially related to your last comment, worth noting that the JIT doesn't see the generated interception logic at all, since the bindings it uses are generated separately. The approach and timeline for teaching the JIT about the overrides are TBD, but in the meantime I thought it was worth noting the disparity.
@ngoldbaum if that were an `isinstance` check it could compile in TorchScript. But I guess that probably doesn't work because the inputs that will have `__torch_function__` probably inherit from Tensor.
Yeah, we want to check if it's specifically not `Tensor`. Thank you for the suggestions!
Hmm, that doesn't seem to work. It's still trying to compile the generator expression in the guarded block.
@ngoldbaum yeah, in that case you would have to factor out the generator expression.
@ngoldbaum when #32871 lands you will be able to just put the block you don't want compiled under the guard it adds.
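The guard pattern being pointed to can be sketched like this. `is_scripting` is a stand-in for whatever guard #32871 provides (an assumption, not the real API): at eager runtime it returns False, so the guarded branch runs normally, while a compiler can treat the branch as dead code and skip compiling it.

```python
def is_scripting():
    # stand-in guard: always False at eager runtime; a compiler that knows
    # about this function can skip compiling the guarded branch entirely
    return False

def checked_relu(x):
    if not is_scripting():
        # dynamic Python the compiler doesn't need to see
        if not isinstance(x, (int, float)):
            raise TypeError("expected a number")
    return x if x > 0 else 0
```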
Excellent, thank you for the quick fix on that :)
I've rebased and am on a branch based on a version of master after #32871, which I will push right now.
Force-pushed from 631e4bf to 3aaf9b5
And it looks like I can trigger this in the test @eellison added. So it's still trying to compile code in the block even though it doesn't get evaluated at runtime.
It's still trying to parse the code and create the AST, which is expected. Looks like you ran into an AST node that we don't parse yet. What is the node that isn't supported, exactly?

In this case it's a generator expression I'd like to use.
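One way around an unsupported generator expression is to unroll it into an explicit loop, which the parser does handle. A minimal sketch, using a stand-in `Tensor` class rather than the real one:

```python
class Tensor:
    pass

def has_non_tensor(tens_ops):
    # explicit-loop equivalent of the generator expression
    #   any(type(t) is not Tensor for t in tens_ops)
    for t in tens_ops:
        if type(t) is not Tensor:
            return True
    return False
```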
Force-pushed from a5f7b18 to 44e6cb3
@ezyang this should be good for re-review. Here's a gist with the diffs you asked for: https://gist.github.com/ngoldbaum/ebe6a0ad1fb91a0c701dc43ffbd65aee
```cpp
// PyType_GenericNew returns a new reference
THPVariableFunctionsModule = PyType_GenericNew(&THPVariableFunctions, Py_None, Py_None);
// PyModule_AddObject steals a reference
if (PyModule_AddObject(module, "_VariableFunctions", THPVariableFunctionsModule) < 0) {
```
I had to slightly adjust how `_VariableFunctions` is set up to make it match how `_nn` is set up in `python_nn_functions.cpp`. I think this is safe and should behave identically, but it may be a risky change because I don't know why this namespace was originally set up as a type instead of an instance.
Sorry this has taken a while, hoping this will get reviewed today.
facebook-github-bot
left a comment
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Looks like the internal tests are failing.
facebook-github-bot
left a comment
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
yay, all the tests are passing :)
__torch_function__ overrides for torch.functional and torch.nn.functional (pytorch#32799)

Summary: This adds `__torch_function__` support for all functions in `torch.functional` and `torch.nn.functional`.

The changes to C++ code and codegen scripts are to facilitate adding `__torch_function__` support for the native functions in `torch._C._nn`. Note that I moved the `handle_torch_function` C++ function to a header that both `python_torch_functions.cpp` and `python_nn_functions.cpp` include. The changes to `python_nn_functions.cpp` mirror the changes I made to `python_torch_functions.cpp` when `__torch_function__` support was first added in pytorch#27064. Due to the somewhat different way the `torch._C` and `torch._C._nn` namespaces are initialized, I needed to create a new static reference to the `torch._C._nn` namespace (`THPNNVariableFunctions`). I'm not sure if that is the best way to do this. In principle I could import these namespaces in each kernel and avoid the global variable, but that would have a runtime cost.

I added `__torch_function__` support to the Python functions in `torch.nn.functional` following the approach in pytorch#32194.

I re-enabled the test that checks if all functions in the `torch` namespace are explicitly tested for `__torch_function__` support. I also generalized the check to work for `torch.functional` and `torch.nn.functional` as well. This test was explicitly disabled in pytorch#30730 and I'm happy to disable it again if you think that's appropriate. I figured now was as good a time as any to try to re-enable it.

Finally, I adjusted the existing torch API tests to suppress deprecation warnings and add keyword arguments used by some of the code in `torch.nn.functional` that were missed when I originally added the tests in pytorch#27064.

Pull Request resolved: pytorch#32799
Differential Revision: D19956809
Pulled By: ezyang
fbshipit-source-id: 40d34e0109cc4b9f3ef62f409d2d35a1d84e3d22
```python
torch.zeros,
torch.nn.functional.assert_int_or_pair,
torch.nn.functional.boolean_dispatch,
torch.nn.functional.division,
```
Do you know why this is here? AFAICT, the only `division` in `torch.nn.functional` comes from `from __future__ import division`, which is probably not what you want.
It's a blacklist: `IGNORED_TORCH_FUNCTIONS`. The test loops over `dir()` of the module, and somehow `division` was being exported lol
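The leak is easy to reproduce outside of torch: `dir()` on a module reports every top-level binding, including names created by imports, which is why the test needs an ignore list. A small self-contained illustration (the module and names here are made up):

```python
import types

# Build a throwaway module whose source performs a from-import, the same
# way `from __future__ import division` binds `division` at module level.
mod = types.ModuleType("demo")
exec("from operator import add\ndef mine(x):\n    return x\n", mod.__dict__)

IGNORED = {"add"}  # names the API-coverage test should skip

public = [n for n in dir(mod)
          if not n.startswith("_") and n not in IGNORED]
```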