
Conversation

@driazati
Contributor

@driazati driazati commented May 20, 2019

Following on #19747, this implements most of the torch.jit.script() changes laid out in #20939.

Still to do:

  • Accessing a method from Python does not add it as a ScriptMethod (so only @exported methods and forward are compiled)
  • Calling a method other than forward on a submodule doesn't work

Differential Revision: D15560490

@pytorchbot pytorchbot added oncall: jit Add this issue/PR to JIT oncall triage queue module: pybind Related to our Python bindings / interactions with other Python libraries labels May 20, 2019
@driazati driazati requested review from suo and zdevito May 20, 2019 17:57
@suo
Member

suo commented May 20, 2019

What would it take to get rid of __constants__ as well?

@driazati
Contributor Author

Similar to #19439, we could just make everything we see in __init__ that matches the supported constant types a constant. That could be bad UX, though, since there would be no explicit marker saying "I won't change this". Alternatively, everything could be added as an attribute (unless explicitly made a constant via __constants__), which would preserve the current UX.
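The trade-off above can be sketched in plain Python (illustrative names only; infer_constants and _CONSTANT_TYPES are hypothetical, not the PR's API): scan an instance's __dict__ after __init__ and treat values of supported types as constants, unless an explicit list (a stand-in for __constants__) opts in specific names.

```python
# Hypothetical sketch of constant inference; not the actual PR code.
_CONSTANT_TYPES = (int, float, bool, str, tuple)

def infer_constants(module, explicit=None):
    """Return attribute names that would be treated as constants.

    With `explicit` (a stand-in for __constants__), only listed names
    become constants; otherwise every supported-type attribute does.
    """
    if explicit is not None:
        return set(explicit) & set(vars(module))
    return {name for name, value in vars(module).items()
            if isinstance(value, _CONSTANT_TYPES)}

class Net:
    def __init__(self):
        self.num_layers = 3   # would silently become a constant
        self.scale = 2.0      # likewise
        self.buffers = []     # list: stays a mutable attribute

net = Net()
print(sorted(infer_constants(net)))                           # ['num_layers', 'scale']
print(sorted(infer_constants(net, explicit=["num_layers"])))  # ['num_layers']
```

The implicit mode shows the UX concern: num_layers and scale are frozen without the author ever saying so.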

@zdevito
Contributor

zdevito commented May 21, 2019

This change deserves a design document before we land any code. There are some key questions which I am not clear on:

  • When something becomes strong, how is it related to the original module? Should it be the same object (I prefer this if possible)? Should we have coupled objects? If we have coupled objects what happens when we edit one but not the other?

  • How do we decide which methods should be scripted? Assuming only forward is used is not general, and making it configurable is not great because it requires user input. We should make it possible for each method to become scripted as required.

We should come up with a consistent design (across weak and strong modules) that we look at first.

@pytorchbot pytorchbot added the module: nn Related to torch.nn label May 23, 2019
@driazati driazati force-pushed the driazati/automod branch from dd9f511 to e260151 Compare May 28, 2019 22:24
Contributor

@zdevito zdevito left a comment


This is a good start. I have a few inline comments, but generally I think we should merge it and then discuss the next steps.

saving. This decorator explicitly marks that a method should be included
even if it is not called from Python.
"""
class_dict = get_class_attribute_dict(fn, frames_up=2)
Contributor


I feel like this would be more idiomatic if it just set a __torchscript_export__ property on fn and the class then searched its own __dict__ for things marked for export. Searching the frames for what you hope is the class dict is not great. What happens when we add @property and then chain @export @property def ...? It would break.
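The attribute-based approach suggested here can be sketched in plain Python (illustrative names; the real decorator lives in torch.jit, and exported_methods is a hypothetical helper): mark the function object itself, then have the class collect marked methods from its own __dict__ instead of walking stack frames.

```python
# Sketch of marking functions directly rather than editing a class
# dict found via frame inspection; names are illustrative.
def export(fn):
    # Mark the function object itself.
    fn.__torchscript_export__ = True
    return fn

def exported_methods(cls):
    # The class machinery can later scan its own __dict__ for markers,
    # with no dependence on where the decorator was applied.
    return [name for name, attr in cls.__dict__.items()
            if getattr(attr, "__torchscript_export__", False)]

class MyModule:
    @export
    def helper(self, x):
        return x + 1

    def forward(self, x):
        return self.helper(x)

print(exported_methods(MyModule))  # ['helper']
```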



def _make_strong_submodule(field, module, parent):
    if field not in parent._modules:
Contributor


how does this happen? There are nn.Modules in module that are not in the _modules list?

Contributor Author


This was more of a sanity check, though it's possible someone does self.__dict__["secret_module"] = nn.Linear()

class WeakScriptModuleProxy(ScriptModule):
    def __init__(self, original, stubs):
        # Guards behavior of __setattr__ and __getattr__ so ScriptModule
        # __init__ can run correctly
Contributor


We need a plan to get rid of WeakScriptModuleProxy. Eventually all ScriptModules will just always be this class, and they will no longer be proxies at all. It is difficult to understand all the components we have after this patch, so we need some follow-up effort merging the concepts.

Contributor Author


Most of the functionality here can operate directly on a ScriptModule instance instead of constructing a WeakScriptModuleProxy, so deleting it should be pretty painless. There is still the question of what something like this would do:

scripted_linear = torch.jit.script(nn.Linear(10, 10))
isinstance(scripted_linear, nn.Linear) # true or false?

Intuitively I feel like it should be False since it's constructing a new module, but I can see people doing something like this, which could lead to confusing behavior.

class A(nn.Module):
    def __init__(self, mod):
        self.module = mod
        if isinstance(self.module, nn.Linear):
            ...

A(torch.jit.script(B()))

constants_set = set(getattr(original, "__constants__", []))
self.__dict__["_constants_set"] = {}

if not hasattr(original, '_parameters'):
Contributor


We need to document better exactly what happens when we create a ScriptModule from a Module: what gets copied, what is translated in some way, and what is ignored.
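One way to document this would be an explicit partition of the module's __dict__ into those buckets. The sketch below is purely illustrative (classify_attributes is a hypothetical helper, and the bucket rules are an assumption, not the PR's actual behavior):

```python
# Hypothetical illustration of the copy/translate/ignore categories a
# Module-to-ScriptModule conversion has to account for.
def classify_attributes(module_dict, constants):
    """Partition a module's __dict__ into copy/translate/ignore buckets."""
    copied, translated, ignored = {}, {}, {}
    for name, value in module_dict.items():
        if name in ("_parameters", "_buffers", "_modules"):
            translated[name] = value   # recursively converted
        elif name in constants:
            copied[name] = value       # baked in as constants
        elif name.startswith("_"):
            ignored[name] = value      # internal bookkeeping
        else:
            copied[name] = value       # plain attributes
    return copied, translated, ignored

fake = {"_parameters": {}, "_modules": {}, "training": True, "_flag": 1}
copied, translated, ignored = classify_attributes(fake, constants=set())
print(sorted(copied), sorted(translated), sorted(ignored))
# ['training'] ['_modules', '_parameters'] ['_flag']
```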

# TODO: need to handle this more generally when non-tensor attributes added to module
object.__setattr__(self, name, item)
elif isinstance(item, Parameter) or (isinstance(item, Module) and item is not self):
elif item is self:
Contributor


I can't follow this code here. Not sure why it is deciding to ignore some things, and not others.

exported = tuple(mod.__torchscript_export__)
methods = methods + exported

if mod in _jit_internal.weak_modules:
Contributor


It is possible we no longer want to cache this. Should calling script twice on the same nn.Module return the same module? What if you mutate something in the original class (e.g. do some model surgery) and call script again?

Contributor Author


I agree, it should make a fresh copy each time to stay with the idea of it being a "pure" operation. I think that should be in a follow-up diff, though.
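The staleness concern can be shown without torch at all. In this pure-Python sketch (compile_snapshot is a hypothetical stand-in for a caching torch.jit.script), memoizing by object identity means edits to the original module after the first call are silently ignored:

```python
# Pure-Python illustration of the caching question; no torch involved.
_cache = {}

def compile_snapshot(module):
    """Stand-in for a caching script(): snapshots attributes once."""
    if id(module) not in _cache:
        _cache[id(module)] = dict(vars(module))
    return _cache[id(module)]

class Toy:
    def __init__(self):
        self.hidden = 16

m = Toy()
first = compile_snapshot(m)
m.hidden = 32                  # "model surgery" on the original
second = compile_snapshot(m)   # cached: still sees hidden == 16
print(first["hidden"], second["hidden"])  # 16 16
```

A "pure" script() would rebuild from the current state on every call, so the second result would reflect hidden == 32.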

@facebook-github-bot
Contributor

@driazati merged this pull request in 8d3388a.

return fn


def ignore(maybe_fn=None, *, drop_on_export=False):
Collaborator


Keyword-only arguments are not supported in Python 2.
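A common Python-2-compatible workaround is to accept **kwargs and pop the keyword explicitly, rejecting anything unexpected. This is only a sketch of that pattern, not the fix that landed, and the _jit_ignored/_drop_on_export attributes are illustrative:

```python
# Sketch of a Python-2-compatible keyword-only argument; illustrative
# attribute names, not the actual torch.jit implementation.
def ignore(maybe_fn=None, **kwargs):
    drop_on_export = kwargs.pop("drop_on_export", False)
    if kwargs:
        raise TypeError("unexpected keyword arguments: %s"
                        % ", ".join(sorted(kwargs)))
    if maybe_fn is not None:        # used directly as @ignore
        maybe_fn._jit_ignored = True
        return maybe_fn

    def decorator(fn):              # used as @ignore(drop_on_export=True)
        fn._jit_ignored = True
        fn._drop_on_export = drop_on_export
        return fn
    return decorator

@ignore
def f(x):
    return x

@ignore(drop_on_export=True)
def g(x):
    return x

print(f._jit_ignored, g._drop_on_export)  # True True
```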

Collaborator


Unlanding now

@driazati driazati reopened this May 30, 2019
@facebook-github-bot facebook-github-bot deleted the driazati/automod branch July 13, 2020 17:54