Conversation

@driazati (Contributor) commented May 30, 2019

Dynamically creating a type at runtime was messing up the MRO and causing many other problems, so I think it's best to delete it. This causes a regression, since

self.linear = nn.Linear(10, 10)
isinstance(self.linear, nn.Linear)

will now be False again, but that will be fixed once recursive script mode is the default (#20939).

Differential Revision: D15560549
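
For context, a minimal stand-alone sketch of the trade-off, using stand-in classes rather than the real torch.jit/nn types: a proxy class built at runtime with type() can include the wrapped module's class as a base, which is what made the isinstance check pass, at the cost of a diamond on the module base class.

class Module:                      # plays the role of nn.Module
    pass

class Linear(Module):              # plays the role of nn.Linear
    pass

class ScriptProxy(Module):         # plays the role of the weak-module proxy
    pass

# Old behavior: build the proxy type at runtime so it also inherits from the
# wrapped class. isinstance passes, but Module now sits behind two base paths.
DynamicProxy = type('DynamicProxy', (ScriptProxy, Linear), {})
print(isinstance(DynamicProxy(), Linear))   # True
print(DynamicProxy.__mro__)
# (DynamicProxy, ScriptProxy, Linear, Module, object)

# New behavior: the proxy is a fixed class, so the MRO is a simple chain, but
# the isinstance check above is now False (the accepted regression).
print(isinstance(ScriptProxy(), Linear))    # False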

@pytorchbot added the oncall: jit label May 30, 2019
@driazati requested a review from eellison May 30, 2019 00:11
@eellison (Contributor) left a comment

Do you mind explaining what the MRO was and how exactly this fixed the cuda() bug (and maybe adding a test for that bug)?


self.checkScript(int1, ())

def test_number_all(self):
Contributor:

is this meant to be in here?

weak_mod.weight = torch.nn.Parameter(torch.ones(5, 5) * 100)
self.assertFalse(strong_mod(inp).allclose(weak_mod(inp)))

@unittest.skipIf(hasattr(torch.jit, 'WeakScriptModuleProxy'), "# TODO: re-enable"
Contributor:

re-enable this when what lol

Contributor Author:

The full message is split over 2 strings

davidriazati added 2 commits May 29, 2019 17:35
@driazati (Contributor Author)

For one, it was creating an inheritance diamond on nn.Module and letting you use non-ScriptModule functionality it inherited on something that should have been a ScriptModule, so the changes were all happening on the Python side.
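
Roughly what that diamond looks like, again with stand-in classes rather than the real torch.jit types: anything the wrapped class (or the module base class) defines is reachable on the dynamically built proxy, and calling it only touches the Python-side objects.

class Module:                          # stand-in for nn.Module
    def cuda(self):
        self.on_gpu = True             # mutates only Python-side state in this sketch
        return self

class ScriptProxy(Module):             # stand-in for the compiled-module wrapper
    pass

class LSTM(Module):                    # stand-in for a concrete nn.Module subclass
    def flatten_parameters(self):
        return "handled on the Python side only"

WeakProxy = type('WeakProxy', (ScriptProxy, LSTM), {})
print(WeakProxy.__mro__)
# (WeakProxy, ScriptProxy, LSTM, Module, object) -- Module reached via two paths

p = WeakProxy()
p.cuda()                               # inherited nn.Module behavior runs...
print(p.flatten_parameters())          # ...as do methods of the wrapped class,
                                       # while the compiled side never sees it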

@eellison (Contributor)

Yeah, the CUDA bug seems worse than the isinstance bug. Is this PR workable instead of #19403? We should also re-open #19403 if this lands.

@eellison (Contributor)

Could you add a test?

@eellison (Contributor) left a comment

LGTM, with a follow-up for the test. Once this lands we should re-open the isinstance bug.

self.lstm = torch.nn.LSTM(5, 5)
self.lstm.cuda()

m = M()
Contributor:

Can you assert that the output is a CUDA tensor?

Contributor Author:

What do you mean? This isn't testing the forward pass, just that a weak module can be made into a cuda module.

Contributor:

Yes, could you verify that the output of the module is a CUDA tensor? This is just testing that it doesn't error; it's not testing the runtime.
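
A sketch of the kind of assertion being asked for, assuming M.forward simply calls self.lstm (the input shape and names here are illustrative, not necessarily the exact test that landed):

inp = torch.ones(5, 1, 5, device='cuda')
out, _ = m(inp)                  # nn.LSTM's forward returns (output, (h_n, c_n))
self.assertTrue(out.is_cuda)     # checks the module actually runs on CUDA,
                                 # not just that construction doesn't error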

@facebook-github-bot

@driazati merged this pull request in e13b483.

@facebook-github-bot deleted the driazati/fixm branch July 13, 2020 17:54