
Conversation

@wanchaol
Collaborator

@wanchaol wanchaol commented Apr 24, 2019

This fixes #19039.

@pytorchbot pytorchbot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Apr 24, 2019
@wanchaol wanchaol requested review from driazati and eellison April 24, 2019 22:59
@wanchaol wanchaol changed the title from "Let script module buffer attributes can also cast device/type" to "[jit] Let script module buffer attributes can also cast device/type" Apr 24, 2019
@eellison
Contributor

Take a look at test/cpp/api/jit.cpp, pretty sure you can create a ScriptModule in C++.

@wanchaol
Collaborator Author

Take a look at test/cpp/api/jit.cpp, pretty sure you can create a ScriptModule in C++.

@eellison I was looking at this file. It lets you create a ScriptModule from function text using torch::jit::compile, but it does not let you define a ScriptModule class manually. For this issue, we can only trigger the bug when we have a ScriptModule class in C++, and such a module cannot be created directly in C++ (only via Python + torch.jit.save + load).
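For context, a minimal sketch of the C++ side of that path, assuming a module has already been scripted and saved from Python with torch.jit.save as "bn_gpu.pt" (the file name is illustrative; in the C++ API of this era torch::jit::load returns a std::shared_ptr<torch::jit::script::Module>):

#include <torch/script.h>

int main() {
  // Load a module that was scripted and saved from Python.
  auto module = torch::jit::load("bn_gpu.pt");
  // Before this PR, to() moved parameters but left buffer attributes
  // (e.g. BatchNorm's running_mean/running_var) on their original device,
  // which is what pytorch#19039 reports.
  module->to(at::kCUDA);
  module->to(at::kCPU);
  return 0;
}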

@eellison
Contributor

In C++ you can define a script module with text, save it, and load it. Not sure I'm understanding how this doesn't cover your use case.
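Roughly the flow test/cpp/api/jit.cpp exercises, as a sketch (torch::jit::compile's return type changed across releases; here it is assumed to expose run_method, as in the tests of that era):

#include <torch/jit.h>
#include <torch/torch.h>

int main() {
  // Compile TorchScript directly from source text, no Python involved.
  auto module = torch::jit::compile(R"JIT(
    def double_it(a):
      return a + a
  )JIT");
  // Run the compiled function; the result is an IValue wrapping a tensor of twos.
  auto out = module->run_method("double_it", torch::ones({2, 2})).toTensor();
  return 0;
}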

@eellison eellison previously approved these changes Apr 25, 2019
Contributor

@eellison eellison left a comment


Looks good! Thanks for adding the test.



TEST(TorchScriptTest, Conversion_MultiCUDA) {
  auto module = torch::jit::load("bn_gpu.pt");
Contributor


Could you please add a run of it with GPU tensors first?
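A sketch of what that requested check could look like (hypothetical: it assumes the saved BatchNorm-style module takes a single tensor input and that forward returns a tensor):

TEST(TorchScriptTest, Conversion_MultiCUDA) {
  auto module = torch::jit::load("bn_gpu.pt");
  // First exercise the module with GPU inputs on its original CUDA device...
  auto cuda_out = module->forward({torch::ones({2, 3}, torch::kCUDA)}).toTensor();
  ASSERT_TRUE(cuda_out.is_cuda());
  // ...then cast to CPU and confirm the buffers (running_mean/running_var)
  // moved along with the parameters, so a CPU forward pass works.
  module->to(at::kCPU);
  auto cpu_out = module->forward({torch::ones({2, 3})}).toTensor();
  ASSERT_FALSE(cpu_out.is_cuda());
}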

}


TEST(TorchScriptTest, Conversion_MultiCUDA) {
Contributor


The test should be in cpp/jit/, not api/jit.cpp; see testModuleDefine for an example of a manually added parameter (attributes have pretty much the same API on modules).
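Loosely what testModuleDefine does, as a sketch (the script::Module construction and registration API was in flux around this release, so the exact signatures here are assumptions; the test and buffer names are made up):

void testBufferConversion() {
  auto m = std::make_shared<torch::jit::script::Module>();
  // The last argument marks the tensor as a buffer rather than a trainable parameter.
  m->register_parameter("running_stat", torch::ones({3}), /*is_buffer=*/true);
  m->define(R"(
    def forward(self, x):
      return x + self.running_stat
  )");
  auto out = m->run_method("forward", torch::ones({3})).toTensor();
  AT_ASSERT(out.sum().item<float>() == 6);
}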

for (auto& parameter : get_attributes()) {
  if (parameter.type()->isSubtypeOf(TensorType::get())) {
    // Need to access the `at::Tensor` as a `Variable` here.
    autograd::Variable variable = parameter.value().toTensor();
Contributor


This should be moved out into a function so it's not just a copy-paste of the code above for parameters.
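One possible shape for that refactor, as a rough sketch (the helper name is made up, and the real iteration/slot types in script::Module differ): pull the per-tensor cast into a single function and call it from both the parameter loop and the attribute loop.

// Cast one tensor slot to the requested device and/or dtype in place,
// preserving whichever property was not requested.
static void castTensorSlot(
    autograd::Variable& variable,
    const c10::optional<at::Device>& device,
    const c10::optional<at::ScalarType>& dtype,
    bool non_blocking) {
  auto new_data = variable.to(
      device.value_or(variable.device()),
      dtype.value_or(variable.scalar_type()),
      non_blocking);
  variable.set_data(new_data);
}

// Both the get_parameters() and get_attributes() loops then reduce to:
//   if (slot.type()->isSubtypeOf(TensorType::get())) {
//     autograd::Variable variable = slot.value().toTensor();
//     castTensorSlot(variable, device, dtype, non_blocking);
//   }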

Contributor

@driazati driazati left a comment


I don't think we should be committing binaries to the repo to be used in tests

@eellison eellison dismissed their stale review April 25, 2019 23:35

wait for update

Contributor

@facebook-github-bot facebook-github-bot left a comment


@wanchaol has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@wanchaol wanchaol requested a review from driazati April 26, 2019 00:47
@wanchaol wanchaol deleted the castfix branch April 26, 2019 22:06
@facebook-github-bot
Contributor

@wanchaol merged this pull request in 236c2b2.

zhangguanheng66 pushed a commit to zhangguanheng66/pytorch that referenced this pull request May 6, 2019
[jit] Let script module buffer attributes can also cast device/type (pytorch#19700)

Summary:
Tested locally that this fixes pytorch#19039; did not add a test since there's no way to create a script module in the cpp world.
Pull Request resolved: pytorch#19700

Differential Revision: D15094195

Pulled By: wanchaol

fbshipit-source-id: fcc2c1e5efbc160d976ae485ba2457442f62f065

Labels

oncall: jit Add this issue/PR to JIT oncall triage queue


Development

Successfully merging this pull request may close these issues.

Cannot change device of JIT module after initial loading using C++ frontend
