Conversation

@zhangguanheng66 (Contributor) commented on Jul 18, 2019:

Fix #17357 (cuda runtime error (3): we're not detecting bad forks).
Unblocks the 1.2 release.
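
For context, a minimal sketch (not part of the PR) of the "bad fork" pattern the fix targets, assuming a CUDA-capable machine: the parent initializes CUDA, forks, and the child then touches CUDA, which previously surfaced as an opaque "cuda runtime error (3)"; the exact error wording below is an assumption.

```python
import torch
import torch.multiprocessing as mp

def use_cuda_in_child():
    # The forked child cannot re-initialize the parent's CUDA context.
    # With bad-fork detection this should raise a descriptive RuntimeError
    # instead of a bare "cuda runtime error (3)" (exact wording may differ).
    try:
        torch.cuda.get_device_name(0)
    except RuntimeError as err:
        print("child:", err)

if __name__ == "__main__":
    torch.zeros(1, device="cuda")   # initialize CUDA in the parent
    ctx = mp.get_context("fork")    # deliberately use the unsafe start method
    p = ctx.Process(target=use_cuda_in_child)
    p.start()
    p.join()
```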

@pytorchbot added the labels module: cuda (Related to torch.cuda, and CUDA support in general) and module: internals (Related to internal abstractions in c10 and ATen) on Jul 18, 2019.
@colesbury (Member) left a comment:

At a minimum, I think THCPModule_getDevice_wrap and THCPModule_getDeviceCount_wrap should also call torch::utils::cuda_lazy_init so that a program that calls torch.cuda.get_device_count() in a bad fork gets a better error message.
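
A hedged Python-level illustration of what this suggestion aims at (the wrappers themselves are C++; torch.cuda.device_count() is the public entry point behind the call the comment refers to): with CUDA already initialized in the parent, even read-only device queries in a forked child should go through the cuda_lazy_init bad-fork check and fail with a readable message.

```python
import os
import torch

# Sketch only: assumes a CUDA-capable machine and a POSIX fork().
torch.zeros(1, device="cuda")   # parent initializes CUDA

pid = os.fork()
if pid == 0:
    # In the bad-fork child: if the getDevice/getDeviceCount wrappers call
    # torch::utils::cuda_lazy_init, these queries should raise a clear
    # RuntimeError rather than a low-level CUDA initialization error.
    for query in (torch.cuda.device_count, torch.cuda.current_device):
        try:
            print(query.__name__, "->", query())
        except RuntimeError as err:
            print(query.__name__, "->", err)
    os._exit(0)
else:
    os.waitpid(pid, 0)
```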

@colesbury (Member) left a comment:

Looks good, other than the inline comment in THCPModule_getDevice_wrap, which should be fixed before this is landed.

@facebook-github-bot (Contributor) left a comment:
@zhangguanheng66 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@pytorchbot added the label module: multiprocessing (Related to torch.multiprocessing) on Jul 19, 2019.

@zhangguanheng66 merged this pull request in a6e45a6.

facebook-github-bot pushed a commit that referenced this pull request on Jul 24, 2019:
Summary:
Re-land #23030
Pull Request resolved: #23209

Differential Revision: D16440000

Pulled By: zhangguanheng66

fbshipit-source-id: e05683275522835a33d5a7e6d76b7e94774e4d98

facebook-github-bot pushed a commit that referenced this pull request on Jul 25, 2019:
Summary:
Re-land #23030
Pull Request resolved: #23322

Differential Revision: D16469442

Pulled By: zhangguanheng66

fbshipit-source-id: 70b63ab6265efa3f289111ef0ce46bb3c0d353bc