An outer torch.no_grad() is ignored inside a thread #20528

@boeddeker

Description

🐛 Bug

An outer torch.no_grad() context is ignored inside a thread (e.g. a worker thread used by ThreadPoolExecutor.map).

To Reproduce

Steps to reproduce the behavior:

import torch
from concurrent.futures import ThreadPoolExecutor

# Dummy data and network
input_data = torch.arange(10, dtype=torch.float32)
net = torch.nn.Sequential(
    torch.nn.Linear(10, 1)
)

with ThreadPoolExecutor(2) as ex:
    with torch.no_grad():
        # no_grad works in the submitting thread
        assert net(input_data).grad_fn is None

        # BUG: this assert passes, i.e. the output has a grad_fn.
        # It should fail because of the "not" <-------------------------------
        assert list(ex.map(net, [input_data]))[0].grad_fn is not None

        # Workaround: applying no_grad to the callable itself works
        assert list(ex.map(torch.no_grad()(net), [input_data]))[0].grad_fn is None

        # no_grad still works in the submitting thread
        assert net(input_data).grad_fn is None

    # Outside the context, gradients are tracked again
    assert net(input_data).grad_fn is not None
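
The cause appears to be that the flag set by torch.no_grad() is thread-local: worker threads spawned by the executor still see the default grad mode. A minimal check of that assumption, independent of any network:

import torch
from concurrent.futures import ThreadPoolExecutor

with torch.no_grad():
    # The submitting thread sees grad mode disabled ...
    assert torch.is_grad_enabled() is False
    with ThreadPoolExecutor(1) as ex:
        # ... but a worker thread still reports the default (True),
        # because the no_grad flag is not inherited across threads.
        assert ex.submit(torch.is_grad_enabled).result() is True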

Expected behavior

The result of list(ex.map(net, [input_data])) should not have a grad_fn: the outer torch.no_grad() should also apply to calls executed in the pool's worker threads.
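
Until that is the case (or the thread-local behavior is documented), a workaround is to capture the caller's grad mode and re-enter it inside each worker. A minimal sketch; propagate_grad_mode is a hypothetical helper, not a PyTorch API:

import functools
import torch

def propagate_grad_mode(fn):
    # Hypothetical helper: capture the submitting thread's grad mode ...
    mode = torch.is_grad_enabled()

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # ... and re-apply it in whichever worker thread runs the call.
        with torch.set_grad_enabled(mode):
            return fn(*args, **kwargs)

    return wrapper

# Usage with the reproduction above (wrap inside the no_grad context):
#     assert list(ex.map(propagate_grad_mode(net), [input_data]))[0].grad_fn is None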

Environment

PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130

OS: Ubuntu 18.04.2 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: version 3.10.2

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration:
GPU 0: GeForce RTX 2070
GPU 1: GeForce RTX 2070

Nvidia driver version: 415.25
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.0

Versions of relevant libraries:
[pip] numpy==1.16.3
[pip] numpy-ringbuffer==0.2.1
[pip] numpydoc==0.8.0
[pip] pytorch-sconce==1.3.4
[pip] pytorchviz==0.0.1
[pip] torch==1.1.0
[pip] torch-nightly==1.0.0.dev20181114
[pip] torchvision==0.2.1
[conda] blas 1.0 mkl
[conda] mkl 2019.3 199
[conda] mkl-service 1.1.2 py36he904b0f_5
[conda] mkl_fft 1.0.10 py36ha843d7b_0
[conda] mkl_random 1.0.2 py36hd81dba3_0
[conda] torch 1.1.0 pypi_0 pypi
[conda] torchvision 0.2.2.post3 pypi_0 pypi


Labels

high priority · module: autograd · module: docs · module: multithreading · small · triaged
