
Conversation


@ezyang ezyang commented Sep 20, 2021

Stack from ghstack:

I thought about a few possible ways of doing this. The main hazard is
that if I create a CPU tensor that doesn't have any real storage, the
moment I actually try to access the data on the tensor I will segfault.
So I don't want to use _make_subclass on a "cpu meta tensor", because
the CPU meta tensor (with no subclass) is radioactive: printing it
will immediately cause a segfault. So instead, I have to create
the CPU meta tensor AND the subclass all in one go, and that means I need
another function for it. One downside to doing it this way is that
I need another overload for explicit strides, and in general it is
difficult to get all the view relationships to work out properly;
tracked at #65339.

Fixes #62972
Fixes #62730

Signed-off-by: Edward Z. Yang [email protected]

Differential Revision: D31057231
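The hazard described above can be sketched in plain Python. This is an analogy only, not the PR's actual implementation: in real PyTorch the failure is a C++ segfault rather than a catchable exception, and all class and function names below are hypothetical stand-ins. The point it illustrates is why the bare storage-less tensor must never be exposed: constructing the subclass in one step means nothing ever calls the dangerous default repr.

```python
class FakeStorage:
    """Stands in for a tensor storage with no real allocation."""
    def data(self):
        # In real C++ this would be a segfault, not a catchable error.
        raise RuntimeError("no real storage: accessing data would segfault")


class BareMetaTensor:
    """The 'radioactive' intermediate: its repr touches the data."""
    def __init__(self):
        self.storage = FakeStorage()

    def __repr__(self):
        return f"tensor({self.storage.data()})"  # blows up on print


class WrapperSubclass(BareMetaTensor):
    """The subclass overrides repr so the data is never read."""
    def __repr__(self):
        return "WrapperSubclass(meta)"


def make_wrapper_subclass():
    # Create the meta tensor AND the subclass all in one go, so the
    # bare (printable-but-deadly) intermediate never escapes.
    return WrapperSubclass()


t = make_wrapper_subclass()
print(repr(t))  # safe: WrapperSubclass(meta)
```

Printing a `BareMetaTensor` directly raises, which is the Python-level analogue of the segfault that motivates a dedicated one-step constructor instead of `_make_subclass` on a pre-built meta tensor.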


pytorch-probot bot commented Sep 20, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/48c76f908d0e5d4f930029a07ebc73f9d5fd4458/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-bionic-py3.8-gcc9-coverage ciflow/all, ciflow/coverage, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
puretorch-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
win-vs2019-cuda10.2-py3 ciflow/all, ciflow/cuda, ciflow/win 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.


facebook-github-bot commented Sep 20, 2021

💊 CI failures summary and remediations

As of commit 48c76f9 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-bionic-py3.8-gcc9-coverage / test (default, 1, 2, linux.2xlarge) (1/2)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

  "cla signed",
  "ciflow/default"
]
2021-09-22T14:58:41.5929438Z   GITHUB_TOKEN: ***
2021-09-22T14:58:41.5930365Z   DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-py3.8-gcc9:74e757e8b0cf750d2f91db6aa4c29640abce32ea
2021-09-22T14:58:41.5931551Z   JOB_BASE_NAME: linux-bionic-py3.8-gcc9-coverage-test
2021-09-22T14:58:41.5932129Z   TEST_CONFIG: default
2021-09-22T14:58:41.5932439Z   SHARD_NUMBER: 1
2021-09-22T14:58:41.5932753Z   NUM_TEST_SHARDS: 2
2021-09-22T14:58:41.5933108Z   PYTORCH_IGNORE_DISABLED_ISSUES: 
2021-09-22T14:58:41.5933517Z   CONTINUE_THROUGH_ERROR: false
2021-09-22T14:58:41.5933845Z   SHM_SIZE: 1g
2021-09-22T14:58:41.5934134Z   PR_NUMBER: 65340
2021-09-22T14:58:41.5934412Z   IS_GHA: 1
2021-09-22T14:58:41.5934703Z   CIRCLE_BRANCH: pull/65340
2021-09-22T14:58:41.5935152Z   CIRCLE_PR_NUMBER: 65340
2021-09-22T14:58:41.5935616Z   CIRCLE_SHA1: 48c76f908d0e5d4f930029a07ebc73f9d5fd4458
2021-09-22T14:58:41.5936127Z   AWS_DEFAULT_REGION: us-east-1
2021-09-22T14:58:41.5936504Z ##[endgroup]
2021-09-22T14:58:53.7301427Z Processing ./dist/torch-1.10.0a0+git90ee2f6-cp38-cp38-linux_x86_64.whl
2021-09-22T14:58:53.7558494Z Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.8/site-packages (from torch==1.10.0a0+git90ee2f6) (3.10.0.2)

See GitHub Actions build win-vs2019-cpu-py3 / build (2/2)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-09-22T14:21:05.0195905Z caused by: Failed to read response header
2021-09-22T14:21:05.0196327Z caused by: failed to fill whole buffer
2021-09-22T14:21:05.0227079Z [3907/5686] C:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\win_tmp\bin\sccache-cl.exe   /TP -DADD_BREAKPAD_SIGNAL_HANDLER -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cpu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262 -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\caffe2\contrib\aten -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\caffe2\..\third_party -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\caffe2\..\third_party\breakpad\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\caffe2\..\aten\src 
-IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\caffe2\..\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\torch\csrc -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\miniz-2.0.8 -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\kineto\libkineto\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\kineto\libkineto\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\caffe2\core\nomnigraph\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\c10\.. 
-IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\pthreadpool\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\cpuinfo\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\fbgemm\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\fbgemm -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\fbgemm\third_party\asmjit\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\FP16\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\fmt\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\cmake\..\third_party\pybind11\include -IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\ideep\mkl-dnn\include 
-IC:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\third_party\ideep
2021-09-22T14:21:05.0246187Z FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/BatchingRegistrations.cpp.obj 
2021-09-22T14:21:05.0265578Z C:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\win_tmp\bin\sccache-cl.exe   /TP ... [remaining compiler flags identical to the invocation above]
2021-09-22T14:21:05.0284397Z error: failed to execute compile
2021-09-22T14:21:05.0284873Z caused by: error reading compile response from server
2021-09-22T14:21:05.0285374Z caused by: Failed to read response header
2021-09-22T14:21:05.0285808Z caused by: failed to fill whole buffer
2021-09-22T14:21:05.0306941Z [3908/5686] C:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\win_tmp\bin\sccache-cl.exe   /TP ... [remaining compiler flags identical to the invocation above]
2021-09-22T14:21:05.0325563Z FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/core/type.cpp.obj 
2021-09-22T14:21:05.0344558Z C:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\win_tmp\bin\sccache-cl.exe   /TP ... [remaining compiler flags identical to the invocation above]
2021-09-22T14:21:05.0363004Z error: failed to execute compile
2021-09-22T14:21:05.0363489Z caused by: error reading compile response from server
2021-09-22T14:21:05.0363979Z caused by: Failed to read response header
2021-09-22T14:21:05.0364396Z caused by: failed to fill whole buffer
2021-09-22T14:21:07.8279698Z [3909/5686] C:\actions-runner\_work\pytorch\pytorch\pytorch-1262117262\build\win_tmp\bin\sccache-cl.exe   /TP ... [remaining compiler flags identical to the invocation above]
2021-09-22T14:21:07.8298725Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2021-09-22T14:21:07.8299309Z Copyright (C) Microsoft Corporation.  All rights reserved.
2021-09-22T14:21:07.8299661Z 
2021-09-22T14:21:07.8299993Z ninja: build stopped: subcommand failed.

This comment was automatically generated by Dr. CI (expand for details).Follow this link to opt-out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

Click here to manually regenerate this comment.

I thought about a few possible ways of doing this.  The main hazard is
that if I create a CPU tensor that doesn't have any real storage, the
moment I actually try to access the data on the tensor I will segfault.
So I don't want to use _make_subclass on a "cpu meta tensor" because
the CPU meta tensor (with no subclass) is radioactive: printing it
will immediately cause a segfault.  So instead, I have to create
the CPU meta tensor AND subclass all in one go, and that means I need
another function for it.  One downside to doing it this way is
I need another overload for explicit strides, and in general it is
difficult to get the view relationships to all work out properly;
tracked at #65339

Fixes #62972
Fixes #62730

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

[ghstack-poisoned]
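The description above can be made concrete with a short sketch of the API this PR adds. This assumes a PyTorch build that includes the change (1.10 or later); the class name `Wrapper` is hypothetical.

```python
import torch

class Wrapper(torch.Tensor):
    @staticmethod
    def __new__(cls, elem):
        # Allocate the wrapper *and* its storage-less CPU tensor in a single
        # call, so the "radioactive" intermediate described above never exists.
        return torch.Tensor._make_wrapper_subclass(
            cls, elem.size(),
            dtype=elem.dtype, layout=elem.layout,
            device=elem.device, requires_grad=elem.requires_grad,
        )

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        # A real wrapper would delegate to an inner tensor here.
        raise NotImplementedError(f"no implementation for {func}")

w = Wrapper(torch.randn(2, 3))
# Metadata is readable without touching the (nonexistent) data:
print(w.shape, w.dtype, w.device)
```

Note that printing `w` itself would go through the dispatcher and hit `__torch_dispatch__`; only metadata reads like `.shape` are safe on this skeleton.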
ezyang added a commit that referenced this pull request Sep 20, 2021
ghstack-source-id: a6a9b11
Pull Request resolved: #65340
@ezyang
Contributor Author

ezyang commented Sep 20, 2021

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Collaborator

@albanD albanD left a comment

What are the properties of the Tensor created by for_blob?
Can we make sure that we just raise an error if we go below the Python key with it?

});
ParsedArgs<8> parsed_args{};
auto r = parser.parse(args, kwargs, parsed_args);
// TODO: handle has_torch_function, maybe?
Collaborator

There is no Tensor input here, right?
That would be for Mode?

Contributor Author

Yes. Also, if there's a hypothetical version of this which takes a Tensor and reads attributes directly from it, maybe that should also be overridable? I'm not exactly sure what the use case for any of this is; it's mostly an argument from symmetry.

Contributor Author

I guess technically __torch_function__ isn't getting a mode, so we can't override this anyway lol


// don't bother releasing GIL here, as we are not allocating any nontrivial
// data
auto data = at::for_blob(nullptr, r.intlist(1))
Collaborator

Is there any doc anywhere about what for_blob is? It sounds very interesting.

Contributor Author

Yes, go read ATen/templates/Functions.h.

@ezyang
Contributor Author

ezyang commented Sep 20, 2021

What are the properties from the Tensor created by for_blob?

It's non-resizable (this may be a mistake, but we can fix it later), the data pointer is null, and we have no deleter for it. Or is there something else you're thinking of?

Can we make sure that we just raise an error if we go below the Python key with it?

Not easily. We could add another dispatch key below Python to catch such cases (if I had spare dispatch keys), but I can't straightforwardly remove the CPU dispatch key, as that would cause certain type tests to work incorrectly.
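The key situation being discussed can be inspected from Python with the private `torch._C._dispatch_keys` helper. This is a sketch: the helper is an internal API whose availability may vary across versions.

```python
import torch

class A(torch.Tensor):
    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        raise NotImplementedError

t = torch.Tensor._make_wrapper_subclass(A, (2,))
ks = torch._C._dispatch_keys(t)
print(ks)
# The Python key is what routes ops to __torch_dispatch__; the CPU backend
# key is still present on the tensor, which is why it can't simply be dropped.
assert ks.has(torch._C.DispatchKey.Python)
```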

@albanD
Collaborator

albanD commented Sep 20, 2021

It's non-resizable (this may be a mistake, but we can fix it later), the data pointer is null, and we have no deleter for it. Or is there something else you're thinking of?

Sounds good!

Collaborator

@albanD albanD left a comment


LGTM!

Comment on lines +35 to +43
# The wrapping tensor (LoggingTensor) shouldn't hold any
# memory for the class in question, but it should still
# advertise the same device as before
r = torch.Tensor._make_wrapper_subclass(
cls, elem.size(),
# TODO: clone strides and storage aliasing
dtype=elem.dtype, layout=elem.layout,
device=elem.device, requires_grad=elem.requires_grad
)
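A fuller sketch of the pattern the hunk above comes from: the wrapper holds the real data on an inner tensor (stored here as `.elem`, which is a convention of this sketch rather than part of the API), unwraps it before calling into the dispatcher, and re-wraps the results.

```python
import torch
from torch.utils._pytree import tree_map  # internal but widely used helper

class LoggingTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, elem):
        # The wrapper advertises elem's metadata but holds no memory itself.
        r = torch.Tensor._make_wrapper_subclass(
            cls, elem.size(),
            dtype=elem.dtype, layout=elem.layout,
            device=elem.device, requires_grad=elem.requires_grad,
        )
        r.elem = elem  # the real memory lives on the inner tensor
        return r

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        def unwrap(t):
            return t.elem if isinstance(t, LoggingTensor) else t

        def wrap(t):
            return LoggingTensor(t) if isinstance(t, torch.Tensor) else t

        print(f"op: {func}")  # the "logging" part
        out = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs or {}))
        return tree_map(wrap, out)

x = LoggingTensor(torch.ones(2))
y = x + x  # logs the aten op and returns a LoggingTensor
```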
Contributor

What happens if someone calls .storage() on a LoggingTensor now?

Contributor Author

You get a storage, but its data pointer is a null pointer and alas, you can probably induce a segfault this way. Good catch.

Contributor Author

Indeed it segfaults

>>> class A(torch.Tensor):
...   def __torch_dispatch__(): pass
... 
>>> torch.Tensor._make_wrapper_subclass(A, (2,)).storage()[0]
Segmentation fault (core dumped)

I checked whether meta tensors have this problem; they just error out when returning their storage, so I'll do the same here as a patch-up.
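With that patch-up merged, the repro above should raise a Python exception instead of crashing the process. The exact exception type is an assumption (it has differed across versions), so this sketch only records its name rather than asserting a specific type.

```python
import torch

class A(torch.Tensor):
    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        raise NotImplementedError

t = torch.Tensor._make_wrapper_subclass(A, (2,))
outcome = "no exception"
try:
    t.storage()[0]  # same access pattern as the segfaulting repro
except Exception as e:
    outcome = type(e).__name__  # e.g. RuntimeError or NotImplementedError
print("storage access ->", outcome)
```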

Contributor

@zou3519 zou3519 left a comment


This is cool! Will follow along with strides and view relationship handling (the lack of view relationship handling has been causing internal asserts with functorch on debug builds)

END_HANDLE_TH_ERRORS
}

static PyObject* THPVariable_make_wrapper_subclass(PyObject* _ignored, PyObject* args, PyObject* kwargs) {
Collaborator

Nit: write it as follows to avoid unused-variable warnings.

Suggested change
static PyObject* THPVariable_make_wrapper_subclass(PyObject* _ignored, PyObject* args, PyObject* kwargs) {
static PyObject* THPVariable_make_wrapper_subclass(PyObject*, PyObject* args, PyObject* kwargs) {

ezyang added a commit that referenced this pull request Sep 22, 2021

ghstack-source-id: dd257c9
Pull Request resolved: #65340
@ezyang
Contributor Author

ezyang commented Sep 22, 2021

@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@ezyang merged this pull request in 70a545b.

if (storage.device_type() == at::DeviceType::Meta) {
TORCH_CHECK_NOT_IMPLEMENTED(false, "python bindings for meta storage objects not supported");
}
if (storage.data() == nullptr && storage.nbytes() != 0) {
Contributor

We could instead call set_storage_access_should_throw

pytorch/c10/core/TensorImpl.h

Lines 2342 to 2344 in 760aefd

void set_storage_access_should_throw() {
storage_access_should_throw_ = true;
}
in THPVariable_make_wrapper_subclass

Contributor Author

The thing is, per #65339, in the long term we do want the storage to be represented adequately.
