Conversation

@IvanYashchuk (Collaborator) commented on Jan 7, 2022

The time has come to remove deprecated linear-algebra-related functions. This PR removes torch.lstsq.
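For reference, a minimal migration sketch from the removed op to its replacement, torch.linalg.lstsq (the tensors A and B below are illustrative; note that the argument order differs and that the old op packed the solution into the leading rows of its first return value):

import torch

A = torch.randn(5, 3)   # coefficient matrix
B = torch.randn(5, 2)   # right-hand sides

# Before (removed by this PR): reversed argument order, and only the
# first A.shape[1] rows of X held the least-squares solution.
# X, qr = torch.lstsq(B, A)
# solution = X[:A.shape[1]]

# After: torch.linalg.lstsq takes (A, B) and returns a named tuple.
solution = torch.linalg.lstsq(A, B).solution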

There's a note in tools/codegen/gen.py about the lstsq schema in native_functions.yaml that I will not remove:

# Note [name and field_name]
# ~~~~~~~~~~~~~~~~~~~~~~~~~~
# To understand name_to_field_name, we must first talk about this
# schema:
#
# lstsq.X(Tensor self, Tensor A, *, Tensor(a!) X, Tensor(b!) qr) -> (Tensor(a!) solution, Tensor(b!) QR)
#
# There is something very odd about this schema: it is an out
# variant of the function (that is to say, it will convert into
# at::lstsq_out() in the C++ API), but the names of the output
# return arguments don't match the keyword argument names of
# the inputs. It TURNS OUT that in this situation, the historical
# Declarations.yaml we want to output is this (abbreviated to
# only show relevant fields):
#
#   arguments:
#     ...
#     - field_name: solution
#       name: X
#     - field_name: QR
#       name: qr
#     ...
#
#   returns:
#     - field_name: solution
#       name: X
#     - field_name: QR
#       name: qr
#
# The name of the return fields is stored in 'field_name', and the
# name of the arguments is stored in 'name'. So when we process
# arguments, we need a way to get at the corresponding return. At
# the moment, this is most conveniently done by constructing a
# mapping from name (the argument concept) to field_name (the
# return concept) while processing return arguments, since we don't
# directly maintain this correspondence in the modeling of function
# schema itself.
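In other words, the mapping the note describes amounts to a small dict built while walking the returns and consulted when emitting the out arguments. A simplified, hypothetical sketch (not the actual gen.py code; the entries mirror the abbreviated Declarations.yaml above):

# Hypothetical illustration of the name -> field_name mapping described above.
returns = [
    {"name": "X", "field_name": "solution"},
    {"name": "qr", "field_name": "QR"},
]

# Built while processing the return arguments ...
name_to_field_name = {r["name"]: r["field_name"] for r in returns}

# ... and used when processing the out arguments, pairing the argument
# named 'X' with the return field 'solution' and 'qr' with 'QR'.
assert name_to_field_name["X"] == "solution"
assert name_to_field_name["qr"] == "QR"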

cc @jianyuh @nikitaved @pearu @mruberry @walterddr @IvanYashchuk @xwang233 @lezcano

@IvanYashchuk added the module: linear algebra and module: deprecation labels on Jan 7, 2022
@pytorch-probot bot assigned and unassigned pytorchbot on Jan 7, 2022
@pytorch-probot bot commented on Jan 7, 2022

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/IvanYashchuk/pytorch/blob/a65fb5e1f5969d4fec64ea5ed5d81fc27542eee5/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/cuda,ciflow/all

Workflow | Labels (bold = enabled) | Status
Triggered Workflows
caffe2-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk ✅ triggered
docker-builds ciflow/all, ciflow/trunk ✅ triggered
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk ✅ triggered
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk ✅ triggered
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk ✅ triggered
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk ✅ triggered
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk ✅ triggered
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk ✅ triggered
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk ✅ triggered
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk ✅ triggered
libtorch-linux-xenial-cuda10.2-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk ✅ triggered
libtorch-linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk ✅ triggered
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow, ciflow/trunk ✅ triggered
linux-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/trunk ✅ triggered
linux-docs ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux, ciflow/trunk ✅ triggered
linux-docs-push ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled ✅ triggered
linux-vulkan-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7-no-ops ciflow/all, ciflow/cuda, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7-no-ops ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
macos-10-15-py3-arm64 ciflow/all, ciflow/macos, ciflow/trunk ✅ triggered
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos, ciflow/trunk ✅ triggered
macos-11-py3-x86-64 ciflow/all, ciflow/macos, ciflow/trunk ✅ triggered
parallelnative-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk ✅ triggered
periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled ✅ triggered
periodic-libtorch-linux-xenial-cuda11.1-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled ✅ triggered
periodic-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled ✅ triggered
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck ✅ triggered
periodic-linux-xenial-cuda11.1-py3.7-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled ✅ triggered
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win ✅ triggered
periodic-win-vs2019-cuda11.5-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-build ciflow/all, ciflow/android, ciflow/cpu, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
Skipped Workflows

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot (Contributor) commented on Jan 7, 2022

🔗 Helpful links

💊 CI failures summary and remediations

As of commit aaf7113 (more details on the Dr. CI page):


  • 20/20 failures introduced in this PR

🕵️ 20 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-xenial-cuda11.3-py3.7-gcc7-bazel-test / build-and-test (1/20)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-03-14T12:27:08.6770050Z  echo "ERR...t available for the merge-base of your branch"
2022-03-14T12:27:08.6767237Z fi
2022-03-14T12:27:08.6767472Z # Covers the case where a previous tag doesn't exist for the tree
2022-03-14T12:27:08.6767808Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2022-03-14T12:27:08.6768129Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2022-03-14T12:27:08.6768465Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2022-03-14T12:27:08.6768727Z   exit 1
2022-03-14T12:27:08.6768887Z fi
2022-03-14T12:27:08.6769118Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2022-03-14T12:27:08.6769432Z # If no image exists but the hash is the same as the previous hash then we should error out here
2022-03-14T12:27:08.6769728Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2022-03-14T12:27:08.6770050Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2022-03-14T12:27:08.6770387Z   echo "       contact the PyTorch team to restore the original images"
2022-03-14T12:27:08.6770601Z   exit 1
2022-03-14T12:27:08.6770761Z fi
2022-03-14T12:27:08.6770955Z echo ::set-output name=rebuild::yes
2022-03-14T12:27:08.6781261Z shell: /usr/bin/bash -e {0}
2022-03-14T12:27:08.6781445Z env:
2022-03-14T12:27:08.6781716Z   BUILD_ENVIRONMENT: linux-xenial-cuda11.3-py3.7-gcc7-bazel-test
2022-03-14T12:27:08.6782118Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7
2022-03-14T12:27:08.6782493Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2022-03-14T12:27:08.6782799Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build periodic-win-vs2019-cuda11.5-py3 / build (2/20)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-03-14T13:18:05.5287470Z FAILED: caffe2/CMa...ATen/native/cuda/linalg/BatchLinearAlgebra.cpp.obj
2022-03-14T13:18:01.4285858Z 
2022-03-14T13:18:03.7433364Z [5639/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\include" -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\RegisterQuantizedCUDA.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\build\aten\src\ATen\RegisterQuantizedCUDA.cpp
2022-03-14T13:18:03.7450879Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2022-03-14T13:18:03.7452013Z Copyright (C) Microsoft Corporation.  All rights reserved.
2022-03-14T13:18:03.7452626Z 
2022-03-14T13:18:03.8873608Z [5640/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\randomtemp.exe C:/actions-runner/_work/pytorch/pytorch/build/win_tmp\bin\sccache.exe C:\PROGRA~1\NVIDIA~2\CUDA\v11.5\bin\nvcc.exe -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -isystem=C:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -isystem="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -isystem="C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -Xcompiler /w -w -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_70,code=sm_70 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl 
--Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda  -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -Xcompiler="-MD -O2 -Ob2" -DNDEBUG -Xcompiler /MD -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Xcompiler= -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std=c++14 -MD -MT caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\int_repr_quant.cu.obj -MF caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\int_repr_quant.cu.obj.d -x cu -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\quantized\cuda\int_repr_quant.cu -o caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\int_repr_quant.cu.obj -Xcompiler=-Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\,-FS
2022-03-14T13:18:03.8891904Z int_repr_quant.cu
2022-03-14T13:18:04.7827217Z [5641/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\randomtemp.exe C:/actions-runner/_work/pytorch/pytorch/build/win_tmp\bin\sccache.exe C:\PROGRA~1\NVIDIA~2\CUDA\v11.5\bin\nvcc.exe -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -isystem=C:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -isystem="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -isystem="C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -Xcompiler /w -w -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_70,code=sm_70 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl 
--Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda  -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -Xcompiler="-MD -O2 -Ob2" -DNDEBUG -Xcompiler /MD -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Xcompiler= -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std=c++14 -MD -MT caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\make_quantized_tensor.cu.obj -MF caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\make_quantized_tensor.cu.obj.d -x cu -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\quantized\cuda\make_quantized_tensor.cu -o caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\make_quantized_tensor.cu.obj -Xcompiler=-Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\,-FS
2022-03-14T13:18:04.7841151Z make_quantized_tensor.cu
2022-03-14T13:18:05.5276459Z [5642/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\include" -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\linalg\BatchLinearAlgebra.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\cuda\linalg\BatchLinearAlgebra.cpp
2022-03-14T13:18:05.5287470Z FAILED: caffe2/CMakeFiles/torch_cuda_cu.dir/__/aten/src/ATen/native/cuda/linalg/BatchLinearAlgebra.cpp.obj 
2022-03-14T13:18:05.5297986Z C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\include" -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\linalg\BatchLinearAlgebra.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\cuda\linalg\BatchLinearAlgebra.cpp
2022-03-14T13:18:05.5308358Z C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\cuda\linalg\BatchLinearAlgebra.cpp(35): fatal error C1083: Cannot open include file: 'ATen/ops/lstsq_native.h': No such file or directory
2022-03-14T13:18:08.9605352Z [5643/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\randomtemp.exe C:/actions-runner/_work/pytorch/pytorch/build/win_tmp\bin\sccache.exe C:\PROGRA~1\NVIDIA~2\CUDA\v11.5\bin\nvcc.exe -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -isystem=C:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -isystem="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -isystem="C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -Xcompiler /w -w -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_70,code=sm_70 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl 
--Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda  -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUB_WRAPPED_NAMESPACE=at_cuda_detail -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -Xcompiler="-MD -O2 -Ob2" -DNDEBUG -Xcompiler /MD -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Xcompiler= -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std=c++14 -MD -MT caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\affine_quantizer.cu.obj -MF caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\affine_quantizer.cu.obj.d -x cu -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\quantized\cuda\affine_quantizer.cu -o caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\affine_quantizer.cu.obj -Xcompiler=-Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\,-FS
2022-03-14T13:18:08.9619339Z affine_quantizer.cu
2022-03-14T13:18:10.4830053Z [5644/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\include" -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\RegisterSparseCsrCUDA.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\build\aten\src\ATen\RegisterSparseCsrCUDA.cpp
2022-03-14T13:18:10.4840188Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2022-03-14T13:18:10.4840770Z Copyright (C) Microsoft Corporation.  All rights reserved.
2022-03-14T13:18:10.4841120Z 
2022-03-14T13:18:36.6619765Z [5645/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\include" -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\RegisterCUDA.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\build\aten\src\ATen\RegisterCUDA.cpp
2022-03-14T13:18:36.6629682Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64

See GitHub Actions build parallelnative-linux-xenial-py3.7-gcc5.4 / test (default, 1, 1, linux.2xlarge) (3/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T14:06:23.3499212Z RuntimeError: test_torch failed!
2022-03-14T14:06:23.1555207Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorchDeviceTypeCPU-20220314140610.xml
2022-03-14T14:06:23.1556960Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCPU-20220314140610.xml
2022-03-14T14:06:23.3347017Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T14:06:23.3347403Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T14:06:23.3347693Z [TORCH_VITAL] CUDA.used		 False
2022-03-14T14:06:23.3494190Z Traceback (most recent call last):
2022-03-14T14:06:23.3494674Z   File "test/run_test.py", line 1049, in <module>
2022-03-14T14:06:23.3496814Z     main()
2022-03-14T14:06:23.3497021Z   File "test/run_test.py", line 1027, in main
2022-03-14T14:06:23.3499003Z     raise RuntimeError(err_message)
2022-03-14T14:06:23.3499212Z RuntimeError: test_torch failed!
2022-03-14T14:06:23.5713674Z + cleanup
2022-03-14T14:06:23.5714005Z + retcode=1
2022-03-14T14:06:23.5714298Z + set +x
2022-03-14T14:06:23.5758378Z ##[error]Process completed with exit code 1.
2022-03-14T14:06:23.5809703Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-03-14T14:06:23.5810067Z # Ensure the working directory gets chowned back to the current user
2022-03-14T14:06:23.5810399Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-03-14T14:06:23.5957459Z shell: /usr/bin/bash -e {0}
2022-03-14T14:06:23.5957650Z env:
2022-03-14T14:06:23.5957905Z   BUILD_ENVIRONMENT: parallelnative-linux-xenial-py3.7-gcc5.4

See GitHub Actions build win-vs2019-cpu-py3 / test (default, 2, 2, windows.4xlarge) (4/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T14:19:06.5928177Z RuntimeError: test_torch failed!
2022-03-14T14:19:06.3783383Z Generated XML report: test-reports\dist-gloo\test_torch\TEST-TestTorchDeviceTypeCPU-20220314141852.xml
2022-03-14T14:19:06.3784721Z Generated XML report: test-reports\dist-gloo\test_torch\TEST-TestVitalSignsCudaCPU-20220314141852.xml
2022-03-14T14:19:06.5687242Z [TORCH_VITAL] CUDA.used		 False
2022-03-14T14:19:06.5687784Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T14:19:06.5688384Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T14:19:06.5925696Z Traceback (most recent call last):
2022-03-14T14:19:06.5926484Z   File "run_test.py", line 1049, in <module>
2022-03-14T14:19:06.5926820Z     main()
2022-03-14T14:19:06.5927278Z   File "run_test.py", line 1027, in main
2022-03-14T14:19:06.5927766Z     raise RuntimeError(err_message)
2022-03-14T14:19:06.5928177Z RuntimeError: test_torch failed!
2022-03-14T14:19:06.8104294Z 
2022-03-14T14:19:06.8104960Z (base) C:\actions-runner\_work\pytorch\pytorch\test>popd
2022-03-14T14:19:06.8109190Z 
2022-03-14T14:19:06.8109642Z (base) C:\actions-runner\_work\pytorch\pytorch>if ERRORLEVEL 1 exit /b 1 
2022-03-14T14:19:06.8135185Z + cleanup
2022-03-14T14:19:06.8135489Z + retcode=1
2022-03-14T14:19:06.8135762Z + set +x
2022-03-14T14:19:06.8165416Z ##[error]Process completed with exit code 1.
2022-03-14T14:19:06.8316978Z ##[group]Run # -ir => recursive include all files in pattern
2022-03-14T14:19:06.8318154Z # -ir => recursive include all files in pattern

See GitHub Actions build linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 2, 2, linux.4xlarge.nvidia.gpu) (5/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T14:52:44.3609511Z RuntimeError: test_torch failed!
2022-03-14T14:52:43.7587339Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorchDeviceTypeCUDA-20220314145103.xml
2022-03-14T14:52:43.7592170Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCUDA-20220314145103.xml
2022-03-14T14:52:44.2157060Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T14:52:44.2157451Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T14:52:44.2157770Z [TORCH_VITAL] CUDA.used		 true
2022-03-14T14:52:44.3602739Z Traceback (most recent call last):
2022-03-14T14:52:44.3603122Z   File "test/run_test.py", line 1049, in <module>
2022-03-14T14:52:44.3605759Z     main()
2022-03-14T14:52:44.3606402Z   File "test/run_test.py", line 1027, in main
2022-03-14T14:52:44.3609138Z     raise RuntimeError(err_message)
2022-03-14T14:52:44.3609511Z RuntimeError: test_torch failed!
2022-03-14T14:52:44.9395234Z + cleanup
2022-03-14T14:52:44.9395506Z + retcode=1
2022-03-14T14:52:44.9395738Z + set +x
2022-03-14T14:52:44.9460627Z ##[error]Process completed with exit code 1.
2022-03-14T14:52:44.9502176Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-03-14T14:52:44.9502658Z # Ensure the working directory gets chowned back to the current user
2022-03-14T14:52:44.9503089Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-03-14T14:52:44.9517612Z shell: /usr/bin/bash -e {0}
2022-03-14T14:52:44.9517850Z env:
2022-03-14T14:52:44.9518166Z   BUILD_ENVIRONMENT: linux-xenial-cuda11.3-py3.7-gcc7

See GitHub Actions build libtorch-linux-xenial-cuda11.3-py3.7-gcc7 / build (6/20)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-03-14T12:28:19.8981514Z  echo "ERR...t available for the merge-base of your branch"
2022-03-14T12:28:19.8978700Z fi
2022-03-14T12:28:19.8978936Z # Covers the case where a previous tag doesn't exist for the tree
2022-03-14T12:28:19.8979272Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2022-03-14T12:28:19.8979594Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2022-03-14T12:28:19.8979933Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2022-03-14T12:28:19.8980196Z   exit 1
2022-03-14T12:28:19.8980355Z fi
2022-03-14T12:28:19.8980586Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2022-03-14T12:28:19.8980894Z # If no image exists but the hash is the same as the previous hash then we should error out here
2022-03-14T12:28:19.8981192Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2022-03-14T12:28:19.8981514Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2022-03-14T12:28:19.8981977Z   echo "       contact the PyTorch team to restore the original images"
2022-03-14T12:28:19.8982223Z   exit 1
2022-03-14T12:28:19.8982384Z fi
2022-03-14T12:28:19.8982566Z echo ::set-output name=rebuild::yes
2022-03-14T12:28:19.8993154Z shell: /usr/bin/bash -e {0}
2022-03-14T12:28:19.8993335Z env:
2022-03-14T12:28:19.8993586Z   BUILD_ENVIRONMENT: libtorch-linux-xenial-cuda11.3-py3.7-gcc7
2022-03-14T12:28:19.8993969Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7
2022-03-14T12:28:19.8994338Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2022-03-14T12:28:19.8994655Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build linux-xenial-py3.7-gcc7 / test (default, 2, 2, linux.2xlarge) (7/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T13:44:09.5645478Z RuntimeError: test_torch failed!
2022-03-14T13:44:09.3845366Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorchDeviceTypeCPU-20220314134354.xml
2022-03-14T13:44:09.3847789Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCPU-20220314134354.xml
2022-03-14T13:44:09.5490500Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T13:44:09.5490938Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T13:44:09.5491171Z [TORCH_VITAL] CUDA.used		 False
2022-03-14T13:44:09.5632786Z Traceback (most recent call last):
2022-03-14T13:44:09.5633061Z   File "test/run_test.py", line 1049, in <module>
2022-03-14T13:44:09.5643166Z     main()
2022-03-14T13:44:09.5643520Z   File "test/run_test.py", line 1027, in main
2022-03-14T13:44:09.5645026Z     raise RuntimeError(err_message)
2022-03-14T13:44:09.5645478Z RuntimeError: test_torch failed!
2022-03-14T13:44:09.7660859Z + cleanup
2022-03-14T13:44:09.7661137Z + retcode=1
2022-03-14T13:44:09.7661327Z + set +x
2022-03-14T13:44:09.7704967Z ##[error]Process completed with exit code 1.
2022-03-14T13:44:09.7740986Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-03-14T13:44:09.7741392Z # Ensure the working directory gets chowned back to the current user
2022-03-14T13:44:09.7741844Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-03-14T13:44:09.7760178Z shell: /usr/bin/bash -e {0}
2022-03-14T13:44:09.7769636Z env:
2022-03-14T13:44:09.7769908Z   BUILD_ENVIRONMENT: linux-xenial-py3.7-gcc7

See GitHub Actions build linux-bionic-rocm4.5-py3.7 / build (8/20)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-03-14T12:27:11.0597375Z  echo "ERR...t available for the merge-base of your branch"
2022-03-14T12:27:11.0594494Z fi
2022-03-14T12:27:11.0594795Z # Covers the case where a previous tag doesn't exist for the tree
2022-03-14T12:27:11.0595132Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2022-03-14T12:27:11.0595453Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2022-03-14T12:27:11.0595790Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2022-03-14T12:27:11.0596050Z   exit 1
2022-03-14T12:27:11.0596211Z fi
2022-03-14T12:27:11.0596440Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2022-03-14T12:27:11.0596750Z # If no image exists but the hash is the same as the previous hash then we should error out here
2022-03-14T12:27:11.0597049Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2022-03-14T12:27:11.0597375Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2022-03-14T12:27:11.0597716Z   echo "       contact the PyTorch team to restore the original images"
2022-03-14T12:27:11.0597929Z   exit 1
2022-03-14T12:27:11.0598091Z fi
2022-03-14T12:27:11.0598284Z echo ::set-output name=rebuild::yes
2022-03-14T12:27:11.0608370Z shell: /usr/bin/bash -e {0}
2022-03-14T12:27:11.0608552Z env:
2022-03-14T12:27:11.0608754Z   BUILD_ENVIRONMENT: linux-bionic-rocm4.5-py3.7
2022-03-14T12:27:11.0609102Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-rocm4.5-py3.7
2022-03-14T12:27:11.0609452Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2022-03-14T12:27:11.0609755Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build linux-bionic-rocm4.5-py3.7-distributed / build (9/20)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-03-14T12:27:11.5994767Z  echo "ERR...t available for the merge-base of your branch"
2022-03-14T12:27:11.5991825Z fi
2022-03-14T12:27:11.5992085Z # Covers the case where a previous tag doesn't exist for the tree
2022-03-14T12:27:11.5992424Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2022-03-14T12:27:11.5992755Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2022-03-14T12:27:11.5993111Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2022-03-14T12:27:11.5993403Z   exit 1
2022-03-14T12:27:11.5993557Z fi
2022-03-14T12:27:11.5993796Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2022-03-14T12:27:11.5994131Z # If no image exists but the hash is the same as the previous hash then we should error out here
2022-03-14T12:27:11.5994428Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2022-03-14T12:27:11.5994767Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2022-03-14T12:27:11.5995118Z   echo "       contact the PyTorch team to restore the original images"
2022-03-14T12:27:11.5995343Z   exit 1
2022-03-14T12:27:11.5995509Z fi
2022-03-14T12:27:11.5995710Z echo ::set-output name=rebuild::yes
2022-03-14T12:27:11.6005575Z shell: /usr/bin/bash -e {0}
2022-03-14T12:27:11.6005760Z env:
2022-03-14T12:27:11.6006006Z   BUILD_ENVIRONMENT: linux-bionic-rocm4.5-py3.7-distributed
2022-03-14T12:27:11.6006382Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-rocm4.5-py3.7
2022-03-14T12:27:11.6006746Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2022-03-14T12:27:11.6007075Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build linux-xenial-py3.7-gcc5.4 / test (default, 1, 2, linux.2xlarge) (10/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T13:32:13.0307998Z RuntimeError: test_torch failed!
2022-03-14T13:32:12.8610237Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorchDeviceTypeCPU-20220314133200.xml
2022-03-14T13:32:12.8612643Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCPU-20220314133200.xml
2022-03-14T13:32:13.0170398Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T13:32:13.0170734Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T13:32:13.0170982Z [TORCH_VITAL] CUDA.used		 False
2022-03-14T13:32:13.0302991Z Traceback (most recent call last):
2022-03-14T13:32:13.0303231Z   File "test/run_test.py", line 1049, in <module>
2022-03-14T13:32:13.0305627Z     main()
2022-03-14T13:32:13.0305975Z   File "test/run_test.py", line 1027, in main
2022-03-14T13:32:13.0307799Z     raise RuntimeError(err_message)
2022-03-14T13:32:13.0307998Z RuntimeError: test_torch failed!
2022-03-14T13:32:13.2291544Z + cleanup
2022-03-14T13:32:13.2291823Z + retcode=1
2022-03-14T13:32:13.2292054Z + set +x
2022-03-14T13:32:13.2332948Z ##[error]Process completed with exit code 1.
2022-03-14T13:32:13.2365423Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-03-14T13:32:13.2365763Z # Ensure the working directory gets chowned back to the current user
2022-03-14T13:32:13.2366075Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-03-14T13:32:13.2378306Z shell: /usr/bin/bash -e {0}
2022-03-14T13:32:13.2378568Z env:
2022-03-14T13:32:13.2378769Z   BUILD_ENVIRONMENT: linux-xenial-py3.7-gcc5.4

See GitHub Actions build macos-11-py3-x86-64 / test (default, 1, 2, macos-11) (11/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T14:46:38.2200100Z [E request_callbac...yUniqueId(created_on=0, local_id=0) to be created.
2022-03-14T14:46:34.4092860Z INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/tmp_4dryc3n
2022-03-14T14:46:34.4150700Z INFO:torch.distributed.nn.jit.instantiator:Writing /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/tmp_4dryc3n/_remote_module_non_sriptable.py
2022-03-14T14:46:34.4449410Z INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/tmp2v6u6qgq
2022-03-14T14:46:34.4462820Z INFO:torch.distributed.nn.jit.instantiator:Writing /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/tmp2v6u6qgq/_remote_module_non_sriptable.py
2022-03-14T14:46:34.4659770Z INFO:torch.distributed.nn.jit.instantiator:Created a temporary directory at /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/tmpwbppp_5y
2022-03-14T14:46:34.4755600Z INFO:torch.distributed.nn.jit.instantiator:Writing /var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/tmpwbppp_5y/_remote_module_non_sriptable.py
2022-03-14T14:46:34.9853580Z INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 0
2022-03-14T14:46:35.0317520Z INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 1
2022-03-14T14:46:35.0508890Z INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 3
2022-03-14T14:46:35.0537630Z INFO:torch.testing._internal.common_distributed:Starting event listener thread for rank 2
2022-03-14T14:46:38.2200100Z [E request_callback_no_python.cpp:559] Received error while processing request type 261: false INTERNAL ASSERT FAILED at "../torch/csrc/distributed/rpc/rref_context.cpp":387, please report a bug to PyTorch. Expected OwnerRRef with id GloballyUniqueId(created_on=0, local_id=0) to be created.
2022-03-14T14:46:38.2200810Z Exception raised from getOwnerRRef at ../torch/csrc/distributed/rpc/rref_context.cpp:387 (most recent call first):
2022-03-14T14:46:38.2201300Z frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) + 98 (0x10bb8e532 in libc10.dylib)
2022-03-14T14:46:38.2202560Z frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 106 (0x10bb8ccaa in libc10.dylib)
2022-03-14T14:46:38.2203190Z frame #2: c10::detail::torchInternalAssertFail(char const*, char const*, unsigned int, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 64 (0x10bb8cee0 in libc10.dylib)
2022-03-14T14:46:38.2203750Z frame #3: torch::distributed::rpc::RRefContext::getOwnerRRef(torch::distributed::rpc::GloballyUniqueId const&, bool) + 1587 (0x110376aa3 in libtorch_cpu.dylib)
2022-03-14T14:46:38.2204440Z frame #4: torch::distributed::rpc::RequestCallbackNoPython::assignOwnerRRef(torch::distributed::rpc::GloballyUniqueId const&, torch::distributed::rpc::GloballyUniqueId const&, c10::intrusive_ptr<c10::ivalue::Future, c10::detail::intrusive_target_default_null_type<c10::ivalue::Future> >) const + 84 (0x110361394 in libtorch_cpu.dylib)
2022-03-14T14:46:38.2205600Z frame #5: torch::distributed::rpc::RequestCallbackImpl::processPythonRemoteCall(torch::distributed::rpc::RpcCommandBase&, std::__1::vector<c10::Stream, std::__1::allocator<c10::Stream> >) const + 179 (0x10b3150c3 in libtorch_python.dylib)
2022-03-14T14:46:38.2206290Z frame #6: torch::distributed::rpc::RequestCallbackNoPython::processRpc(torch::distributed::rpc::RpcCommandBase&, torch::distributed::rpc::MessageType const&, std::__1::vector<c10::Stream, std::__1::allocator<c10::Stream> >) const + 617 (0x110360059 in libtorch_cpu.dylib)
2022-03-14T14:46:38.2207030Z frame #7: torch::distributed::rpc::RequestCallbackImpl::processRpcWithErrors(torch::distributed::rpc::RpcCommandBase&, torch::distributed::rpc::MessageType const&, std::__1::vector<c10::Stream, std::__1::allocator<c10::Stream> >) const + 74 (0x10b315c7a in libtorch_python.dylib)
2022-03-14T14:46:38.2209350Z frame #8: c10::intrusive_ptr<c10::ivalue::Future, c10::detail::intrusive_target_default_null_type<c10::ivalue::Future> > c10::ivalue::Future::thenAsync<torch::distributed::rpc::RequestCallbackNoPython::processMessage(torch::distributed::rpc::Message&, std::__1::vector<c10::Stream, std::__1::allocator<c10::Stream> >) const::$_1>(torch::distributed::rpc::RequestCallbackNoPython::processMessage(torch::distributed::rpc::Message&, std::__1::vector<c10::Stream, std::__1::allocator<c10::Stream> >) const::$_1, c10::Type::SingletonOrSharedTypePtr<c10::Type>)::'lambda'(c10::ivalue::Future&)::operator()(c10::ivalue::Future&) + 223 (0x110367edf in libtorch_cpu.dylib)

See GitHub Actions build linux-bionic-py3.7-clang9 / test (noarch, 1, 1, linux.2xlarge) (12/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T14:12:54.8500434Z RuntimeError: test_torch failed!
2022-03-14T14:12:54.6488305Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCPU-20220314141240.xml
2022-03-14T14:12:54.6506209Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaMETA-20220314141240.xml
2022-03-14T14:12:54.8342515Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T14:12:54.8342991Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T14:12:54.8343227Z [TORCH_VITAL] CUDA.used		 False
2022-03-14T14:12:54.8495275Z Traceback (most recent call last):
2022-03-14T14:12:54.8495768Z   File "test/run_test.py", line 1049, in <module>
2022-03-14T14:12:54.8497430Z     main()
2022-03-14T14:12:54.8497607Z   File "test/run_test.py", line 1027, in main
2022-03-14T14:12:54.8500003Z     raise RuntimeError(err_message)
2022-03-14T14:12:54.8500434Z RuntimeError: test_torch failed!
2022-03-14T14:12:55.0611863Z 
2022-03-14T14:12:55.0612188Z real	60m57.721s
2022-03-14T14:12:55.0612488Z user	134m23.210s
2022-03-14T14:12:55.0612778Z sys	12m28.857s
2022-03-14T14:12:55.0613050Z + cleanup
2022-03-14T14:12:55.0613330Z + retcode=1
2022-03-14T14:12:55.0613490Z + set +x
2022-03-14T14:12:55.0654315Z ##[error]Process completed with exit code 1.
2022-03-14T14:12:55.0732385Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-03-14T14:12:55.0732732Z # Ensure the working directory gets chowned back to the current user

See GitHub Actions build periodic-linux-xenial-cuda11.3-py3.7-gcc7-debug / test (default, 2, 2, linux.4xlarge.nvidia.gpu) (13/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T14:50:17.1882380Z RuntimeError: test_torch failed!
2022-03-14T14:50:16.6523397Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorchDeviceTypeCUDA-20220314144842.xml
2022-03-14T14:50:16.6526841Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCUDA-20220314144842.xml
2022-03-14T14:50:17.0497689Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T14:50:17.0498106Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T14:50:17.0498409Z [TORCH_VITAL] CUDA.used		 true
2022-03-14T14:50:17.1873190Z Traceback (most recent call last):
2022-03-14T14:50:17.1873664Z   File "test/run_test.py", line 1049, in <module>
2022-03-14T14:50:17.1877757Z     main()
2022-03-14T14:50:17.1878203Z   File "test/run_test.py", line 1027, in main
2022-03-14T14:50:17.1881870Z     raise RuntimeError(err_message)
2022-03-14T14:50:17.1882380Z RuntimeError: test_torch failed!
2022-03-14T14:50:17.6721725Z + cleanup
2022-03-14T14:50:17.6722275Z + retcode=1
2022-03-14T14:50:17.6722727Z + set +x
2022-03-14T14:50:17.6781379Z ##[error]Process completed with exit code 1.
2022-03-14T14:50:17.6819550Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-03-14T14:50:17.6820006Z # Ensure the working directory gets chowned back to the current user
2022-03-14T14:50:17.6820405Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-03-14T14:50:17.6834411Z shell: /usr/bin/bash -e {0}
2022-03-14T14:50:17.6834639Z env:
2022-03-14T14:50:17.6834986Z   BUILD_ENVIRONMENT: periodic-linux-xenial-cuda11.3-py3.7-gcc7-debug

See GitHub Actions build libtorch-linux-xenial-cuda10.2-py3.7-gcc7 / build (14/20)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-03-14T12:28:17.5629439Z  echo "ERR...t available for the merge-base of your branch"
2022-03-14T12:28:17.5626595Z fi
2022-03-14T12:28:17.5626834Z # Covers the case where a previous tag doesn't exist for the tree
2022-03-14T12:28:17.5627172Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2022-03-14T12:28:17.5627495Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2022-03-14T12:28:17.5627835Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2022-03-14T12:28:17.5628101Z   exit 1
2022-03-14T12:28:17.5628265Z fi
2022-03-14T12:28:17.5628497Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2022-03-14T12:28:17.5628810Z # If no image exists but the hash is the same as the previous hash then we should error out here
2022-03-14T12:28:17.5629110Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2022-03-14T12:28:17.5629439Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2022-03-14T12:28:17.5629776Z   echo "       contact the PyTorch team to restore the original images"
2022-03-14T12:28:17.5629991Z   exit 1
2022-03-14T12:28:17.5630151Z fi
2022-03-14T12:28:17.5630332Z echo ::set-output name=rebuild::yes
2022-03-14T12:28:17.5640839Z shell: /usr/bin/bash -e {0}
2022-03-14T12:28:17.5641022Z env:
2022-03-14T12:28:17.5641274Z   BUILD_ENVIRONMENT: libtorch-linux-xenial-cuda10.2-py3.7-gcc7
2022-03-14T12:28:17.5641664Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7
2022-03-14T12:28:17.5642038Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2022-03-14T12:28:17.5642355Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build linux-bionic-py3.7-clang9 / test (default, 2, 2, linux.2xlarge) (15/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T13:42:36.9447704Z RuntimeError: test_torch failed!
2022-03-14T13:42:36.7427493Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorchDeviceTypeCPU-20220314134224.xml
2022-03-14T13:42:36.7430178Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCPU-20220314134224.xml
2022-03-14T13:42:36.9286537Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T13:42:36.9286988Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T13:42:36.9287335Z [TORCH_VITAL] CUDA.used		 False
2022-03-14T13:42:36.9442788Z Traceback (most recent call last):
2022-03-14T13:42:36.9443069Z   File "test/run_test.py", line 1049, in <module>
2022-03-14T13:42:36.9445390Z     main()
2022-03-14T13:42:36.9446955Z   File "test/run_test.py", line 1027, in main
2022-03-14T13:42:36.9447351Z     raise RuntimeError(err_message)
2022-03-14T13:42:36.9447704Z RuntimeError: test_torch failed!
2022-03-14T13:42:37.1751996Z 
2022-03-14T13:42:37.1752299Z real	30m52.983s
2022-03-14T13:42:37.1752527Z user	75m52.603s
2022-03-14T13:42:37.1752701Z sys	9m0.313s
2022-03-14T13:42:37.1752890Z + cleanup
2022-03-14T13:42:37.1753076Z + retcode=1
2022-03-14T13:42:37.1753228Z + set +x
2022-03-14T13:42:37.1795171Z ##[error]Process completed with exit code 1.
2022-03-14T13:42:37.1849103Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-03-14T13:42:37.1849450Z # Ensure the working directory gets chowned back to the current user

See GitHub Actions build linux-bionic-cuda10.2-py3.9-gcc7 / test (default, 1, 2, linux.4xlarge.nvidia.gpu) (16/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T14:29:22.5742657Z RuntimeError: test_torch failed!
2022-03-14T14:29:22.2150361Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorchDeviceTypeCUDA-20220314142750.xml
2022-03-14T14:29:22.2154359Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCUDA-20220314142750.xml
2022-03-14T14:29:22.4381104Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T14:29:22.4381504Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T14:29:22.4381828Z [TORCH_VITAL] CUDA.used		 true
2022-03-14T14:29:22.5737219Z Traceback (most recent call last):
2022-03-14T14:29:22.5737682Z   File "/var/lib/jenkins/workspace/test/run_test.py", line 1049, in <module>
2022-03-14T14:29:22.5739978Z     main()
2022-03-14T14:29:22.5740296Z   File "/var/lib/jenkins/workspace/test/run_test.py", line 1027, in main
2022-03-14T14:29:22.5742321Z     raise RuntimeError(err_message)
2022-03-14T14:29:22.5742657Z RuntimeError: test_torch failed!
2022-03-14T14:29:23.0533385Z 
2022-03-14T14:29:23.0533714Z real	53m28.596s
2022-03-14T14:29:23.0533999Z user	52m41.285s
2022-03-14T14:29:23.0534238Z sys	5m8.709s
2022-03-14T14:29:23.0534451Z + cleanup
2022-03-14T14:29:23.0534669Z + retcode=1
2022-03-14T14:29:23.0534921Z + set +x
2022-03-14T14:29:23.0593455Z ##[error]Process completed with exit code 1.
2022-03-14T14:29:23.0632627Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-03-14T14:29:23.0633120Z # Ensure the working directory gets chowned back to the current user

See GitHub Actions build periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 / build (17/20)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-03-14T12:28:28.2804520Z  echo "ERR...t available for the merge-base of your branch"
2022-03-14T12:28:28.2801547Z fi
2022-03-14T12:28:28.2801803Z # Covers the case where a previous tag doesn't exist for the tree
2022-03-14T12:28:28.2802144Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2022-03-14T12:28:28.2802485Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2022-03-14T12:28:28.2802840Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2022-03-14T12:28:28.2803131Z   exit 1
2022-03-14T12:28:28.2803287Z fi
2022-03-14T12:28:28.2803533Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2022-03-14T12:28:28.2803875Z # If no image exists but the hash is the same as the previous hash then we should error out here
2022-03-14T12:28:28.2804177Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2022-03-14T12:28:28.2804520Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2022-03-14T12:28:28.2804881Z   echo "       contact the PyTorch team to restore the original images"
2022-03-14T12:28:28.2805107Z   exit 1
2022-03-14T12:28:28.2805276Z fi
2022-03-14T12:28:28.2805482Z echo ::set-output name=rebuild::yes
2022-03-14T12:28:28.2815951Z shell: /usr/bin/bash -e {0}
2022-03-14T12:28:28.2816142Z env:
2022-03-14T12:28:28.2816433Z   BUILD_ENVIRONMENT: periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7
2022-03-14T12:28:28.2816980Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda11.5-cudnn8-py3-gcc7
2022-03-14T12:28:28.2817378Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2022-03-14T12:28:28.2817713Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build periodic-linux-bionic-cuda11.5-py3.7-gcc7 / test (default, 2, 2, linux.4xlarge.nvidia.gpu) (18/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T14:34:29.2624168Z RuntimeError: test_torch failed!
2022-03-14T14:34:28.7638645Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorchDeviceTypeCUDA-20220314143257.xml
2022-03-14T14:34:28.7641939Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCUDA-20220314143257.xml
2022-03-14T14:34:29.1284686Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T14:34:29.1285114Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T14:34:29.1285440Z [TORCH_VITAL] CUDA.used		 true
2022-03-14T14:34:29.2616873Z Traceback (most recent call last):
2022-03-14T14:34:29.2617230Z   File "test/run_test.py", line 1049, in <module>
2022-03-14T14:34:29.2620497Z     main()
2022-03-14T14:34:29.2620766Z   File "test/run_test.py", line 1027, in main
2022-03-14T14:34:29.2623834Z     raise RuntimeError(err_message)
2022-03-14T14:34:29.2624168Z RuntimeError: test_torch failed!
2022-03-14T14:34:29.7309695Z 
2022-03-14T14:34:29.7310477Z real	54m45.871s
2022-03-14T14:34:29.7311029Z user	73m14.406s
2022-03-14T14:34:29.7311441Z sys	5m40.620s
2022-03-14T14:34:29.7311841Z + cleanup
2022-03-14T14:34:29.7312066Z + retcode=1
2022-03-14T14:34:29.7312275Z + set +x
2022-03-14T14:34:29.7369883Z ##[error]Process completed with exit code 1.
2022-03-14T14:34:29.7407624Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-03-14T14:34:29.7408090Z # Ensure the working directory gets chowned back to the current user

See GitHub Actions build linux-xenial-py3.7-clang7-asan / test (default, 3, 3, linux.2xlarge) (19/20)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-03-14T14:45:45.1137147Z RuntimeError: test_torch failed!
2022-03-14T14:45:44.7435053Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestTorchDeviceTypeCPU-20220314144459.xml
2022-03-14T14:45:44.7437643Z Generated XML report: test-reports/python-unittest/test_torch/TEST-TestVitalSignsCudaCPU-20220314144459.xml
2022-03-14T14:45:45.0214862Z [TORCH_VITAL] Dataloader.enabled		 True
2022-03-14T14:45:45.0215402Z [TORCH_VITAL] Dataloader.basic_unit_test		 TEST_VALUE_STRING
2022-03-14T14:45:45.0215641Z [TORCH_VITAL] CUDA.used		 False
2022-03-14T14:45:45.1129096Z Traceback (most recent call last):
2022-03-14T14:45:45.1129577Z   File "test/run_test.py", line 1049, in <module>
2022-03-14T14:45:45.1132826Z     main()
2022-03-14T14:45:45.1133080Z   File "test/run_test.py", line 1027, in main
2022-03-14T14:45:45.1136903Z     raise RuntimeError(err_message)
2022-03-14T14:45:45.1137147Z RuntimeError: test_torch failed!
2022-03-14T14:45:45.4847487Z + cleanup
2022-03-14T14:45:45.4847798Z + retcode=1
2022-03-14T14:45:45.4848068Z + set +x
2022-03-14T14:45:45.4889271Z ##[error]Process completed with exit code 1.
2022-03-14T14:45:45.4920234Z ##[group]Run # Ensure the working directory gets chowned back to the current user
2022-03-14T14:45:45.4920579Z # Ensure the working directory gets chowned back to the current user
2022-03-14T14:45:45.4920891Z docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
2022-03-14T14:45:45.4938248Z shell: /usr/bin/bash -e {0}
2022-03-14T14:45:45.4938429Z env:
2022-03-14T14:45:45.4938636Z   BUILD_ENVIRONMENT: linux-xenial-py3.7-clang7-asan

See GitHub Actions build win-vs2019-cuda11.3-py3 / build (20/20)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-03-14T13:16:52.9346099Z FAILED: caffe2/CMa...ATen/native/cuda/linalg/BatchLinearAlgebra.cpp.obj
2022-03-14T13:16:41.8691320Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2022-03-14T13:16:41.8691852Z Copyright (C) Microsoft Corporation.  All rights reserved.
2022-03-14T13:16:41.8692176Z 
2022-03-14T13:16:47.9113171Z [5649/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\RegisterSparseCsrCUDA.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\build\aten\src\ATen\RegisterSparseCsrCUDA.cpp
2022-03-14T13:16:47.9127668Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2022-03-14T13:16:47.9128511Z Copyright (C) Microsoft Corporation.  All rights reserved.
2022-03-14T13:16:47.9129031Z 
2022-03-14T13:16:52.0358535Z [5650/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\randomtemp.exe C:/actions-runner/_work/pytorch/pytorch/build/win_tmp\bin\sccache.exe C:\PROGRA~1\NVIDIA~2\CUDA\v11.3\bin\nvcc.exe -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -isystem=C:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -isystem="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -isystem="C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -Xcompiler /w -w -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_70,code=sm_70 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl 
--Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda  -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -Xcompiler="-MD -O2 -Ob2" -DNDEBUG -Xcompiler /MD -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Xcompiler= -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std=c++14 -MD -MT caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\make_quantized_tensor.cu.obj -MF caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\make_quantized_tensor.cu.obj.d -x cu -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\quantized\cuda\make_quantized_tensor.cu -o caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\make_quantized_tensor.cu.obj -Xcompiler=-Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\,-FS
2022-03-14T13:16:52.0383392Z make_quantized_tensor.cu
2022-03-14T13:16:52.9327515Z [5651/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\linalg\BatchLinearAlgebra.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\cuda\linalg\BatchLinearAlgebra.cpp
2022-03-14T13:16:52.9346099Z FAILED: caffe2/CMakeFiles/torch_cuda_cu.dir/__/aten/src/ATen/native/cuda/linalg/BatchLinearAlgebra.cpp.obj 
2022-03-14T13:16:52.9365068Z C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\linalg\BatchLinearAlgebra.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\cuda\linalg\BatchLinearAlgebra.cpp
2022-03-14T13:16:52.9383780Z C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\cuda\linalg\BatchLinearAlgebra.cpp(35): fatal error C1083: Cannot open include file: 'ATen/ops/lstsq_native.h': No such file or directory
2022-03-14T13:16:56.2598751Z [5652/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\RegisterSparseCUDA.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\build\aten\src\ATen\RegisterSparseCUDA.cpp
2022-03-14T13:16:56.2617555Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2022-03-14T13:16:56.2618594Z Copyright (C) Microsoft Corporation.  All rights reserved.
2022-03-14T13:16:56.2619228Z 
2022-03-14T13:17:02.8788387Z [5653/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\randomtemp.exe C:/actions-runner/_work/pytorch/pytorch/build/win_tmp\bin\sccache.exe C:\PROGRA~1\NVIDIA~2\CUDA\v11.3\bin\nvcc.exe -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -isystem=C:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -isystem="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -isystem="C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -Xcompiler /w -w -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_70,code=sm_70 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl 
--Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda  -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -Xcompiler="-MD -O2 -Ob2" -DNDEBUG -Xcompiler /MD -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Xcompiler= -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std=c++14 -MD -MT caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\affine_quantizer.cu.obj -MF caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\affine_quantizer.cu.obj.d -x cu -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\quantized\cuda\affine_quantizer.cu -o caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\quantized\cuda\affine_quantizer.cu.obj -Xcompiler=-Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\,-FS
2022-03-14T13:17:02.8801448Z affine_quantizer.cu
2022-03-14T13:17:06.9473080Z [5654/6218] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -I"C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\RegisterCUDA.cpp.obj /Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\build\aten\src\ATen\RegisterCUDA.cpp
2022-03-14T13:17:06.9482462Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@dagitses dagitses added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jan 10, 2022
Copy link
Collaborator

@lezcano lezcano left a comment


Just left a small comment out of curiosity, but it LGTM


// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ legacy_lstsq ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

std::tuple<Tensor, Tensor> legacy_lstsq_cuda(const Tensor &B, const Tensor &A) {
Copy link
Collaborator


This seems a bit odd. Did we reimplement all the calling code for magmaGels within linalg_lstsq_cuda? I guess we did because of the TORCH_WARN_ONCE that's here, but that seems so odd.

lstsq is a mess; it could do with a reasonable clean-up.

Copy link
Collaborator Author


Porting the TH version of lstsq to ATen was done independently of implementing torch.linalg.lstsq.

Copy link
Collaborator


Fantastic to get rid of this code then (same applies to all the other PRs, but to this one in particular).
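For anyone landing here after the removal, here is a rough migration sketch from the deprecated API to torch.linalg.lstsq (not taken from this PR's diff; shapes and tensor names are illustrative):

```python
import torch

A = torch.randn(6, 4)   # coefficient matrix (m x n, tall)
b = torch.randn(6, 2)   # right-hand sides

# Removed API: torch.lstsq took the arguments as (B, A) and returned a
# solution tensor padded to max(m, n) rows, so the answer had to be sliced:
#   solution, _ = torch.lstsq(b, A)
#   x_old = solution[:A.shape[1]]

# Replacement: torch.linalg.lstsq takes (A, B) and returns the solution directly.
x_new = torch.linalg.lstsq(A, b).solution
print(x_new.shape)  # torch.Size([4, 2])
```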

@IvanYashchuk
Copy link
Collaborator Author

@pytorchbot ciflow rerun

@pytorch-probot pytorch-probot bot assigned pytorchbot and unassigned pytorchbot Jan 11, 2022
@de-gozaru
Copy link

de-gozaru commented Jan 12, 2022

Hi @IvanYashchuk,

Is this #71222 a known issue?

@nikitaved
Copy link
Collaborator

nikitaved commented Jan 12, 2022

@de-gozaru, I have had some issues with the determinism of linalg.lstsq while working on its forward/backward ADs. If my memory serves me well, it was the case only with wide or tall matrices.
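A minimal sketch of the kind of check being described (assuming CPU tensors and the default driver; not taken from this PR):

```python
import torch

torch.manual_seed(0)
A = torch.randn(8, 3)   # tall matrix; the wide case would be e.g. (3, 8)
B = torch.randn(8, 2)

# Solve the same least-squares problem twice and compare the results.
first = torch.linalg.lstsq(A, B).solution
second = torch.linalg.lstsq(A, B).solution

# Bitwise equality is the strictest form of the determinism being discussed.
print(torch.equal(first, second))
```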

@IvanYashchuk
Copy link
Collaborator Author

@pytorchbot merge -g

@pytorchmergebot
Copy link
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the green (-g) flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@pytorchmergebot
Copy link
Collaborator

Merge failed

Reason: This PR is too stale; the last push date was more than 3 days ago. Please rebase and try again.

Details for Dev Infra team: Raised by workflow job

@IvanYashchuk
Copy link
Collaborator Author

@pytorchbot merge

@pytorchmergebot
Copy link
Collaborator

Merge failed

Reason: The following mandatory check(s) are pending/not yet run (Rule superuser):

  • Facebook CLA Check

Dig deeper by viewing the pending checks on hud

Details for Dev Infra team: Raised by workflow job

@IvanYashchuk
Copy link
Collaborator Author

@pytorchbot merge -g

@pytorchmergebot
Copy link
Collaborator

Merge failed

Reason: The following mandatory check(s) are pending/not yet run (Rule superuser):

  • Facebook CLA Check

Dig deeper by viewing the pending checks on hud

Details for Dev Infra team: Raised by workflow job

@facebook-github-bot
Copy link
Contributor

@kit1980 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@kit1980
Copy link
Contributor

kit1980 commented Sep 22, 2022

@pytorchbot merge -g

@pytorchmergebot
Copy link
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the green (-g) flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@pytorchmergebot
Copy link
Collaborator

Merge failed

Reason: Command git -C /home/runner/work/pytorch/pytorch rebase origin/master returned non-zero exit code 1

Rebasing (1/1)
Auto-merging aten/src/ATen/autocast_mode.cpp
Auto-merging aten/src/ATen/native/native_functions.yaml
Auto-merging docs/source/torch.rst
Auto-merging test/forward_backward_compatibility/check_forward_backward_compatibility.py
Auto-merging test/test_linalg.py
Auto-merging torch/__init__.py
CONFLICT (content): Merge conflict in torch/__init__.py
Auto-merging torch/_linalg_utils.py
Auto-merging torch/_torch_docs.py
Auto-merging torch/overrides.py
error: could not apply e265b70a5a... Remove deprecated torch.lstsq (#70980)
hint: Resolve all conflicts manually, mark them as resolved with
hint: "git add/rm <conflicted_files>", then run "git rebase --continue".
hint: You can instead skip this commit: run "git rebase --skip".
hint: To abort and get back to the state before "git rebase", run "git rebase --abort".
Could not apply e265b70a5a... Remove deprecated torch.lstsq (#70980)
Details for Dev Infra team: Raised by workflow job

@IvanYashchuk
Copy link
Collaborator Author

I resolved the merge conflict.

@facebook-github-bot
Copy link
Contributor

@kit1980 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@kit1980
Copy link
Contributor

kit1980 commented Sep 22, 2022

@pytorchbot merge -g

@pytorchmergebot
Copy link
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the green (-g) flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@pytorchmergebot
Copy link
Collaborator

Merge failed

Reason: 1 additional job has failed, the first of which is: trunk

Details for Dev Infra team: Raised by workflow job

@kit1980
Copy link
Contributor

kit1980 commented Sep 23, 2022

@pytorchbot merge -f "Unrelated ROCm issue, tests passed previously"

@pytorchmergebot
Copy link
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the force (-f) flag. This means your change will be merged immediately, bypassing any CI checks (ETA: 1-5 minutes). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@github-actions
Copy link
Contributor

Hey @IvanYashchuk.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.


Labels

ciflow/trunk: Trigger trunk jobs on your pull request
cla signed
Merged
module: deprecation
module: linear algebra: Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply matmul
open source
triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
