
Conversation

@peterbell10
Collaborator

- Replace THCNumerics with `at::_isnan`
- Replace `contiguous` with `expect_contiguous`
- Don't use `contiguous` on output tensors. Instead skip the copy and
  just create a new empty tensor.
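
To make the three bullet points concrete, here is a minimal sketch in ATen C++ of the pattern being adopted. It is not the actual diff from this PR; `filter_nan_example` is a hypothetical op and the loop body is illustrative only.

```cpp
#include <ATen/ATen.h>
#include <ATen/Dispatch.h>
#include <ATen/NumericUtils.h>  // at::_isnan

// Hypothetical CPU op, written only to illustrate the three changes above.
at::Tensor filter_nan_example(const at::Tensor& self) {
  // (2) expect_contiguous returns a c10::MaybeOwned<Tensor>, so no refcount
  //     bump or copy happens when `self` is already contiguous.
  c10::MaybeOwned<at::Tensor> self_ = self.expect_contiguous();

  // (3) Allocate a fresh output instead of calling contiguous() on an
  //     existing output tensor: the old data would be overwritten anyway,
  //     so the copy is pure waste.
  at::Tensor result = at::empty_like(*self_);

  AT_DISPATCH_FLOATING_TYPES_AND2(
      at::kHalf, at::kBFloat16, self.scalar_type(), "filter_nan_example", [&] {
        const scalar_t* in = self_->data_ptr<scalar_t>();
        scalar_t* out = result.data_ptr<scalar_t>();
        for (int64_t i = 0; i < self_->numel(); ++i) {
          // (1) at::_isnan replaces THCNumerics<scalar_t>::isnan and handles
          //     floating-point, half, and integral types uniformly.
          out[i] = at::_isnan(in[i]) ? scalar_t(0) : in[i];
        }
      });
  return result;
}
```

The loop stands in for whatever kernel the real PR touches; the relevant parts are the `expect_contiguous`/`MaybeOwned` input handling, the fresh `at::empty_like` output, and `at::_isnan` in place of the legacy THCNumerics helper.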
@pytorch-probot

⚛️ CI Flow Status

Ruleset - Version: v1
Ruleset - File: https://github.com/peterbell10/pytorch/blob/1a9a3c708f79d8c0d34a75101520e93e40d40425/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

| Workflow | Labels | Status |
| --- | --- | --- |
| linux-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla | ✅ triggered |
| linux-bionic-py3.8-gcc9-coverage | ciflow/all, ciflow/coverage, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered |
| linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-gcc7-bazel-test | ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered |
| win-vs2019-cpu-py3 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/win | ✅ triggered |
| win-vs2019-cuda11.3-py3 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/win | ✅ triggered |
| libtorch-linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| libtorch-linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| linux-bionic-cuda10.2-py3.9-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| parallelnative-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-win-vs2019-cuda11.1-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | 🚫 skipped |
| puretorch-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| win-vs2019-cuda10.2-py3 | ciflow/all, ciflow/cuda, ciflow/win | 🚫 skipped |

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@facebook-github-bot
Contributor

facebook-github-bot commented Sep 20, 2021

💊 CI failures summary and remediations

As of commit 1a9a3c7 (more details on the Dr. CI page):


  • 3/3 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-bionic-py3.6-clang9 / test (default, 1, 2, linux.2xlarge) (1/2)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2021-09-20T17:06:35.4525352Z CONTINUE_THROUGH_ERROR: false
2021-09-20T17:06:35.4519025Z   CUSTOM_TEST_ARTIFACT_BUILD_DIR: build/custom_test_artifacts
2021-09-20T17:06:35.4519668Z   ALPINE_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine
2021-09-20T17:06:35.4520198Z   PR_LABELS: []
2021-09-20T17:06:35.4521343Z   GITHUB_TOKEN: ***
2021-09-20T17:06:35.4522301Z   DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-py3.6-clang9:74e757e8b0cf750d2f91db6aa4c29640abce32ea
2021-09-20T17:06:35.4523413Z   JOB_BASE_NAME: linux-bionic-py3.6-clang9-test
2021-09-20T17:06:35.4523891Z   TEST_CONFIG: default
2021-09-20T17:06:35.4524211Z   SHARD_NUMBER: 1
2021-09-20T17:06:35.4524500Z   NUM_TEST_SHARDS: 2
2021-09-20T17:06:35.4524865Z   PYTORCH_IGNORE_DISABLED_ISSUES: 
2021-09-20T17:06:35.4525352Z   CONTINUE_THROUGH_ERROR: false
2021-09-20T17:06:35.4525671Z   SHM_SIZE: 1g
2021-09-20T17:06:35.4525953Z   PR_NUMBER: 65350
2021-09-20T17:06:35.4526242Z ##[endgroup]
2021-09-20T17:06:48.9021498Z Processing ./dist/torch-1.10.0a0+gitddb1277-cp36-cp36m-linux_x86_64.whl
2021-09-20T17:06:48.9315012Z Requirement already satisfied: dataclasses in /opt/conda/lib/python3.6/site-packages (from torch==1.10.0a0+gitddb1277) (0.8)
2021-09-20T17:06:48.9318401Z Requirement already satisfied: typing-extensions in /opt/conda/lib/python3.6/site-packages (from torch==1.10.0a0+gitddb1277) (3.10.0.0)
2021-09-20T17:06:49.2119423Z Installing collected packages: torch
2021-09-20T17:06:54.9974227Z Successfully installed torch-1.10.0a0+gitddb1277
2021-09-20T17:06:55.3229603Z ++++ dirname .jenkins/pytorch/common.sh
2021-09-20T17:06:55.3236593Z +++ cd .jenkins/pytorch

See GitHub Actions build linux-bionic-py3.8-gcc9-coverage / test (default, 2, 2, linux.2xlarge) (2/2)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2021-09-20T17:49:55.3310018Z Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
2021-09-20T17:49:55.2778519Z 
2021-09-20T17:49:55.2779318Z Intel MKL ERROR: Parameter 4 was incorrect on entry to SLASCL.
2021-09-20T17:49:55.2836181Z ok (0.052s)
2021-09-20T17:49:55.3092487Z   test_svd_errors_and_warnings_cpu_float64 (__main__.TestLinalgCPU) ... 
2021-09-20T17:49:55.3093366Z Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
2021-09-20T17:49:55.3093712Z 
2021-09-20T17:49:55.3094141Z Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
2021-09-20T17:49:55.3308575Z 
2021-09-20T17:49:55.3309269Z Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
2021-09-20T17:49:55.3309599Z 
2021-09-20T17:49:55.3310018Z Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
2021-09-20T17:49:55.3367713Z ok (0.053s)
2021-09-20T17:50:03.6552325Z   test_svd_lowrank_cpu_float64 (__main__.TestLinalgCPU) ... ok (8.318s)
2021-09-20T17:50:04.7592200Z   test_svd_memory_allocation_cpu_complex128 (__main__.TestLinalgCPU) ... test_linalg.py:3008: UserWarning: An output with one or more elements was resized since it had shape [3, 3], which does not match the required output shape [3].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:16.)
2021-09-20T17:50:04.7594673Z   torch.linalg.svdvals(a, out=out0)
2021-09-20T17:50:04.9870240Z test_linalg.py:3009: UserWarning: An output with one or more elements was resized since it had shape [3], which does not match the required output shape [3, 3].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at  /var/lib/jenkins/workspace/aten/src/ATen/native/Resize.cpp:16.)
2021-09-20T17:50:04.9872513Z   torch.linalg.svd(a, full_matrices=False, out=(out0, out1, out2))
2021-09-20T17:50:04.9951825Z ok (1.340s)
2021-09-20T17:50:05.8495654Z   test_svd_memory_allocation_cpu_complex64 (__main__.TestLinalgCPU) ... ok (0.854s)
2021-09-20T17:50:06.2917263Z   test_svd_memory_allocation_cpu_float32 (__main__.TestLinalgCPU) ... ok (0.442s)
2021-09-20T17:50:06.8867821Z   test_svd_memory_allocation_cpu_float64 (__main__.TestLinalgCPU) ... ok (0.595s)

1 failure not recognized by patterns:

| Job | Step | Action |
| --- | --- | --- |
| CircleCI pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single | pytorch android gradle custom build single architecture (for PR) | 🔁 rerun |

This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

Click here to manually regenerate this comment.

@soulitzer added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) on Sep 22, 2021
Collaborator

@ngimel left a comment

Thanks for the clean up!

@facebook-github-bot
Contributor

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@ngimel merged this pull request in 2898ef7.

Labels

cla signed · Merged · open source · triaged
