Conversation

@vishwakftw (Contributor):

Changelog:

  • De-duplicate the code in the tests for torch.solve, torch.cholesky_solve, and torch.triangular_solve
  • Skip tests explicitly if requirements aren't met, e.g., if NumPy / SciPy aren't available in the environment
  • Add generic helpers for these tests in test/common_utils.py (a sketch of the idea appears below)

Test Plan:

  • All tests should pass, confirming that the change is not erroneous

Clears one point raised in the discussion in #24333.
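
As context for the helper item above, here is a minimal sketch of what such a generic helper might look like. The name and call shape match the snippet quoted later in this thread, but the body is illustrative only and is not the actual code from the diff:

    import torch

    def cholesky_solve_test_helper(A_dims, b_dims, cast, upper):
        # Illustrative only: build a random symmetric positive-definite A,
        # its Cholesky factor L, and a right-hand side b for the tests.
        n = A_dims[0]
        M = torch.randn(n, n)
        A = M @ M.t() + n * torch.eye(n)  # SPD by construction
        L = torch.cholesky(A, upper=upper)
        b = torch.randn(*b_dims)
        return cast(b), cast(A), cast(L)

Centralizing construction like this is what lets the per-operator tests shrink to a few lines each, as in the snippet below.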

@pytorchbot added the module: cuda, module: operators, and module: tests labels on Sep 5, 2019
@vishwakftw requested a review from zou3519 on September 5, 2019 at 22:53
@vishwakftw force-pushed the refactor-linalg-tests branch from b049ba7 to ad763e0 on September 5, 2019 at 23:03
@vishwakftw force-pushed the refactor-linalg-tests branch from ad763e0 to e740e98 on September 5, 2019 at 23:17
# Solve A x = b via the Cholesky factor L, then check that
# multiplying the solution back reconstructs b.
b, A, L = cholesky_solve_test_helper((n,), (n, k), cast, upper)
x = torch.cholesky_solve(b, L, upper=upper)
b_ = torch.matmul(A, x)
self.assertEqual(b_, b)
Contributor:

In the original test we check that the delta is less than 1e-12; should we keep that?

Contributor Author:

I don't think it would matter all that much.

Contributor:

I don't have context on why the original error bound was 1e-12, so I'm a little hesitant to remove it. Does the test still pass with the 1e-12 bound?

Contributor Author:

I’ll re-add it to the tests, push and await CI signal. Hope that is fine.
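
A minimal sketch of what re-adding that bound might look like, reusing the snippet quoted above; the 1e-12 value is the bound from the original test discussed here, and the exact assertion style in the actual diff may differ:

    # Hypothetical sketch: an explicit delta bound instead of plain assertEqual.
    b, A, L = cholesky_solve_test_helper((n,), (n, k), cast, upper)
    x = torch.cholesky_solve(b, L, upper=upper)
    max_err = (torch.matmul(A, x) - b).abs().max().item()
    self.assertLessEqual(max_err, 1e-12)  # bound from the original test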

@jerryzh168 added the triaged label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) on Sep 11, 2019
@vishwakftw (Contributor Author) commented Sep 11, 2019:

Failures are unrelated, and the previously failing ROCm test now passes.

@facebook-github-bot (Contributor) left a comment:

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor):
@zou3519 merged this pull request in eee58f8.

@gchanan (Contributor) commented Sep 12, 2019:

Any chance this has caused spurious failures? I see things like https://circleci.com/gh/pytorch/pytorch/2760063?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link/console:

Sep 12 16:13:22 ======================================================================
Sep 12 16:13:22 FAIL: test_triangular_solve_batched (__main__.TestCuda)
Sep 12 16:13:22 ----------------------------------------------------------------------
Sep 12 16:13:22 Traceback (most recent call last):
Sep 12 16:13:22   File "/var/lib/jenkins/workspace/test/common_utils.py", line 570, in wrapper
Sep 12 16:13:22     method(*args, **kwargs)
Sep 12 16:13:22   File "/var/lib/jenkins/workspace/test/common_utils.py", line 570, in wrapper
Sep 12 16:13:22     method(*args, **kwargs)
Sep 12 16:13:22   File "test_cuda.py", line 2786, in test_triangular_solve_batched
Sep 12 16:13:22     _TestTorchMixin._test_triangular_solve_batched(self, lambda t: t.cuda())
Sep 12 16:13:22   File "/var/lib/jenkins/workspace/test/test_torch.py", line 5995, in _test_triangular_solve_batched
Sep 12 16:13:22     upper, unitriangular, transpose)
Sep 12 16:13:22   File "/var/lib/jenkins/workspace/test/test_torch.py", line 5987, in triangular_solve_batch_helper
Sep 12 16:13:22     self.assertEqual(x_act, x_exp)  # Equality check
Sep 12 16:13:22   File "/var/lib/jenkins/workspace/test/common_utils.py", line 734, in assertEqual
Sep 12 16:13:22     assertTensorsEqual(x, y)
Sep 12 16:13:22   File "/var/lib/jenkins/workspace/test/common_utils.py", line 714, in assertTensorsEqual
Sep 12 16:13:22     self.assertLessEqual(max_err, prec, message)
Sep 12 16:13:22 AssertionError: tensor(62.7710, device='cuda:0') not less than or equal to 1e-05 : 
Sep 12 16:13:22 
Sep 12 16:13:22 ----------------------------------------------------------------------

Not sure if it's related.

@vishwakftw (Contributor Author):

I see no reason why that test would fail. I'll reopen the PR, rebase on master, and wait for CI.

@vishwakftw reopened this on Sep 12, 2019
@vishwakftw (Contributor Author):

I found the source of the error: it's 276bde3, which enabled non-default stream testing.

I'll skip the test appropriately.
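
A minimal sketch of such a skip, assuming the test runner exposes the non-default-stream mode through a flag; TEST_WITH_NONDEFAULT_STREAM is a hypothetical name standing in for however that mode (enabled by 276bde3) is detected, and the test body is taken from the traceback above:

    import unittest

    TEST_WITH_NONDEFAULT_STREAM = False  # hypothetical flag for the mode from 276bde3

    @unittest.skipIf(TEST_WITH_NONDEFAULT_STREAM,
                     "flaky under non-default CUDA streams")
    def test_triangular_solve_batched(self):
        _TestTorchMixin._test_triangular_solve_batched(self, lambda t: t.cuda())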

@vishwakftw closed this on Sep 12, 2019
@vishwakftw deleted the refactor-linalg-tests branch on September 12, 2019 at 20:03