Conversation

anjali411 (Contributor) commented Jul 31, 2020:

Stack from ghstack:

Test Plan - Updated existing tests to run for complex dtypes as well.

Also added tests for `torch.addmm`, `torch.badmm`

Differential Revision: [D22960339](https://our.internmc.facebook.com/intern/diff/D22960339)

anjali411 added a commit that referenced this pull request Jul 31, 2020
ghstack-source-id: 0bb31cb
Pull Request resolved: #42383
@anjali411 anjali411 requested review from mruberry and zasdfgbnm July 31, 2020 20:01
```python
@dtypesIfCUDA(torch.float32, torch.float64, torch.cfloat, torch.cdouble)
@tf32_on_and_off(0.01)
def test_mm(self, device, dtype):
    def _test_mm(n, m, p, dtype, genf):
```
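For context, the kind of complex-dtype coverage this PR adds can be sketched standalone (a minimal illustration against a naive Python reference, not the PR's actual test body; shapes and tolerance are arbitrary):

```python
import torch

def check_mm_complex(n=2, m=3, p=4, dtype=torch.cfloat):
    a = torch.randn(n, m, dtype=dtype)
    b = torch.randn(m, p, dtype=dtype)
    result = torch.mm(a, b)
    # Compare each output element against a naive reference computed
    # in Python complex arithmetic.
    for i in range(n):
        for j in range(p):
            ref = sum(a[i, k].item() * b[k, j].item() for k in range(m))
            assert abs(result[i, j].item() - ref) < 1e-4

check_mm_complex()
```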
Collaborator commented:

LOL, GitHub UI is so confusing.

[Screenshot attached: Screenshot_20200731_131828]

```python
    1e-1, 1e-1, 1e-5, _float_types2),
('addbmm', '', _small_2d, lambda t, d: [_small_3d(t, d), _small_3d(t, d)],
    1e-1, 1e-1, 1e-4, _float_types2, _cpu_types, True, [tf32_on_and_off(0.005)]),
    1e-1, 1e-1, 1e-4, _complex_and_float_types2, _cpu_types + _complex_types, True, [tf32_on_and_off(0.005)]),
```
Collaborator commented:

@mruberry IIUC, `_cpu_types` is an empty list, so `_cpu_types + _complex_types` would mean running the CPU test only with complex dtypes?

Collaborator commented:

Boy does this set of tests need to be entirely rewritten.

Your understanding is correct. Note that "testing" on CPU here just means verifying that the function and method variants of the op produce the same value. There's no check that those values are correct.
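The function-vs-method comparison described above amounts to something like this (a sketch of the check's weakness: both sides could be equally wrong and it would still pass):

```python
import torch

a = torch.randn(3, 3)
b = torch.randn(3, 3)

# The CPU "test" only checks that the function variant and the method
# variant of the op agree; it never validates the values themselves.
assert torch.equal(torch.mm(a, b), a.mm(b))
```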

anjali411 (Contributor, Author) commented Aug 5, 2020:

I didn't know that the CPU tests in TestTensorDeviceOps only compare the function and method results, which basically doesn't test anything for a lot of ops. Will remove it.

Collaborator commented:

"Testing" on CPU means results in low precision (bfloat16, or whatever `_cpu_types` is) are compared to results in fp32 precision. For `_cpu_types=fp32` it would indeed just compare function and method.
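That low-precision-vs-fp32 comparison pattern might be sketched as follows (a hypothetical helper, not PyTorch's actual test harness; the tolerances are arbitrary but loose, since bfloat16 carries only about three decimal digits of precision):

```python
import torch

def check_low_precision_against_fp32(op, inputs, low_dtype=torch.bfloat16):
    # Run the op in the low-precision dtype...
    low = op(*(t.to(low_dtype) for t in inputs))
    # ...and again in float32 as the reference.
    ref = op(*(t.to(torch.float32) for t in inputs))
    # Compare with loose tolerances to absorb the precision loss.
    torch.testing.assert_close(low.to(torch.float32), ref, atol=1e-1, rtol=1e-1)

check_low_precision_against_fp32(torch.mm, (torch.randn(4, 4), torch.randn(4, 4)))
```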

dr-ci bot commented Aug 1, 2020:

💊 CI failures summary and remediations

As of commit 42de72f (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed



@anjali411 anjali411 requested a review from zasdfgbnm August 5, 2020 20:51
```python
np_out = torch.full((n, p), float('nan'), device=device)
self.assertEqual(torch.mm(nm, mp), torch.mm(nm, mp, out=np_out))

if dtype.is_complex and device.startswith('cuda'):
```
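The NaN-prefilled `out=` check above can be illustrated in isolation (a sketch; shapes are arbitrary). Pre-filling with NaN ensures the test would catch an `mm` that failed to overwrite every element:

```python
import torch

nm = torch.randn(2, 3)
mp = torch.randn(3, 4)

# Pre-fill the output buffer with NaN so any element left untouched
# by torch.mm would make the equality check below fail.
out = torch.full((2, 4), float('nan'))
result = torch.mm(nm, mp, out=out)

# torch.mm writes into `out` and returns it.
assert result is out
assert torch.equal(out, torch.mm(nm, mp))
assert not torch.isnan(out).any()
```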
Collaborator commented:

Suggestion: use `self.device_type == 'cuda'`, or check `torch.device(device).type`, instead of matching the string prefix.
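The suggested check can be seen in a small standalone example: `torch.device` parses the device string, and its `.type` attribute is `'cuda'` regardless of any index suffix, which a prefix match on the raw string only approximates. (This is pure string parsing, so it runs without CUDA hardware.)

```python
import torch

# torch.device parses strings like 'cuda', 'cuda:0', 'cpu'.
assert torch.device('cuda').type == 'cuda'
assert torch.device('cuda:1').type == 'cuda'
assert torch.device('cpu').type == 'cpu'

# Inside a device-generic test, this identifies the backend cleanly:
def is_cuda(device):
    return torch.device(device).type == 'cuda'

assert is_cuda('cuda:0')
assert not is_cuda('cpu')
```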

zasdfgbnm (Collaborator) left a review comment:
Overall LGTM


```python
@onlyCPU
@dtypes(torch.float)
@dtypes(*(torch.testing.get_all_complex_dtypes() + [torch.float, torch.double]))
```
Collaborator commented:

NIT: could we unify

`torch.float32, torch.float64, torch.cfloat, torch.cdouble`

vs.

`torch.testing.get_all_complex_dtypes() + [torch.float, torch.double]`?

anjali411 (Contributor, Author) commented:

Do you mean replace all instances of `torch.float32, torch.float64, torch.cfloat, torch.cdouble` with `torch.testing.get_all_complex_dtypes() + [torch.float, torch.double]`?

Collaborator commented:

I mean the opposite:

`torch.float32, torch.float64, torch.cfloat, torch.cdouble`

looks better than

`*(torch.testing.get_all_complex_dtypes() + [torch.float, torch.double])`

anjali411 (Contributor, Author) commented:

I think it's better to use `torch.testing.get_all_complex_dtypes()`: if we add more complex dtypes in the future (for example, support for `torch.chalf` or others), we won't have to update the tests one by one. Including the new dtype in `torch.testing.get_all_complex_dtypes()` would suffice.
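The maintainability argument can be made concrete (the helper body below is a simplified stand-in for `torch.testing.get_all_complex_dtypes()`, which at the time returned the complex dtypes PyTorch supported):

```python
import torch

# Hard-coded list: must be edited whenever a new complex dtype appears.
explicit = [torch.float32, torch.float64, torch.cfloat, torch.cdouble]

def get_all_complex_dtypes():
    # Simplified stand-in; the real helper lives in torch.testing.
    return [torch.complex64, torch.complex128]

# Helper-derived list: picks up future complex dtypes automatically.
derived = get_all_complex_dtypes() + [torch.float, torch.double]

# torch.cfloat is complex64 and torch.cdouble is complex128,
# so both spell out the same set today.
assert set(explicit) == set(derived)
```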

zasdfgbnm (Collaborator) commented:

I think you need to rebase to fix the docker-image-not-found error.

anjali411 added a commit that referenced this pull request Aug 5, 2020
ghstack-source-id: 7834fd4
Pull Request resolved: #42383
anjali411 added a commit that referenced this pull request Aug 6, 2020
ghstack-source-id: f84ba3d
Pull Request resolved: #42383
anjali411 added a commit that referenced this pull request Aug 6, 2020
ghstack-source-id: e6c3feb
Pull Request resolved: #42383
facebook-github-bot (Contributor) commented:
@anjali411 merged this pull request in c9346ad.

ngimel (Collaborator) commented Aug 7, 2020:

@jeffdaily currently cgemm/zgemm is disabled on rocm, but rocblas actually supports it - what would it take to enable it?

@facebook-github-bot facebook-github-bot deleted the gh/anjali411/45/head branch August 11, 2020 14:16
facebook-github-bot pushed a commit that referenced this pull request Aug 31, 2020
Summary:
Revert "Skips some complex tests on ROCm (#42759)". This reverts commit 55b1706.

Use new cuda_to_hip_mappings.py from #43004.

Fixes #42383 (comment)

CC sunway513

Pull Request resolved: #43744

Reviewed By: glaringlee

Differential Revision: D23391263

Pulled By: ngimel

fbshipit-source-id: ddf734cea3ba69c24f0d79cf1b87c05cdb45ec3d