Conversation

@alykhantejani
Contributor

fix for #1606

@soumith
Contributor

soumith commented Jun 1, 2017

ERROR: test_Conv2d_missing_argument (__main__.TestNN)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_nn.py", line 1359, in test_Conv2d_missing_argument
    self.assertRaises(RuntimeError, lambda: c(None))
  File "/opt/python/3.6.1/lib/python3.6/unittest/case.py", line 728, in assertRaises
    return context.handle('assertRaises', args, kwargs)
  File "/opt/python/3.6.1/lib/python3.6/unittest/case.py", line 177, in handle
    callable_obj(*args, **kwargs)
  File "test_nn.py", line 1359, in <lambda>
    self.assertRaises(RuntimeError, lambda: c(None))
  File "/home/travis/virtualenv/python3.6.1/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/travis/virtualenv/python3.6.1/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 244, in forward
    self.padding, self.dilation, self.groups)
  File "/home/travis/virtualenv/python3.6.1/lib/python3.6/site-packages/torch/nn/functional.py", line 41, in conv2d
    if input.dim() != 4:
AttributeError: 'NoneType' object has no attribute 'dim'
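
For context, a minimal way to reproduce the failure above (a sketch; the actual test builds the module inside TestNN and may differ slightly):

    import torch.nn as nn

    c = nn.Conv2d(4, 4, kernel_size=3)  # any Conv2d module reproduces it
    c(None)
    # before the fix: AttributeError: 'NoneType' object has no attribute 'dim'
    # after the fix:  None falls through to ConvNd, which raises the RuntimeError the test expects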

@alykhantejani
Contributor Author

I had to add the check below

    if input is not None and input.dim() != 4:
        raise ValueError("Expected 4D tensor as input, got {}D tensor instead.".format(input.dim()))

as the usual check for None happens in the call to ConvNd, where it's too late to assert that the input should be 4D. Not super happy with this, but I can't think of another clean way to do it.
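
For reference, this is roughly where the guard sits in torch/nn/functional.py (a sketch; the signature and the downstream ConvNd call are abbreviated and may differ from the exact diff):

    def conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
        # Reject non-None inputs of the wrong rank early, while the shape is easy to report.
        if input is not None and input.dim() != 4:
            raise ValueError("Expected 4D tensor as input, got {}D tensor instead."
                             .format(input.dim()))
        # A None input deliberately falls through: ConvNd performs its own None check and
        # raises the RuntimeError that test_Conv2d_missing_argument expects.
        ...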

@apaszke
Contributor

apaszke commented Jun 2, 2017

Can you add dim checks to other conv functions too?

@alykhantejani
Contributor Author

Sure thing.

@alykhantejani
Contributor Author

Done for convXd and conv_transposeXd.
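
For completeness, the same guard generalized across those entry points could be factored like this (illustrative only; _check_input_dim is a hypothetical helper, and the PR itself repeats the check inline in each function):

    def _check_input_dim(input, expected_dim, fn_name):
        # convNd / conv_transposeNd expect an (N, C, *spatial) input of expected_dim dimensions;
        # None still falls through so ConvNd can raise its usual error.
        if input is not None and input.dim() != expected_dim:
            raise ValueError("Expected {}D tensor as input to {}, got {}D tensor instead."
                             .format(expected_dim, fn_name, input.dim()))

    # conv1d / conv_transpose1d expect 3D input, conv2d / conv_transpose2d 4D, conv3d / conv_transpose3d 5D.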

@soumith soumith merged commit f1c57ac into pytorch:master Jun 2, 2017
houseroad added a commit to houseroad/pytorch that referenced this pull request Jan 4, 2019
…b18ba1 (pytorch#15739)

Summary:
Pull Request resolved: pytorch#15739

Previous import was 765f5ee823a67a866f4bd28a9860e81f3c811ce8

Included changes:
- **[8384c78](onnx/onnx@8384c78)**: add constantofshape (pytorch#1582) <Rui Zhu>
- **[9afc06c](onnx/onnx@9afc06c)**: Set symbol visibility to hidden for non-Windows (pytorch#1707) <Paul Jesse Hellemn>
- **[6f8a9f0](onnx/onnx@6f8a9f0)**: Revert "Add NonMaxSupression operator (pytorch#1695)" (pytorch#1702) <Lu Fang>
- **[8b89544](onnx/onnx@8b89544)**: Add NonMaxSupression operator (pytorch#1695) <Hector Li>
- **[0a7cc48](onnx/onnx@0a7cc48)**: Add bfloat16 support. (pytorch#1699) <Dmitri Smirnov>
- **[da7c50c](onnx/onnx@da7c50c)**: ONNX does not maintain versions for experimental ops (pytorch#1696) <Ke Zhang>
- **[0c8d857](onnx/onnx@0c8d857)**: Correct type of value_info in Graph (pytorch#1694) <Maik Riechert>
- **[f612532](onnx/onnx@f612532)**: Fix typos (pytorch#1686) <Eundoo Song>

Reviewed By: zrphercule

Differential Revision: D13581674

fbshipit-source-id: a961667184b09d2822815ba5d3fa4198a4c57e88
jagadish-amd pushed a commit to jagadish-amd/pytorch that referenced this pull request Jan 14, 2025
…torch#137717) (pytorch#1695)

The logsumexp tensor was considered internal-use only, but it is apparently exposed to unit tests and to Inductor.

The stream should be selected after picking the current device; otherwise the code ends up checking the default device's architecture.

Fixes pytorch#131316 pytorch#137414

Pull Request resolved: pytorch#137717
Approved by: https://github.com/drisspg

Co-authored-by: Jack Taylor <[email protected]>
(cherry picked from commit 770fcaf)

Co-authored-by: Xinya Zhang <[email protected]>
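
As a hedged illustration of the ordering point in that commit message (the real fix lives inside PyTorch's backend code; this Python snippet only shows the select-the-device-before-querying-its-stream pattern, assuming standard torch.cuda calls):

    import torch

    def current_stream_for(device_index):
        # Pick the target device first, then ask for its stream; querying before switching
        # would report the default device's stream (and, analogously, its architecture).
        torch.cuda.set_device(device_index)
        return torch.cuda.current_stream(device_index)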