Conversation

ezyang (Contributor) commented Jul 26, 2017

Signed-off-by: Edward Z. Yang <[email protected]>
soumith (Contributor) commented Jul 26, 2017

Python 2.7.8 seems not to be supported by Travis on Trusty.

ezyang (Contributor, Author) commented Jul 26, 2017

Yes. According to travis-ci/travis-ci#8153, Travis did not build this version of Python for Trusty. Is there a particular reason why this is the beginning of our version support?

soumith (Contributor) commented Jul 26, 2017

It's the default Python version in Ubuntu 12.04, if I remember correctly.
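
For context, a minimal sketch of the kind of `.travis.yml` change under discussion, assuming a typical Travis Python matrix; the version list and `install`/`script` steps below are illustrative assumptions, not the PR's actual diff:

```yaml
# Sketch of opting into the Trusty (Ubuntu 14.04) image on Travis CI.
# On Trusty, an unpinned "2.7" resolves to a prebuilt interpreter, whereas
# the exact "2.7.8" pin was never built for that image
# (see travis-ci/travis-ci#8153).
language: python
dist: trusty        # opt into the Trusty build environment
python:
  - "2.7"           # use the archive's 2.7 build instead of pinning 2.7.8
  - "3.5"
install:
  - pip install -r requirements.txt   # illustrative step, not from the PR
script:
  - python setup.py test              # illustrative step, not from the PR
```

The key line is `dist: trusty`; with it, any pinned Python version must be one Travis actually built for that image, which is why the 2.7.8 pin has to be relaxed.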

Signed-off-by: Edward Z. Yang <[email protected]>
soumith merged commit cb9ad7a into pytorch:master Jul 26, 2017
ruotianluo added a commit to ruotianluo/pytorch-1 that referenced this pull request Jul 31, 2017
* commit '8262920b72374b1d9643f35057663ab02ab20330': (272 commits)
  Add ATen overload to AutoGPU. (pytorch#2234)
  Add comments for default value (pytorch#2242)
  Remove dead THPP code that has been replaced with ATen objects. (pytorch#2235)
  fix a bug where an uninitialized at::Tensor was passed to createPyObject (pytorch#2239)
  Replace thpp::Tensor with ATen Tensor in autograd csrc (pytorch#2170)
  Added aarch64 support (pytorch#2226)
  Increase tol. for float tensor qr big test.
  Improve Variable.retain_grad
  add `retain_grad` method to Variable, so gradient gets stored during backprop on non-user variables
  Implement BatchNorm double backwards (pytorch#2207)
  [bugfix] in bce_with_logits logsumexp calculation (pytorch#2221)
  fix for ATen API Change
  Opt into Trusty builds. (pytorch#2214)
  allow retain to be specified for unsafeTensorFromTH
  Deduplicate THPUtils_checkLong/THPUtils_unpackLong (pytorch#2218)
  fix osx build errors related to long/int64_t
  Note [Undefined-dim versus 0-dim]
  Remove __func__ hack in auto nn.
  Enable Conv groups gradgradchecks. (pytorch#2216)
  fix a bug where some scalars were getting truncated to integers incorrectly.
  ...
ruotianluo added a commit to ruotianluo/pytorch-1 that referenced this pull request Aug 1, 2017
ezyang deleted the pr/trusty-travis branch September 7, 2017
zou3519 pushed a commit to zou3519/pytorch that referenced this pull request Mar 30, 2018
Avoid duplicated log when explicitly specified engine is not available (pytorch#2214)

* Avoid duplicated log when explicitly specified engine is not available

* Update operator.cc
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Nov 29, 2022
The complex type defined by `defineComplexType` doesn't seem to work with nvcc. When compiling with nvcc, include <complex> instead of using those definitions, and disable some other parts that don't work with <complex>.

This would of course break complex type support, but as long as it isn't used, it shouldn't matter. Compilation with nvcc should only be for ad-hoc experiments with code modifications, so the limitation should be fine.
rraminen pushed a commit to rraminen/pytorch that referenced this pull request Aug 7, 2025
Additive on top of ROCm#2209

Adds 3D batchnorm tests (NHWC3D and NCHW3D).

NCHW 3D tests:

```
  test_batchnorm_3D_inference_NCHW_vs_cpu_float32 (__main__.TestNN) ... ok (0.149s)
  test_batchnorm_3D_inference_NCHW_vs_cpu_mixed_bfloat16 (__main__.TestNN) ... ok (0.062s)
  test_batchnorm_3D_inference_NCHW_vs_cpu_mixed_float16 (__main__.TestNN) ... ok (0.042s)
  test_batchnorm_3D_inference_NCHW_vs_native_float32 (__main__.TestNN) ... ok (0.091s)
  test_batchnorm_3D_inference_NCHW_vs_native_mixed_bfloat16 (__main__.TestNN) ... ok (0.008s)
  test_batchnorm_3D_inference_NCHW_vs_native_mixed_float16 (__main__.TestNN) ... ok (0.007s)
  test_batchnorm_3D_inference_NHWC_vs_NCHW_float32 (__main__.TestNN) ... ok (0.028s)
  test_batchnorm_3D_inference_NHWC_vs_NCHW_mixed_bfloat16 (__main__.TestNN) ... ok (0.010s)
  test_batchnorm_3D_inference_NHWC_vs_NCHW_mixed_float16 (__main__.TestNN) ... ok (0.010s)
  test_batchnorm_3D_inference_NHWC_vs_cpu_float32 (__main__.TestNN) ... ok (0.091s)
  test_batchnorm_3D_inference_NHWC_vs_cpu_mixed_bfloat16 (__main__.TestNN) ... ok (0.020s)
  test_batchnorm_3D_inference_NHWC_vs_cpu_mixed_float16 (__main__.TestNN) ... ok (0.023s)
  test_batchnorm_3D_inference_NHWC_vs_native_float32 (__main__.TestNN) ... ok (0.010s)
  test_batchnorm_3D_inference_NHWC_vs_native_mixed_bfloat16 (__main__.TestNN) ... ok (0.015s)
  test_batchnorm_3D_inference_NHWC_vs_native_mixed_float16 (__main__.TestNN) ... ok (0.007s)
  test_batchnorm_3D_train_NCHW_vs_cpu_float32 (__main__.TestNN) ... ok (0.011s)
  test_batchnorm_3D_train_NCHW_vs_cpu_mixed_bfloat16 (__main__.TestNN) ... ok (0.006s)
  test_batchnorm_3D_train_NCHW_vs_cpu_mixed_float16 (__main__.TestNN) ... ok (0.006s)
  test_batchnorm_3D_train_NCHW_vs_native_float32 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_3D_train_NCHW_vs_native_mixed_bfloat16 (__main__.TestNN) ... skip: bfloat16 NCHW train failed due to native tolerance issue SWDEV-507600 (0.002s)
  test_batchnorm_3D_train_NCHW_vs_native_mixed_float16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_3D_train_NHWC_vs_NCHW_float32 (__main__.TestNN) ... ok (0.006s)
  test_batchnorm_3D_train_NHWC_vs_NCHW_mixed_bfloat16 (__main__.TestNN) ... ok (0.006s)
  test_batchnorm_3D_train_NHWC_vs_NCHW_mixed_float16 (__main__.TestNN) ... ok (0.006s)
  test_batchnorm_3D_train_NHWC_vs_cpu_float32 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_3D_train_NHWC_vs_cpu_mixed_bfloat16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_3D_train_NHWC_vs_cpu_mixed_float16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_3D_train_NHWC_vs_native_float32 (__main__.TestNN) ... ok (0.011s)
  test_batchnorm_3D_train_NHWC_vs_native_mixed_bfloat16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_3D_train_NHWC_vs_native_mixed_float16 (__main__.TestNN) ... ok (0.004s)
```
The old batchnorm tests now have `2D` in their names:
```
  test_batchnorm_2D_inference_NCHW_vs_cpu_float32 (__main__.TestNN) ... ok (0.023s)
  test_batchnorm_2D_inference_NCHW_vs_cpu_mixed_bfloat16 (__main__.TestNN) ... ok (0.005s)
  test_batchnorm_2D_inference_NCHW_vs_cpu_mixed_float16 (__main__.TestNN) ... ok (0.005s)
  test_batchnorm_2D_inference_NCHW_vs_native_float32 (__main__.TestNN) ... ok (0.104s)
  test_batchnorm_2D_inference_NCHW_vs_native_mixed_bfloat16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_inference_NCHW_vs_native_mixed_float16 (__main__.TestNN) ... ok (0.003s)
  test_batchnorm_2D_inference_NHWC_vs_NCHW_float32 (__main__.TestNN) ... ok (0.020s)
  test_batchnorm_2D_inference_NHWC_vs_NCHW_mixed_bfloat16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_inference_NHWC_vs_NCHW_mixed_float16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_inference_NHWC_vs_cpu_float32 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_inference_NHWC_vs_cpu_mixed_bfloat16 (__main__.TestNN) ... ok (0.003s)
  test_batchnorm_2D_inference_NHWC_vs_cpu_mixed_float16 (__main__.TestNN) ... ok (0.003s)
  test_batchnorm_2D_inference_NHWC_vs_native_float32 (__main__.TestNN) ... ok (0.003s)
  test_batchnorm_2D_inference_NHWC_vs_native_mixed_bfloat16 (__main__.TestNN) ... ok (0.003s)
  test_batchnorm_2D_inference_NHWC_vs_native_mixed_float16 (__main__.TestNN) ... ok (0.003s)
  test_batchnorm_2D_train_NCHW_vs_cpu_float32 (__main__.TestNN) ... ok (0.011s)
  test_batchnorm_2D_train_NCHW_vs_cpu_mixed_bfloat16 (__main__.TestNN) ... ok (0.006s)
  test_batchnorm_2D_train_NCHW_vs_cpu_mixed_float16 (__main__.TestNN) ... ok (0.006s)
  test_batchnorm_2D_train_NCHW_vs_native_float32 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_train_NCHW_vs_native_mixed_bfloat16 (__main__.TestNN) ... skip: bfloat16 NCHW train failed due to native tolerance issue SWDEV-507600 (0.002s)
  test_batchnorm_2D_train_NCHW_vs_native_mixed_float16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_train_NHWC_vs_NCHW_float32 (__main__.TestNN) ... ok (0.006s)
  test_batchnorm_2D_train_NHWC_vs_NCHW_mixed_bfloat16 (__main__.TestNN) ... ok (0.006s)
  test_batchnorm_2D_train_NHWC_vs_NCHW_mixed_float16 (__main__.TestNN) ... ok (0.006s)
  test_batchnorm_2D_train_NHWC_vs_cpu_float32 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_train_NHWC_vs_cpu_mixed_bfloat16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_train_NHWC_vs_cpu_mixed_float16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_train_NHWC_vs_native_float32 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_train_NHWC_vs_native_mixed_bfloat16 (__main__.TestNN) ... ok (0.004s)
  test_batchnorm_2D_train_NHWC_vs_native_mixed_float16 (__main__.TestNN) ... ok (0.004s)
```

Tested in `compute-rocm-dkms-no-npi-hipclang` image build 16062:
`compute-artifactory.amd.com:5000/rocm-plus-docker/framework/compute-rocm-dkms-no-npi-hipclang:16062_ubuntu22.04_py3.10_pytorch_lw_release-2.7_1fee1967`

Tests can be run with the environment variable `MIOPEN_ENABLE_LOGGING_CMD=1` to collect MIOpenDriver commands:

```
MIOPEN_ENABLE_LOGGING_CMD=1 python test_nn.py -v -k test_batchnorm_3D_train_NHWC_vs_NCHW_mixed_bfloat16

test_batchnorm_3D_train_NHWC_vs_NCHW_mixed_bfloat16 (__main__.TestNN) ... 
MIOpen(HIP): Command [LogCmdBNorm] ./bin/MIOpenDriver bnormbfp16 -n 4 -c 8 -D 2 -H 2 -W 2 -m 1 --forw 1 -b 0 -r 1 -s 1 --layout NDHWC
MIOpen(HIP): Command [LogCmdBNorm] ./bin/MIOpenDriver bnormbfp16 -n 4 -c 8 -D 2 -H 2 -W 2 -m 1 --forw 0 -b 1 -s 1 --layout NDHWC
MIOpen(HIP): Command [LogCmdBNorm] ./bin/MIOpenDriver bnormbfp16 -n 4 -c 8 -D 2 -H 2 -W 2 -m 1 --forw 1 -b 0 -r 1 -s 1 --layout NCDHW
MIOpen(HIP): Command [LogCmdBNorm] ./bin/MIOpenDriver bnormbfp16 -n 4 -c 8 -D 2 -H 2 -W 2 -m 1 --forw 0 -b 1 -s 1 --layout NCDHW

ok
```

Co-authored-by: Jeff Daily <[email protected]>