Conversation

@ansley ansley commented Feb 1, 2022

No description provided.

pytorch-bot bot commented Feb 1, 2022

CI Flow Status

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/a75c29ee9fbe94d12fe09dbf932e3951d5f3598b/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default
Add ciflow labels to this PR to trigger more builds:

Workflow | Labels (bold = enabled) | Status
Triggered Workflows
linux-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
linux-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
linux-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
linux-binary-manywheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
linux-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/trunk, ciflow/xla ✅ triggered
linux-docs ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux, ciflow/trunk ✅ triggered
linux-vulkan-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7-no-ops ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
windows-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
windows-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
windows-binary-wheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
docker-builds ciflow/all, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow, ciflow/trunk 🚫 skipped
linux-bionic-rocm4.5-py3.7 ciflow/linux, ciflow/rocm 🚫 skipped
linux-docs-push ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled 🚫 skipped
linux-xenial-cuda11.3-py3.7-gcc7-no-ops ciflow/all, ciflow/cuda, ciflow/linux, ciflow/trunk 🚫 skipped
macos-10-15-py3-arm64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-11-py3-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
parallelnative-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.7-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
periodic-win-vs2019-cuda11.5-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-build ciflow/all, ciflow/android, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped

facebook-github-bot commented Feb 1, 2022

💊 CI failures summary and remediations

As of commit efb3ecd (more details on the Dr. CI page):


  • 1/2 failures introduced in this PR
  • 1/2 broken upstream at merge base 8aa3620 on Feb 14 from 8:02am to 9:18am

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-bionic-py3.7-clang9 / test (xla, 1, 1, linux.2xlarge) (1/1)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-02-14T17:02:23.2649065Z  at::Tensor XLANativeFunctions::gelu(const at::Tensor& self) {
2022-02-14T17:02:23.2649423Z             ^~~~~~~~~~~~~~~~~~
2022-02-14T17:02:23.2649862Z In file included from /var/lib/jenkins/workspace/xla/torch_xla/csrc/aten_xla_type.cpp:13:0:
2022-02-14T17:02:23.2650669Z /var/lib/jenkins/workspace/xla/torch_xla/csrc/XLANativeFunctions.h:182:19: error: candidate is: static at::Tensor torch_xla::XLANativeFunctions::gelu(const at::Tensor&, c10::string_view)
2022-02-14T17:02:23.2651412Z  static at::Tensor gelu(const at::Tensor & self, c10::string_view approximate);
2022-02-14T17:02:23.2651979Z                    ^~~~
2022-02-14T17:02:23.2653188Z /var/lib/jenkins/workspace/xla/torch_xla/csrc/aten_xla_type.cpp:1514:12: error: prototype for ‘at::Tensor torch_xla::XLANativeFunctions::gelu_backward(const at::Tensor&, const at::Tensor&)’ does not match any in class ‘torch_xla::XLANativeFunctions’
2022-02-14T17:02:23.2654080Z  at::Tensor XLANativeFunctions::gelu_backward(const at::Tensor& grad,
2022-02-14T17:02:23.2654509Z             ^~~~~~~~~~~~~~~~~~
2022-02-14T17:02:23.2654947Z In file included from /var/lib/jenkins/workspace/xla/torch_xla/csrc/aten_xla_type.cpp:13:0:
2022-02-14T17:02:23.2655799Z /var/lib/jenkins/workspace/xla/torch_xla/csrc/XLANativeFunctions.h:183:19: error: candidate is: static at::Tensor torch_xla::XLANativeFunctions::gelu_backward(const at::Tensor&, const at::Tensor&, c10::string_view)
2022-02-14T17:02:23.2656663Z  static at::Tensor gelu_backward(const at::Tensor & grad_output, const at::Tensor & self, c10::string_view approximate);
2022-02-14T17:02:23.2657146Z                    ^~~~~~~~~~~~~
2022-02-14T17:02:24.9910381Z [9/179] c++ -MMD -MF /var/lib/jenkins/workspace/xla/build/temp.linux-x86_64-3.7/torch_xla/csrc/convolution.o.d -pthread -B /opt/conda/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/var/lib/jenkins/workspace/xla -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-bin -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow/external/protobuf_archive/src -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_protobuf/src -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow/external/eigen_archive -I/var/lib/jenkins/workspace/xla/third_party/tensorflow/bazel-tensorflow/external/com_google_absl -I/var/lib/jenkins/workspace -I/var/lib/jenkins/workspace/torch/csrc -I/var/lib/jenkins/workspace/torch/lib/tmp_install/include -I/opt/conda/lib/python3.7/site-packages/torch/include -I/opt/conda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/opt/conda/lib/python3.7/site-packages/torch/include/TH -I/opt/conda/lib/python3.7/site-packages/torch/include/THC -I/opt/conda/include/python3.7m -c -c /var/lib/jenkins/workspace/xla/torch_xla/csrc/convolution.cpp -o /var/lib/jenkins/workspace/xla/build/temp.linux-x86_64-3.7/torch_xla/csrc/convolution.o -std=c++14 -Wno-sign-compare -Wno-deprecated-declarations -Wno-return-type -DNDEBUG -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_clang"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1002"' -DTORCH_EXTENSION_NAME=_XLAC -D_GLIBCXX_USE_CXX11_ABI=1
2022-02-14T17:02:24.9914515Z cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
2022-02-14T17:02:24.9915098Z In file included from /var/lib/jenkins/workspace/c10/util/Logging.h:28:0,
2022-02-14T17:02:24.9915590Z                  from /var/lib/jenkins/workspace/c10/core/TensorImpl.h:14,
2022-02-14T17:02:24.9916288Z                  from /opt/conda/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:21,
2022-02-14T17:02:24.9916988Z                  from /opt/conda/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
2022-02-14T17:02:24.9917562Z                  from /var/lib/jenkins/workspace/torch/csrc/autograd/function_hook.h:5,
2022-02-14T17:02:24.9918085Z                  from /var/lib/jenkins/workspace/torch/csrc/autograd/variable.h:7,

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI.

@ansley ansley marked this pull request as draft February 1, 2022 18:12
@ansley ansley removed the request for review from ezyang February 1, 2022 18:13
@ansley ansley marked this pull request as ready for review February 3, 2022 14:30
@ansley ansley changed the title from "[DRAFT] Port amax to structured kernel" to "Port amax to structured kernel" on Feb 3, 2022
@ansley ansley requested a review from bdhirsh February 3, 2022 14:30
@ansley ansley force-pushed the structured_amax branch 4 times, most recently from 4a2ebde to aa0d82b, on February 7, 2022 21:20
@ansley ansley requested a review from bdhirsh February 7, 2022 23:03

bdhirsh commented Feb 8, 2022

I see this error from CI (link):

AssertionError: The supported dtypes for amax on cpu according to its OpInfo are
        {torch.bfloat16, torch.float16, torch.uint8, torch.bool, torch.float32, torch.int8, torch.float64, torch.int16, torch.int32, torch.int64}, but the detected supported dtypes are {torch.bfloat16, torch.complex128, torch.float16, torch.uint8, torch.bool, torch.float32, torch.int8, torch.float64, torch.int16, torch.int32, torch.complex64, torch.int64}.
        The following dtypes should be added to the OpInfo: {torch.complex128, torch.complex64}.

I'm not sure what's causing that just from staring at it, though; I'd have to repro it locally.

It looks like the kernel eventually calls max_values_stub, which calls the kernel here. That kernel doesn't have any explicit support for complex dtypes, so I'm not sure why the OpInfo tests are saying that there's support for it with your change.

You can repro that OpInfo test with `python test/test_ops.py TestCommonCPU.test_dtypes_amax_cpu`.


bdhirsh commented Feb 8, 2022

ahh I think I found it. In your meta function, you use this to get the output dtype: get_result_or_self_value_dtype. That's implemented as (here):


ScalarType get_result_or_self_value_dtype(
    const Tensor& self,
    const Tensor& result,
    const c10::optional<ScalarType>& dtype) {
  if (result.defined()) {
    // An out= tensor was provided; use its dtype.
    return result.scalar_type();
  } else {
    // No out= tensor: use the explicit dtype argument if given,
    // otherwise fall back to the value type of self.
    return dtype.value_or(toValueType(self.scalar_type()));
  }
}

And it looks like toValueType() is written to explicitly remove the "complex" part of the dtype (here). That ends up... forcing the output dtype to be a float even if the input is ComplexFloat. The existing amax() kernel knows to error out when it sees complex dtypes, but now it (incorrectly) sees an ordinary float dtype and stops failing, which is why you see the test failure.
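
For reference, the mapping in question is essentially the following (a sketch of c10's toValueType() from c10/core/ScalarType.h, abridged from memory, so treat it as illustrative):

// Sketch of c10::toValueType(): a complex dtype maps to its underlying
// real ("value") dtype; every other dtype passes through unchanged.
static inline ScalarType toValueType(ScalarType t) {
  switch (t) {
    case ScalarType::ComplexFloat:
      return ScalarType::Float;
    case ScalarType::ComplexDouble:
      return ScalarType::Double;
    default:
      return t;
  }
}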

My guess is that you used that helper function because it's used by some similar ops in that file, which seems totally reasonable. The easiest fix is probably to inline the logic from get_result_or_self_value_dtype() directly in your meta function, minus the toValueType() call. That way you preserve the existing behavior.
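
Concretely, the meta function's dtype computation could then look something like this (just a sketch; infer_amax_dtype and the argument names are illustrative, not the actual PR code):

// Hypothetical inlined dtype logic for the amax meta function, with the
// toValueType() call dropped so a complex input keeps its complex dtype
// and the kernel's existing complex-dtype error still fires.
ScalarType infer_amax_dtype(
    const Tensor& self,
    const Tensor& result,
    const c10::optional<ScalarType>& dtype) {
  if (result.defined()) {
    return result.scalar_type();
  }
  return dtype.value_or(self.scalar_type());  // no toValueType() here
}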

(@ansley lmk if you have any questions about the diagnosis!)

cc @anjali411 - it looks like the other reduction ops in this file have the same problem - if the input tensor is complex, the output tensor is forced not to be complex:

>>> import torch
>>> a = torch.ones(2, dtype=torch.complex64) # make a complex tensor
>>> torch.std(a).dtype # torch.std output is not a complex tensor anymore
torch.float32
>>> torch.norm(a).dtype # same thing with norm (because it also uses toValueType())
torch.float32
>>> (a + a).dtype # but ordinary ops like (+) propagate complex dtypes
torch.complex64

Do you know if that's intended behavior for reduction ops? Otherwise I can file an issue for it.

@anjali411

@bdhirsh that's the intended behavior for norm and std. We should disable amax for complex dtypes and rename toValueType to toRealValueType.
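
A minimal sketch of that check, assuming the structured meta function has self in scope (the exact placement and error message are illustrative):

// Hypothetical guard for the amax meta function: reject complex inputs
// explicitly. TORCH_CHECK and c10::isComplexType() are existing
// PyTorch/c10 facilities; the wording here is illustrative.
TORCH_CHECK(
    !c10::isComplexType(self.scalar_type()),
    "amax does not support complex dtypes");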

@IvanYashchuk IvanYashchuk removed their request for review February 10, 2022 20:00
@ansley ansley force-pushed the structured_amax branch 3 times, most recently from dfc8777 to 1bd8b7f, on February 13, 2022 15:13

@bdhirsh bdhirsh left a comment

LGTM, thanks for changing toValueType() in the PR too!

I'll approve, but I think you accidentally included changes to third_party/* folders - just back out those changes before landing!

@facebook-github-bot

@ansley has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit that referenced this pull request Feb 16, 2022
Summary: Pull Request resolved: #72124

Reviewed By: bdhirsh

Differential Revision: D34215708

Pulled By: ansley

fbshipit-source-id: fee887e331cb8bd9fab3d9d958ff13ac8d07be27
@github-actions

Hey @ansley.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Feb 17, 2022
Summary: Pull Request resolved: pytorch/pytorch#72124

Reviewed By: bdhirsh

Differential Revision: D34215708

Pulled By: ansley

fbshipit-source-id: fee887e331cb8bd9fab3d9d958ff13ac8d07be27
(cherry picked from commit 94dbb5b7e7e14a663dc02ecf5013fad10b8701b3)
@github-actions github-actions bot deleted the structured_amax branch February 15, 2024 01:51