forked from pytorch/pytorch
Integrate from upstream #223
Merged
Conversation
Summary: Prompted by Alex Falcon's input on the forums. Thank you! Pull Request resolved: pytorch#11896 Differential Revision: D9976831 Pulled By: SsnL fbshipit-source-id: 460af51049c289ed4ce529b7b6ae6314e2bdaae4
Summary: To update the onnx model zoo. Pull Request resolved: pytorch#11873 Reviewed By: BIT-silence Differential Revision: D9953369 Pulled By: houseroad fbshipit-source-id: 5e96a982b8029dceeb08e3bea4094bae053e1865
Summary: - Finishes unifying Half type in pytorch and caffe2 - As a side effect, aten_op works for fp16 now Pull Request resolved: pytorch#11676 Reviewed By: weiyangfb Differential Revision: D9829019 Pulled By: li-roy fbshipit-source-id: b8c9663873c10fe64c90ef180dc81af2e866674e
Summary: Pull Request resolved: pytorch#11785 Replace each instance of float16 with Half. Reviewed By: Yangqing Differential Revision: D9892158 fbshipit-source-id: b9225ca7bd5c84fd1c04a9d24b026c8b6cbff120
Summary:
This PR serves two purposes:
1. Design an abstraction over a serialization scheme for C++ modules, optimizers and tensors in general,
2. Add serialization to the ONNX/PyTorch proto format.
This is currently a rough prototype I coded up today, to get quick feedback.
For this I propose the following serialization interface within the C++ API:
```cpp
namespace torch { namespace serialize {
class Reader {
public:
virtual ~Reader() = default;
virtual void read(const std::string& key, Tensor& tensor, bool is_buffer = false) = 0;
virtual void finish() { }
};
class Writer {
public:
virtual ~Writer() = default;
virtual void write(const std::string& key, const Tensor& tensor, bool is_buffer = false) = 0;
virtual void finish() { }
};
}} // namespace torch::serialize
```
There are then subclasses of these two for (1) Cereal and (2) Protobuf (called the "DefaultWriter" and "DefaultReader" to hide the implementation details). See `torch/serialize/cereal.h` and `torch/serialize/default.h`. This abstraction, and the subclasses for these two backends, allow us to:
1. Provide a cereal-less serialization path that we can ship and iterate on going forward,
2. Provide frictionless backwards compatibility with existing C++ API uses, mainly StarCraft.
The user-facing API is (conceptually):
```cpp
void torch::save(const Module& module, Writer& writer);
void torch::save(const Optimizer& optimizer, Writer& writer);
void torch::read(Module& module, Reader& reader);
void torch::read(Optimizer& optimizer, Reader& reader);
```
with implementations for both optimizers and modules that write into the `Writer` and read from the `Reader`.
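A hypothetical call pattern for this API, shown for illustration only (the helper name `checkpoint`, the use of `torch::nn::Module` as the concrete `Module` type, and the explicit `finish()` call are assumptions, not part of this PR):
```cpp
// Sketch: route a module through the proposed free function and an abstract
// Writer; any concrete subclass (cereal- or protobuf-backed) could be passed.
void checkpoint(const torch::nn::Module& module,
                torch::serialize::Writer& writer) {
  torch::save(module, writer);  // expected to call writer.write() per tensor
  writer.finish();              // give the backend a chance to flush/close
}
```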
ebetica ezyang zdevito dzhulgakov
Pull Request resolved: pytorch#11619
Differential Revision: D9984664
Pulled By: goldsborough
fbshipit-source-id: e03afaa646221546e7f93bb8dfe3558e384a5847
Summary: Updated requirements.txt and conf.py. Pull Request resolved: pytorch#11835 Reviewed By: SsnL Differential Revision: D9941160 Pulled By: brianjo fbshipit-source-id: fbac91214558e6d17beff74261d990c7dc762038
Summary: Pull Request resolved: pytorch#11817 Blob::Serialize() and Blob::Deserialize() are now free functions SerializeBlob(), DeserializeBlob() instead. This takes away access to Blob internals from them and makes future refactorings easier. Reviewed By: ezyang Differential Revision: D9882726 fbshipit-source-id: 3251ebd4b53fc12f5e6924a6e4a8db3846ab3729
Summary: TSIA Pull Request resolved: pytorch#11920 Differential Revision: D9985212 Pulled By: Yangqing fbshipit-source-id: 5f8e7ac94101177740e791f44eaa8c8ec55a908c
Summary: This PR fixes pytorch#11913. In order to test for this, the model is serialized twice in `getExportImportCopy`. Pull Request resolved: pytorch#11915 Differential Revision: D9984697 Pulled By: soumith fbshipit-source-id: ae0250c179000c03db1522b99410f6ecb9681297
Summary: Upstream PR: https://gitlab.kitware.com/cmake/cmake/merge_requests/2391/diffs Pull Request resolved: pytorch#11880 Differential Revision: D9989119 Pulled By: soumith fbshipit-source-id: 66e87367127975a5f1619fe447f74e76f101b503
Summary: Stack: :black_circle: **pytorch#11900 Retainable is no more** [:yellow_heart:](https://our.intern.facebook.com/intern/diff/D9977505/) :white_circle: pytorch#11902 Refactor fastGet/fastSet for clarity, removing a null pointer check. [:yellow_heart:](https://our.intern.facebook.com/intern/diff/D9977654/) Kill it with fire Pull Request resolved: pytorch#11900 Differential Revision: D9979779 Pulled By: ezyang fbshipit-source-id: 0a437e7a0baadb6440e7dc39a01b4a406171faa7
Summary: I also fix a bug that crept in while we had incorrect semantics where UndefinedTensorImpl was a CPU tensor, and thus some moves which shouldn't have been legal didn't crash. Moving out the Tensor* also moved out the Tensor* in the blob, and it's not supported to store an undefined tensor in a blob. Pull Request resolved: pytorch#11738 Reviewed By: gchanan Differential Revision: D9847859 fbshipit-source-id: db6be0f76a8e6526a89fd0e87b6a23b9cc820c8d
Summary: There are two parts: - Optional tensors cannot be dispatch tensors because dispatch tensors cannot be optional. - While the kernel dealt with undefined grad_outs, the logistics around it did not fully accommodate grad_hy being undefined. Fixes: pytorch#11800 Thank you, mttk for the reproduction! Pull Request resolved: pytorch#11872 Differential Revision: D9978527 Pulled By: apaszke fbshipit-source-id: e622c288d2eac93bd8388e141fb773f2588e2b8f
Summary: Pull Request resolved: pytorch#11909 Differential Revision: D9979595 fbshipit-source-id: 07b1027bd6bd1605a31afd4f57bcd58e307fa41e
Summary: The sample code in the docstring of `torch.jit.createResolutionCallback` is not working: `createResolutionCallback()` gets the frame of `bar`. In order to get the frame of `baz`, one needs to use `createResolutionCallback(1)`. Pull Request resolved: pytorch#11921 Differential Revision: D9989123 Pulled By: soumith fbshipit-source-id: a7166defdccbbf6979f7df4c871298e6b9a2b415
…11933) Summary: We do this by being more NaN tolerant. Fixes: pytorch#9062 Pull Request resolved: pytorch#11933 Differential Revision: D9991129 Pulled By: soumith fbshipit-source-id: c99b04462c1bee90d00eeabb0c111de12f855f4d
Summary: - fix PR pytorch#11061 by moving `detach_()` and `set_requires_grad()` to `torch.tensor_ctor()` and `tensor.new_tensor`, and also removed warnings and `args_requires_grad` from `internal_new_from_data` - with this patch, the returned tensor from `tensor_ctor()` and `new_tensor` will be detached from the source tensor, and set requires_grad based on the input args - `torch.as_tensor` retains its behavior as documented gchanan apaszke Pull Request resolved: pytorch#11815 Differential Revision: D9932713 Pulled By: weiyangfb fbshipit-source-id: 4290cbc57bd449954faadc597c24169a7b2d8259
…eletion in THD cmake Differential Revision: D9985212 Original commit changeset: 5f8e7ac94101 fbshipit-source-id: 1783cbfc91008ab3db36bad7c1bf51e16da7fb2d
Summary: They aren't recognized anywhere in the JIT Pull Request resolved: pytorch#11910 Differential Revision: D9979968 Pulled By: apaszke fbshipit-source-id: bb2505a14e3b1e54d5c243f99c80a4f4d918b204
Summary: Deleted this section by mistake in last PR. Pull Request resolved: pytorch#11938 Reviewed By: SsnL Differential Revision: D9993258 Pulled By: brianjo fbshipit-source-id: 2552178cebd005a1105a22930c4d128c67247378
Summary: Fixes: pytorch#11905 Pull Request resolved: pytorch#11935 Differential Revision: D9991120 Pulled By: soumith fbshipit-source-id: b00ad4f405440664ae5228b229a2ba0a5d3d92f6
Summary: `-O0` is problematic for compiling sleef kernels since they consist of a bunch of vector intrinsics. In `-O0`, the compiler spills *every* intermediate value to the stack. In one example (TestEndToEndHybridFrontendModels.test_snli in test_jit.py) the function `Sleef_tanhf8_u10avx2` would spill 30kB of AVX registers onto the stack and run two orders of magnitude slower than in opt mode, causing the test to take minutes rather than seconds. I've verified that this behavior is not present with `-O1` Pull Request resolved: pytorch#11942 Differential Revision: D9994658 Pulled By: jamesr66a fbshipit-source-id: cdd9474c6ae3aa9898d5715ac19a900f5f90468a
Summary: Spuriously added in pytorch#11261. I had a PR to catch these automatically (pytorch#11279), but it had some issues passing on some CI environments but not others (e.g. for `test_nn_group_norm`), any ideas? Pull Request resolved: pytorch#11916 Differential Revision: D9992065 Pulled By: driazati fbshipit-source-id: 05cfa8ed9af939e8ffd5827847ee7bfe0be799b2
Summary: - Disable addmm fusion. The reason for this is explained in the comment. - Tiny change in `stack.h` that lets us avoid constructing an unnecessary temporary `IValue` on the (C++) stack (it will only get created on the interpreter stack directly). - Fixed a correctness issue in requires grad propagation Pull Request resolved: pytorch#11654 Reviewed By: colesbury Differential Revision: D9813739 Pulled By: apaszke fbshipit-source-id: 23e83bc8605802f39bfecf447efad9239b9421c3
Summary: Previously, we didn't cast any 0-dim tensors used in CUDA operations. We can only avoid the casts for 0-dim CPU tensors used in CUDA operations. Fixes pytorch#11795 Pull Request resolved: pytorch#11808 Differential Revision: D9922406 Pulled By: colesbury fbshipit-source-id: 940b8a8534770aa5cd70d5d09b96be0f0f8146ff
Summary: This is pretty important because a common situation of passing LSTM hidden states as a tuple completely trashes performance of a network. Cleans up all our propagation/undef specialization passes, at a cost of increased complexity of `ArgumentSpec` and `GraphExecutor`. An alternative would be to simply flatten all tuple inputs to a graph ahead of time, but that might just end up being confusing in the future (you never know if you're working with a graph that can have tuple or not). Pull Request resolved: pytorch#11863 Differential Revision: D9992814 Pulled By: apaszke fbshipit-source-id: 0a565a3b23e32f8fa72c0534e07c1ce6187739fc
…rch#11877) Summary: Previously, aten::view returned a Dynamic type when attr::size is a prim::ListConstruct. See [this for a repro](https://gist.github.com/zou3519/cbd610472ba3369f556fa612a7d93b28). This prevented a pre-multipled lstm input graph from being fusible (aten::view is necessary to do premultiplication). If aten::view is passed an output of a prim::ListConstruct node, then shape prop should be able to figure out its TensorType because we statically know the number of inputs to prim::ListConstruct. This PR implements that. Pull Request resolved: pytorch#11877 Differential Revision: D9972356 Pulled By: zou3519 fbshipit-source-id: cb87786f6e7f222d4b8f07d8f2a9de34859cb6a5
Summary: This eliminates the need for any heuristics regarding stack size limits. This is a re-do of pytorch#11534 with a fix to properly handle cases where multiple edges exist between a pair of functions. Pull Request resolved: pytorch#11611 Differential Revision: D9991198 Pulled By: resistor fbshipit-source-id: fecd2c5cac7e78f82a0f20cf33268bb1617bb4a0
Summary: - fixes pytorch#10723 - migrate PReLU to ATen and deprecate legacy PReLU - performance: CPU with weight.numel() = 1 ``` >>> m = nn.PReLU() >>> x = torch.randn(100, 100, 100, requires_grad=True) >>> %timeit -r 100 y = m(x) 100 loops, best of 100: 9.43 ms per loop >>> y = m(x).sum() >>> %timeit -r 100 y.backward(retain_graph=True) 10 loops, best of 100: 24.4 ms per loop >>> m = nn.PReLU() >>> x = torch.randn(100, 100, 100, requires_grad=True) >>> %timeit -r 100 y = m(x) 1000 loops, best of 100: 695 µs per loop >>> y = m(x).sum() >>> %timeit -r 100 y.backward(retain_graph=True) 100 loops, best of 100: 2.47 ms per loop ``` CPU with weight.numel() = channels ``` >>> m = nn.PReLU(100) >>> x = torch.randn(100, 100, 100, requires_grad=True) >>> %timeit -r 100 y = m(x) 1000 loops, best of 100: 603 µs per loop >>> y = m(x).sum() >>> %timeit -r 100 y.backward(retain_graph=True) 100 loops, best of 100: 13.3 ms per loop >>> m = nn.PReLU(100) >>> x = torch.randn(100, 100, 100, requires_grad=True) >>> %timeit -r 100 y = m(x) 1000 loops, best of 100: 655 µs per loop >>> y = m(x).sum() >>> %timeit -r 100 y.backward(retain_graph=True) 100 loops, best of 100: 2.45 ms per loop ``` CUDA with weight.numel() = 1 ``` >>> m = nn.PReLU().cuda() >>> x = torch.randn(100, 100, 100, requires_grad=True).cuda() >>> %timeit -r 100 torch.cuda.synchronize(); y = m(x); torch.cuda.synchronize(); 10000 loops, best of 100: 187 µs per loop >>> y = m(x).sum() >>> %timeit -r 100 torch.cuda.synchronize(); y.backward(retain_graph=True); torch.cuda.synchronize(); 100 loops, best of 100: 2.01 ms per loop >>> m = nn.PReLU().cuda() >>> x = torch.randn(100, 100, 100, requires_grad=True).cuda() >>> %timeit -r 100 torch.cuda.synchronize(); y = m(x); torch.cuda.synchronize(); 1000 loops, best of 100: 195 µs per loop >>> y = m(x).sum() >>> %timeit -r 100 torch.cuda.synchronize(); y.backward(retain_graph=True); torch.cuda.synchronize(); 100 loops, best of 100: 2.28 ms per loop ``` CUDA with weight.numel() = channel ``` >>> m = nn.PReLU(100).cuda() >>> x = torch.randn(100, 100, 100, requires_grad=True).cuda() >>> %timeit -r 100 torch.cuda.synchronize(); y = m(x); torch.cuda.synchronize(); 1000 loops, best of 100: 174 µs per loop >>> y = m(x).sum() >>> %timeit -r 100 torch.cuda.synchronize(); y.backward(retain_graph=True); torch.cuda.synchronize(); 100 loops, best of 100: 2.27 ms per loop >>> m = nn.PReLU(100).cuda() >>> x = torch.randn(100, 100, 100, requires_grad=True).cuda() >>> %timeit -r 100 torch.cuda.synchronize(); y = m(x); torch.cuda.synchronize(); 10000 loops, best of 100: 181 µs per loop >>> y = m(x).sum() >>> %timeit -r 100 torch.cuda.synchronize(); y.backward(retain_graph=True); torch.cuda.synchronize(); 100 loops, best of 100: 2.26 ms per loop ``` The huge performance regression in CPU when weight.numel() = 1 is addressed by replacing at::CPU_tensor_apply* with parallelized kernels. ezyang SsnL zou3519 soumith Pull Request resolved: pytorch#11758 Differential Revision: D9995799 Pulled By: weiyangfb fbshipit-source-id: d289937c78075f46a54dafbde92fab0cc4b5b86e
Summary: Annotations for DAI Reviewed By: duc0 Differential Revision: D9805867 fbshipit-source-id: 9ce2d9f3984817510ec8362a281f39878aad55e7
Summary: This PR is a large codemod to rewrite all C++ API tests with GoogleTest (gtest) instead of Catch. You can largely trust me to have correctly code-modded the tests, so it's not required to review every of the 2000+ changed lines. However, additional things I changed were: 1. Moved the cmake parts for these tests into their own `CMakeLists.txt` under `test/cpp/api` and calling `add_subdirectory` from `torch/CMakeLists.txt` 2. Fixing DataParallel tests which weren't being compiled because `USE_CUDA` wasn't correctly being set at all. 3. Updated README ezyang ebetica Pull Request resolved: pytorch#11953 Differential Revision: D9998883 Pulled By: goldsborough fbshipit-source-id: affe3f320b0ca63e7e0019926a59076bb943db80
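For illustration, a minimal test in the gtest style this codemod moves to (the test name and assertions are made up, not taken from the PR):
```cpp
// Catch's TEST_CASE/REQUIRE roughly maps onto gtest's TEST/ASSERT_*.
#include <gtest/gtest.h>

TEST(CppApiExample, BasicInvariant) {
  const int numel = 2 * 3;
  ASSERT_EQ(numel, 6);  // fatal assertion: stops this test on failure
  EXPECT_GT(numel, 0);  // non-fatal assertion: test continues on failure
}
```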
Summary: Pull Request resolved: pytorch#11501 This doesn't really belong to TypeMeta, moving it to the error handling header Reviewed By: ezyang Differential Revision: D9763424 fbshipit-source-id: 127a8246171ab3a4475f2767d2dc1cc13c486a2e
Summary: Some of these symbols are used by device_test.cc. pytorch@d0db23e Pull Request resolved: pytorch#11965 Reviewed By: bwasti Differential Revision: D10002439 Pulled By: bddppq fbshipit-source-id: 4ae95b9c888b3c7685d0ffdbcbfa3441bcf90091
Summary: onnx/onnx@c4734c6 Pull Request resolved: pytorch#11958 Differential Revision: D10002779 Pulled By: bddppq fbshipit-source-id: 8bd7dfc8fdaf0b699a61f5b228f7102a16b92258
Summary: Old per-API+arch headers reside in /opt/android_ndk/r*/platforms/android-*/arch-*/usr/include/ New Unified headers reside in /opt/android_ndk/r*/sysroot/usr/include/ Unified headers are not exactly drop-in replacements for the old ones. Old headers had some nested includes that are absent in the unified versions, so we need to explicitly include them. Reviewed By: mzlee Differential Revision: D9952200 fbshipit-source-id: 6515e1d1ab576069db499c3fb23a69d507279c8c
Summary: Pull Request resolved: pytorch#11943 See title Reviewed By: ezyang Differential Revision: D9992645 fbshipit-source-id: e8f80d6ea762971513e5e8072975ceea53e1f11a
…11946) Summary: See title Pull Request resolved: pytorch#11946 Differential Revision: D9994625 Pulled By: cpuhrsch fbshipit-source-id: fca3d48ecbdab06ce53249db2402fc4613da4d21
lcskrishna pushed a commit to lcskrishna/pytorch that referenced this pull request on May 15, 2023
When a tensor is resized, the reference array to its sizes may become invalid. Make a copy in advance.
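A minimal sketch of the pattern described above, assuming a caller that still needs the old sizes after the resize (the function name and the plain `std::vector` copy are illustrative, not the literal patch):
```cpp
#include <vector>
#include <ATen/ATen.h>

void resize_keeping_old_sizes(at::Tensor& self, at::IntArrayRef new_size) {
  // self.sizes() is an ArrayRef viewing memory owned by the TensorImpl;
  // resize_ may reallocate that memory, so take an owning copy first.
  std::vector<int64_t> old_sizes(self.sizes().begin(), self.sizes().end());
  self.resize_(new_size);
  // old_sizes stays valid here; a saved self.sizes() view might not.
}
```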
<details>
<summary>ASAN report</summary>
```
=================================================================
==1115867==ERROR: AddressSanitizer: heap-use-after-free on address 0x61000013d790 at pc 0x03ff8e7da360 bp 0x03fff53c83a0 sp 0x03fff53c8390
READ of size 8 at 0x61000013d790 thread T0
#0 0x3ff8e7da35f in c10::SymInt::is_heap_allocated() const /home/user/pytorch/c10/core/SymInt.h:154
ROCm#1 0x3ff8e7da35f in c10::SymInt::maybe_as_int() const /home/user/pytorch/c10/core/SymInt.h:215
ROCm#2 0x3ff8e7d0a6d in c10::SymInt::sym_eq(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.cpp:69
ROCm#3 0x3ff7a9ab0bd in c10::SymInt::operator==(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.h:177
ROCm#4 0x3ff7a9aaedd in bool std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-
v11/bits/stl_algobase.h:1162
ROCm#5 0x3ff7a9aae4b in bool std::__equal_aux1<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/
stl_algobase.h:1211
ROCm#6 0x3ff7a9aae05 in bool std::__equal_aux<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/s
tl_algobase.h:1219
ROCm#7 0x3ff7a9aad97 in bool std::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_alg
obase.h:1556
ROCm#8 0x3ff4b23c771 in c10::ArrayRef<c10::SymInt>::equals(c10::ArrayRef<c10::SymInt>) const /home/user/pytorch/c10/util/ArrayRef.h:188
ROCm#9 0x3ff4cb91bc1 in bool c10::operator!=<c10::SymInt>(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>) /home/user/pytorch/c10/util/ArrayRef.h:341
ROCm#10 0x3ff6d1b57ff in torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/torch/csrc/autograd/Variab
leTypeManual.cpp:408
ROCm#11 0x3ff6d1e59c7 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
> >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
ROCm#12 0x3ff6d1e59c7 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::Sy
mInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::Disp
atchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
ROCm#13 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
ROCm#14 0x3ff51ca6e8f in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
ROCm#15 0x3ff51ca6e8f in at::Tensor const& c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Ten
sor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)
const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
ROCm#16 0x3ff5182006b in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c
10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
ROCm#17 0x3ff5182006b in at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2144
ROCm#18 0x3ff6d1d5e07 in at::redispatch::resize__symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/RedispatchFunctions.h:2847
ROCm#19 0x3ff6d1bbb67 in torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pyto
rch/torch/csrc/autograd/VariableTypeManual.cpp:243
ROCm#20 0x3ff6d1bd197 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10
::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFu
nctionIntoFunctor.h:13
ROCm#21 0x3ff6d1bd197 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor
const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c
10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor
.h:480
ROCm#22 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
ROCm#23 0x3ff5181ead1 in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
ROCm#24 0x3ff5181ead1 in at::Tensor const& c10::Dispatcher::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor co
nst& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/at
en/src/ATen/core/dispatch/Dispatcher.h:639
ROCm#25 0x3ff5181ead1 in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>,
c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
ROCm#26 0x3ff5181ead1 in at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2137
ROCm#27 0x3ff79b44fcf in at::Tensor::resize__symint(c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const aten/src/ATen/core/TensorBody.h:2452
ROCm#28 0x3ff79a802db in torch::autograd::THPVariable_resize_(_object*, _object*, _object*)::$_0::operator()(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/us
er/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13417
ROCm#29 0x3ff7999f1eb in torch::autograd::THPVariable_resize_(_object*, _object*, _object*) /home/user/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13419
ROCm#30 0x3ffa2c9b009 in method_vectorcall_VARARGS_KEYWORDS Objects/descrobject.c:344
ROCm#31 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#32 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#33 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#34 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
ROCm#35 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#36 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#37 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#38 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#39 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
ROCm#40 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
ROCm#41 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
ROCm#42 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#43 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#44 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#45 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#46 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#47 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
ROCm#48 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
ROCm#49 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
ROCm#50 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#51 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#52 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#53 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#54 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#55 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#56 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#57 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
ROCm#58 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#59 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#60 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#61 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#62 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
ROCm#63 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#64 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#65 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#66 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#67 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#68 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#69 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#70 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#71 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#72 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#73 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
ROCm#74 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#75 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#76 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#77 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#78 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
ROCm#79 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#80 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#81 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#82 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#83 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#84 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#85 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#86 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#87 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
ROCm#88 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#89 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#90 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#91 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#92 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#93 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#94 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#95 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#96 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
ROCm#97 0x3ffa2c8ab9b in PyVectorcall_Call Objects/call.c:267
ROCm#98 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
ROCm#99 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
ROCm#100 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
ROCm#101 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#102 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#103 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#104 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#105 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
ROCm#106 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#107 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
ROCm#108 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
ROCm#109 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
ROCm#110 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#111 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#112 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#113 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#114 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#115 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#116 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#117 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#118 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#119 0x3ffa2dff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
ROCm#120 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#121 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#122 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#123 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#124 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
ROCm#125 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
ROCm#126 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
ROCm#127 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#128 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#129 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#130 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#131 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#132 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#133 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#134 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#135 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#136 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#137 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#138 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#139 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
ROCm#140 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#141 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#142 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#143 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#144 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#145 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#146 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#147 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
ROCm#148 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#149 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
ROCm#150 0x3ffa2c8ad17 in _PyObject_Call Objects/call.c:305
ROCm#151 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
ROCm#152 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
ROCm#153 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#154 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#155 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#156 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#157 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#158 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#159 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#160 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#161 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#162 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#163 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#164 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#165 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
ROCm#166 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#167 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#168 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#169 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#170 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#171 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#172 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#173 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#174 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
ROCm#175 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
ROCm#176 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
ROCm#177 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#178 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#179 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#180 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#181 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#182 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#183 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#184 0x3ffa2dff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#185 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#186 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#187 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#188 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#189 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#190 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#191 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#192 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#193 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#194 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#195 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#196 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
ROCm#197 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
ROCm#198 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
ROCm#199 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#200 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#201 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#202 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#203 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#204 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#205 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#206 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#207 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#208 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#209 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#210 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#211 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
ROCm#212 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#213 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#214 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#215 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#216 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#217 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#218 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#219 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
ROCm#220 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#221 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
ROCm#222 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
ROCm#223 0x3ffa2df0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
ROCm#224 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#225 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#226 0x3ffa2dffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#227 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#228 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#229 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#230 0x3ffa2c8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#231 0x3ffa2c8ac65 in _PyObject_Call Objects/call.c:290
ROCm#232 0x3ffa2c8ada9 in PyObject_Call Objects/call.c:317
ROCm#233 0x3ffa2e059c7 in do_call_core Python/ceval.c:5943
ROCm#234 0x3ffa2dffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#235 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#236 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#237 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#238 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#239 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#240 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#241 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#242 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#243 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#244 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#245 0x3ffa2c8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#246 0x3ffa2c8eddd in method_vectorcall Objects/classobject.c:53
ROCm#247 0x3ffa2df00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#248 0x3ffa2df013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#249 0x3ffa2e05447 in call_function Python/ceval.c:5891
ROCm#250 0x3ffa2dff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#251 0x3ffa2df052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#252 0x3ffa2e02b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#253 0x3ffa2c8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#254 0x3ffa2c8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
ROCm#255 0x3ffa2c8b271 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#256 0x3ffa2d3f307 in slot_tp_call Objects/typeobject.c:7494
ROCm#257 0x3ffa2c8a933 in _PyObject_MakeTpCall Objects/call.c:215
0x61000013d790 is located 80 bytes inside of 192-byte region [0x61000013d740,0x61000013d800)
freed by thread T0 here:
#0 0x3ffa3237de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
ROCm#1 0x3ff8e7e3221 in c10::TensorImpl::~TensorImpl() /home/user/pytorch/c10/core/TensorImpl.cpp:75
previously allocated by thread T0 here:
#0 0x3ffa323734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
ROCm#1 0x3ff4aeeb3d1 in c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_null_type<c10::TensorImpl> > c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_nul
l_type<c10::TensorImpl> >::make<c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >, c10::DispatchKeySet&, caffe2::TypeMeta&>(c10::intrusive_ptr<c10::S
torageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >&&, c10::DispatchKeySet&, caffe2::TypeMeta&) /home/user/pytorch/c10/util/intrusive_ptr.h:498
ROCm#2 0x3ff76f79e17 (/home/user/pytorch/build/lib.linux-s390x-cpython-310/torch/lib/libtorch_cpu.so+0x2fb79e17)
SUMMARY: AddressSanitizer: heap-use-after-free /home/user/pytorch/c10/core/SymInt.h:154 in c10::SymInt::is_heap_allocated() const
Shadow bytes around the buggy address:
0x100c2000027aa0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
0x100c2000027ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x100c2000027ac0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
0x100c2000027ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x100c2000027ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c2000027af0: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd
0x100c2000027b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
0x100c2000027b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x100c2000027b20: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
0x100c2000027b30: 00 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa
0x100c2000027b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==1115867==ABORTING
```
</details>
<details>
<summary>Additional backtraces (not full)</summary>
Memory deallocation:
```
#0 operator delete (ptr=0x61000013d740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
ROCm#1 0x000003ffa77e3222 in c10::TensorImpl::~TensorImpl (this=0x61000013d740) at /home/user/pytorch/c10/core/TensorImpl.cpp:75
ROCm#2 0x000003ff63e76e8c in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:291
ROCm#3 0x000003ff63e76910 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:370
ROCm#4 0x000003ff63e67240 in at::TensorBase::~TensorBase (this=0x3ffd7ec8230) at /home/user/pytorch/aten/src/ATen/core/TensorBase.h:80
ROCm#5 0x000003ff63e85ee0 in at::Tensor::~Tensor (this=0x3ffd7ec8230) at aten/src/ATen/core/TensorBody.h:90
ROCm#6 0x000003ff63f67304 in resize__functionalization (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:173
ROCm#7 0x000003ff63f89258 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (
this=0x6030000390a0, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
ROCm#8 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (functor=0x6030000390a0, dispatchKeySet=..., args=..., args=...,
args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
ROCm#9 0x000003ff6aca560a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > (
unboxed_kernel_func=0x3ff63f88a80 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>, functor=0x6030000390a0,
dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
ROCm#10 0x000003ff6aca715c in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1b28, opHandle=...,
dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:96
ROCm#11 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
this=0x3ff919400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
ROCm#12 0x000003ff6a82006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
this=0x3ff919a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
ROCm#13 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
ROCm#14 0x000003ff861d5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
ROCm#15 0x000003ff861b579e in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:401
```
Memory access:
```
#0 c10::SymInt::maybe_as_int (this=0x61000013d790) at /home/user/pytorch/c10/core/SymInt.h:215
ROCm#1 0x000003ff734d0a6e in c10::SymInt::sym_eq (this=0x61000013d790, sci=...) at /home/user/pytorch/c10/core/SymInt.cpp:69
ROCm#2 0x000003ff5f6ab0be in c10::SymInt::operator== (this=0x61000013d790, o=...) at /home/user/pytorch/c10/core/SymInt.h:177
ROCm#3 0x000003ff5f6aaede in std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162
ROCm#4 0x000003ff5f6aae4c in std::__equal_aux1<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211
ROCm#5 0x000003ff5f6aae06 in std::__equal_aux<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219
ROCm#6 0x000003ff5f6aad98 in std::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556
ROCm#7 0x000003ff2ff3c772 in c10::ArrayRef<c10::SymInt>::equals (this=0x3ffed7c9900, RHS=...) at /home/user/pytorch/c10/util/ArrayRef.h:188
ROCm#8 0x000003ff31891bc2 in c10::operator!=<c10::SymInt> (a1=..., a2=...) at /home/user/pytorch/c10/util/ArrayRef.h:341
ROCm#9 0x000003ff51eb5800 in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408
ROCm#10 0x000003ff51ee59c8 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c
10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
> >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (this=0x6030007dca40, args=..., args=..., args=..., args=...)
at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
ROCm#11 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt
>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<
c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
ROCm#12 0x000003ff369a512a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (
unboxed_kernel_func=0x3ff51ee51f0 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::Ar
rayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKern
el*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>, functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
ROCm#13 0x000003ff369a6e90 in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1bc8, opHandle=...,
dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
ROCm#14 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::Arr
ayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
this=0x3ff5d6400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
ROCm#15 0x000003ff3652006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&,
c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
this=0x3ff5d6a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
ROCm#16 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
ROCm#17 0x000003ff51ed5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
ROCm#18 0x000003ff51ebbb68 in torch::autograd::VariableType::(anonymous namespace)::resize_ (ks=..., self=..., size=..., optional_memory_format=...)
at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:243
```
</details>
Pull Request resolved: pytorch#101064
Approved by: https://github.com/Skylion007, https://github.com/albanD
alugorey pushed a commit to alugorey/pytorch that referenced this pull request May 17, 2023
arguments() returns a reference to the vector member of the object returned by the schema() call.
When the object returned by schema() is destroyed, the vector is deallocated with it;
its lifetime is not extended.
This issue was detected while running `pytest -v test/mobile/test_lite_script_type.py -k test_nest_typing_namedtuple_custom_classtype` with ASAN.
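For illustration only (not the actual c10 source): a minimal C++ sketch of the dangling-reference pattern described above, using simplified stand-ins for `FunctionSchema` and `arguments()`, together with the usual fix of binding the schema to a named local so the returned reference stays valid.
```cpp
#include <string>
#include <vector>

// Simplified stand-ins for the real c10 types; only the ownership shape matters.
struct Argument {
  std::string name;
};

struct FunctionSchema {
  std::vector<Argument> args;
  // Returns a reference into this FunctionSchema object.
  const std::vector<Argument>& arguments() const { return args; }
};

struct Method {
  // Returns the schema by value, so every call produces a temporary.
  FunctionSchema schema() const { return FunctionSchema{{{"self"}, {"input"}}}; }
};

int main() {
  Method m;

  // BUG: the temporary FunctionSchema dies at the end of this statement;
  // `dangling` then refers to a freed vector (the heap-use-after-free below).
  const std::vector<Argument>& dangling = m.schema().arguments();
  (void)dangling;  // do not actually use it

  // FIX: keep the schema alive in a named local for as long as the
  // arguments() reference is used.
  FunctionSchema schema = m.schema();
  const std::vector<Argument>& args = schema.arguments();
  for (const Argument& a : args) {
    (void)a;  // safe: `schema` outlives this loop
  }
  return 0;
}
```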
<details>
<summary>ASAN output</summary>
```
==1134126==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d0005a5790 at pc 0x03ff844488d8 bp 0x03fff584afe8 sp 0x03fff584afd8
READ of size 8 at 0x60d0005a5790 thread T0
#0 0x3ff844488d7 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&) /usr/lib/gcc/s390x-i
bm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028
#1 0x3ff8444293f in std::vector<c10::Argument, std::allocator<c10::Argument> >::begin() const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_vector.h:821
#2 0x3ff84d807d1 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:617
ROCm#3 0x3ff84d80305 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
ROCm#4 0x3ff84856871 in pybind11::detail::type_caster<c10::IValue, void>::cast(c10::IValue, pybind11::return_value_policy, pybind11::handle) /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
ROCm#5 0x3ff85318191 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is
_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me
thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const /home/user/pytorch/cmake/../third_party/pybin
d11/include/pybind11/pybind11.h:249
ROCm#6 0x3ff85317cfd in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is
_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me
thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) /home/user/pytorch/cmake/../third_party/pybind11/incl
ude/pybind11/pybind11.h:224
ROCm#7 0x3ff82ee52e9 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
ROCm#8 0x3ffab002903 in cfunction_call Objects/methodobject.c:543
ROCm#9 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
ROCm#10 0x3ffaaf8e919 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
ROCm#11 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
ROCm#12 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#13 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#14 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#15 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#16 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#17 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#18 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#19 0x3ffaaf8a615 in _PyObject_FastCallDictTstate Objects/call.c:142
ROCm#20 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#21 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
ROCm#22 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
ROCm#23 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
ROCm#24 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#25 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#26 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#27 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#28 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#29 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#30 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#31 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
ROCm#32 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#33 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#34 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#35 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#36 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#37 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#38 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#39 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#40 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#41 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#42 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
ROCm#43 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#44 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#45 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#46 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#47 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
ROCm#48 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#49 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#50 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#51 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#52 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#53 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#54 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#55 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#56 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
ROCm#57 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#58 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#59 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#60 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#61 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#62 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#63 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#64 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#65 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
ROCm#66 0x3ffaaf8ab9b in PyVectorcall_Call Objects/call.c:267
ROCm#67 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
ROCm#68 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
ROCm#69 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
ROCm#70 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#71 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#72 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#73 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#74 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
ROCm#75 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#76 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
ROCm#77 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
ROCm#78 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
ROCm#79 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#80 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#81 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#82 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#83 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#84 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#85 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#86 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#87 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#88 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
ROCm#89 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#90 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#91 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#92 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#93 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
ROCm#94 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
ROCm#95 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
ROCm#96 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#97 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#98 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#99 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#100 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#101 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#102 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#103 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#104 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#105 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#106 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#107 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#108 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
ROCm#109 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#110 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#111 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#112 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#113 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#114 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#115 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#116 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
ROCm#117 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#118 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
ROCm#119 0x3ffaaf8ad17 in _PyObject_Call Objects/call.c:305
ROCm#120 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
ROCm#121 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
ROCm#122 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#123 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#124 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#125 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#126 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#127 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#128 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#129 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#130 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#131 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#132 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#133 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#134 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
ROCm#135 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#136 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#137 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#138 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#139 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#140 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#141 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#142 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#143 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
ROCm#144 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
ROCm#145 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
ROCm#146 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#147 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#148 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#149 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#150 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#151 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#152 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#153 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#154 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#155 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#156 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#157 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#158 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#159 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#160 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#161 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#162 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#163 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#164 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#165 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
ROCm#166 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
ROCm#167 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
ROCm#168 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#169 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#170 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#171 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#172 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#173 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#174 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#175 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#176 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#177 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#178 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#179 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#180 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
ROCm#181 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#182 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#183 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#184 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#185 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#186 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#187 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#188 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
ROCm#189 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#190 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
ROCm#191 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
ROCm#192 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
ROCm#193 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#194 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#195 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#196 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#197 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#198 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#199 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#200 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
ROCm#201 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
ROCm#202 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
ROCm#203 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#204 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#205 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#206 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#207 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#208 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#209 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#210 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#211 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#212 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#213 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#214 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#215 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
ROCm#216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#217 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#218 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#219 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#220 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#221 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#222 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#223 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
ROCm#224 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#225 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
ROCm#226 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
ROCm#227 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
ROCm#228 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#229 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#230 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
ROCm#231 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#232 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#233 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#234 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#235 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#236 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#237 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#238 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#239 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#240 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#241 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#242 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#243 0x3ffab105447 in call_function Python/ceval.c:5891
ROCm#244 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#245 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#246 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
ROCm#247 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
ROCm#248 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
ROCm#249 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
0x60d0005a5790 is located 80 bytes inside of 136-byte region [0x60d0005a5740,0x60d0005a57c8)
freed by thread T0 here:
#0 0x3ffab537de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1 0x3ff55984fdb in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate(std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>*, unsigned long) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145
previously allocated by thread T0 here:
#0 0x3ffab53734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
#1 0x3ff5598443f in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate(unsigned long, void const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
#2 0x3fff5849ecf ([stack]+0xb2ecf)
SUMMARY: AddressSanitizer: heap-use-after-free /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&)
Shadow bytes around the buggy address:
0x100c1a000b4aa0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa
0x100c1a000b4ab0: fa fa fa fa fd fd fd fd fd fd fd fd fd fd fd fd
0x100c1a000b4ac0: fd fd fd fd fd fa fa fa fa fa fa fa fa fa fd fd
0x100c1a000b4ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
0x100c1a000b4ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c1a000b4af0: fd fd[fd]fd fd fd fd fd fd fa fa fa fa fa fa fa
0x100c1a000b4b00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x100c1a000b4b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x100c1a000b4b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x100c1a000b4b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x100c1a000b4b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==1134126==ABORTING
```
Additional backtraces (not full):
Allocation:
```
#0 __memset_z196 () at ../sysdeps/s390/memset-z900.S:144
#1 0x000003ff96f3072a in __asan::Allocator::Allocate (this=this@entry=0x3ff97041eb8 <__asan::instance>, size=size@entry=136, alignment=8, alignment@entry=0, stack=<optimized out>,
stack@entry=0x3ffdbb45d78, alloc_type=<optimized out>, can_fill=true) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:599
#2 0x000003ff96f2c088 in __asan::asan_memalign (alignment=alignment@entry=0, size=size@entry=136, stack=stack@entry=0x3ffdbb45d78, alloc_type=alloc_type@entry=__asan::FROM_NEW)
at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:1039
ROCm#3 0x000003ff96fb73b0 in operator new (size=136) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
ROCm#4 0x000003ff41404440 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate (this=0x3ffdbb468c0,
__n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
ROCm#5 0x000003ff414042a0 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::allocate (__a=...,
__n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:464
ROCm#6 0x000003ff41403b66 in std::__allocate_guarded<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > > (__a=...)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:98
ROCm#7 0x000003ff4140372a in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47888, __p=@0x3ffdbb47880: 0x0, __a=..., __args=..., __args=..., __args=..., __args=...)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:648
ROCm#8 0x000003ff41403328 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1342
ROCm#9 0x000003ff41402f06 in std::shared_ptr<c10::FunctionSchema>::shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (
this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:409
ROCm#10 0x000003ff41402b6e in std::allocate_shared<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__a=...,
__args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:862
ROCm#11 0x000003ff4140215c in std::make_shared<c10::FunctionSchema, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__args=..., __args=..., __args=..., __args=...)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:878
ROCm#12 0x000003ff413d180c in c10::TupleType::createWithSpec<c10::basic_string_view<char> > (qualName=..., field_names=std::vector of length 1, capacity 1 = {...},
field_types=std::vector of length 1, capacity 1 = {...}, field_defaults=std::vector of length 0, capacity 0) at /home/user/pytorch/aten/src/ATen/core/type.cpp:769
ROCm#13 0x000003ff413b9ca6 in c10::TupleType::createNamed (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...})
at /home/user/pytorch/aten/src/ATen/core/type.cpp:725
ROCm#14 0x000003ff4115fbac in c10::ivalue::TupleTypeFactory<c10::TupleType>::fallback (type=...) at /home/user/pytorch/aten/src/ATen/core/dynamic_type.cpp:383
ROCm#15 0x000003ff708217fe in c10::ivalue::Tuple::type<c10::TupleType> (this=0x6080004b8520) at /home/user/pytorch/aten/src/ATen/core/ivalue_inl.h:781
ROCm#16 0x000003ff70800740 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
ROCm#17 0x000003ff70800306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
ROCm#18 0x000003ff702d6872 in pybind11::detail::type_caster<c10::IValue, void>::cast (src=...) at /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
ROCm#19 0x000003ff70d98192 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const (this=0x3ffdbb4ca20, call=...)
at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249
ROCm#20 0x000003ff70d97cfe in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) (call=...)
at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224
ROCm#21 0x000003ff6e9652ea in pybind11::cpp_function::dispatcher (self=<PyCapsule at remote 0x3ff83e27720>,
args_in=(<torch._C.LiteScriptModule at remote 0x3ff811844b0>, (<Tensor at remote 0x3ff814efb00>,)), kwargs_in=0x0) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
```
Deallocation:
```
#0 operator delete (ptr=0x60d0005a5740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1 0x000003ff44904fdc in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate (this=0x3ffc5dc8020,
__p=0x60d0005a5740, __t=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145
#2 0x000003ff44904fa8 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::deallocate (
__a=..., __p=0x60d0005a5740, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:496
ROCm#3 0x000003ff449041f2 in std::__allocated_ptr<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::~__allocated_ptr (
this=0x3ffc5dc8030) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:74
ROCm#4 0x000003ff44904888 in std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>::_M_destroy (this=0x60d0005a5740)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:538
ROCm#5 0x000003ff43895a62 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:184
ROCm#6 0x000003ff43895420 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x611000c40648) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
ROCm#7 0x000003ff4466e7f4 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x611000c40640)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
ROCm#8 0x000003ff4466d820 in std::shared_ptr<c10::FunctionSchema>::~shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
ROCm#9 0x000003ff448d82f6 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
ROCm#10 0x000003ff448d8346 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
ROCm#11 0x000003ff731296a4 in std::_Sp_counted_ptr<c10::TupleType*, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x603000c43ae0)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:348
ROCm#12 0x000003ff71eaf666 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:168
ROCm#13 0x000003ff71eaf330 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x3ffc5dc9368) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
ROCm#14 0x000003ff73129ee4 in std::__shared_ptr<c10::TupleType, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x3ffc5dc9360)
at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
ROCm#15 0x000003ff73122390 in std::shared_ptr<c10::TupleType>::~shared_ptr (this=0x3ffc5dc9360) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
ROCm#16 0x000003ff73d00788 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
ROCm#17 0x000003ff73d00306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
```
</details>
Pull Request resolved: pytorch#101400
Approved by: https://github.com/zou3519
lcskrishna pushed a commit to lcskrishna/pytorch that referenced this pull request May 29, 2023
Three disabled functions attempt out-of-bounds reads. Disable them until the sleef library is fixed.
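A minimal, hypothetical C++ sketch of the failure mode the ASAN report below points at: a vectorized math kernel gathers entries from a fixed-size reduction table, and for some lanes the computed index lands past the end of the table. The table size, contents, and indices here are made up; the real logic lives in sleef's `rempif`/`vgather_vf_p_vi2` helpers.
```cpp
#include <array>
#include <cstdio>

// Made-up stand-in for a constant reduction table used during argument reduction.
constexpr std::array<float, 8> kTable = {0.f, 1.f, 2.f, 3.f, 4.f, 5.f, 6.f, 7.f};

// Emulates a 4-lane gather: out[lane] = table[idx[lane]] for each lane.
void gather4(const std::array<float, 8>& table,
             const std::array<int, 4>& idx,
             std::array<float, 4>& out) {
  for (int lane = 0; lane < 4; ++lane) {
    if (idx[lane] < 0 || idx[lane] >= static_cast<int>(table.size())) {
      // This is the global-buffer-overflow case: a real SIMD gather performs
      // no such check and simply reads past the table, which ASAN flags.
      std::printf("lane %d: index %d is out of bounds\n", lane, idx[lane]);
      out[lane] = 0.0f;
      continue;
    }
    out[lane] = table[idx[lane]];
  }
}

int main() {
  std::array<float, 4> out{};
  gather4(kTable, {1, 3, 9, 12}, out);  // lanes 2 and 3 would read past the table
  return 0;
}
```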
<details>
<summary>ASAN report</summary>
```
=================================================================
==2030580==ERROR: AddressSanitizer: global-buffer-overflow on address 0x03ff70f54570 at pc 0x03ff6704e960 bp 0x03ffce128940 sp 0x03ffce128930
READ of size 4 at 0x03ff70f54570 thread T0
#0 0x3ff6704e95f in vgather_vf_p_vi2 /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129
ROCm#1 0x3ff6704e95f in rempif /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:550
ROCm#2 0x3ff6704e95f in Sleef_cosf4_u10vxe2 /home/user/pytorch/third_party/sleef/src/libm/sleefsimdsp.c:1021
ROCm#3 0x3ff67029cfb in Sleef_cosf4_u10 /home/user/pytorch/build/sleef/src/libm/disps390x_128.c:182
ROCm#4 0x3ff55d21941 in at::vec::ZVECTOR::Vectorized<float, void> at::vec::ZVECTOR::Vectorized<float, void>::mapSleef<float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __
vector(2)), float, 0>(float __vector(4) const (*)(float __vector(4)), double __vector(2) const (*)(double __vector(2))) const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:991
ROCm#5 0x3ff5689ad01 in at::vec::ZVECTOR::Vectorized<float, void>::cos() const /home/user/pytorch/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h:1074
ROCm#6 0x3ff5685df97 in at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)ROCm#1}::operator()(at::vec::ZVECTOR::Vectorized<float, void>) const /home/
user/pytorch/aten/src/ATen/cpu/vml.h:71
ROCm#7 0x3ff5689b691 in void at::vec::map<float, at::vml::ZVECTOR::vcos<float>(float*, float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)ROCm#1}, 0>(at::vml::ZVECTOR::vcos<float>(float*,
float const*, long)::{lambda(at::vec::ZVECTOR::Vectorized<float, void>)ROCm#1} const&, float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vec/functional_base.h:239
ROCm#8 0x3ff5685e0df in void at::vml::ZVECTOR::vcos<float>(float*, float const*, long) /home/user/pytorch/aten/src/ATen/cpu/vml.h:71
ROCm#9 0x3ff563fdde3 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
ROCm#10 0x3ff5648e4a3 in operator() /home/user/pytorch/aten/src/ATen/TensorIterator.h:406
ROCm#11 0x3ff5663cae1 in callback_fn<at::TensorIteratorBase::loop_2d_from_1d<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> >(c
onst at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)>&)::<lambda(char**, const int64_t*, int64_t, int64_t)> > /home/user/pytorch/
c10/util/FunctionRef.h:43
ROCm#12 0x3ff4d45a933 in c10::function_ref<void (char**, long const*, long, long)>::operator()(char**, long const*, long, long) const /home/user/pytorch/c10/util/FunctionRef.h:64
ROCm#13 0x3ff4d455133 in at::internal::serial_for_each(c10::ArrayRef<long>, c10::ArrayRef<long>, char**, unsigned long, c10::function_ref<void (char**, long const*, long, long)>, at::Range) /home/user/pyt
orch/aten/src/ATen/TensorIteratorInternal.h:52
ROCm#14 0x3ff4d43b703 in at::TensorIteratorBase::serial_for_each(c10::function_ref<void (char**, long const*, long, long)>, at::Range) const /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:777
ROCm#15 0x3ff4d43ab59 in at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long const*, long, long)>, long) /home/user/pytorch/aten/src/ATen/TensorIterator.cpp:749
ROCm#16 0x3ff5648e851 in for_each<at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&)::<lambda()>::<lambda()>::<lambda(char**, const int64_t*, int64_t)> > /home/user/pytorch/aten/src/ATen/TensorItera
tor.h:421
ROCm#17 0x3ff563fe5f9 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
ROCm#18 0x3ff56400915 in operator() /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
ROCm#19 0x3ff56400f1d in at::native::ZVECTOR::cos_kernel(at::TensorIteratorBase&) /home/user/pytorch/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp:770
ROCm#20 0x3ff4f303007 in void at::native::DispatchStub<void (*)(at::TensorIteratorBase&), at::native::cos_stub>::operator()<at::native::structured_cos_out&>(c10::DeviceType, at::native::structured_cos_out
&) /home/user/pytorch/aten/src/ATen/native/DispatchStub.h:158
ROCm#21 0x3ff4f2edb3f in at::native::structured_cos_out::impl(at::Tensor const&, at::Tensor const&) /home/user/pytorch/aten/src/ATen/native/UnaryOps.cpp:330
ROCm#22 0x3ff526ef739 in wrapper_CPU_cos /home/user/pytorch/build/aten/src/ATen/RegisterCPU.cpp:4307
ROCm#23 0x3ff52c651d9 in operator() /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
ROCm#24 0x3ff52c651d9 in call /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:463
ROCm#25 0x3ff5076df2f in at::Tensor c10::callUnboxedKernelFunction<at::Tensor, at::Tensor const&>(void*, c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&) /home/user/pytorch/aten/src/ATen/core
/boxing/KernelFunction_impl.h:50
ROCm#26 0x3ff5009a93f in at::Tensor c10::KernelFunction::call<at::Tensor, at::Tensor const&>(c10::OperatorHandle const&, c10::DispatchKeySet, at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core
/boxing/KernelFunction_impl.h:103
ROCm#27 0x3ff5009a93f in at::Tensor c10::Dispatcher::call<at::Tensor, at::Tensor const&>(c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)> const&, at::Tensor const&) const /home/user/pytorch/aten/s
rc/ATen/core/dispatch/Dispatcher.h:639
ROCm#28 0x3ff5009a93f in c10::TypedOperatorHandle<at::Tensor (at::Tensor const&)>::call(at::Tensor const&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
ROCm#29 0x3ff5009a93f in at::_ops::cos::call(at::Tensor const&) /home/user/pytorch/build/aten/src/ATen/Operators_0.cpp:2215
ROCm#30 0x3ff7d813741 in at::Tensor::cos() const /home/user/pytorch/build/aten/src/ATen/core/TensorBody.h:2107
ROCm#31 0x3ff7dc0f2b7 in operator() /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2953
ROCm#32 0x3ff7dc0faf7 in THPVariable_cos /home/user/pytorch/torch/csrc/autograd/generated/python_torch_functions_2.cpp:2955
ROCm#33 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
ROCm#34 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
ROCm#35 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#36 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
ROCm#37 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#38 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#39 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#40 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#41 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
ROCm#42 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
ROCm#43 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#44 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/
torch/csrc/utils/python_dispatch.cpp:175
ROCm#45 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::
PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)ROCm#1}::operator()(c10::OperatorKernel*, c10::Op
eratorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
ROCm#46 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch::
PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)ROCm#1}::_FUN(c10::OperatorKernel*, c10::Operator
Handle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
ROCm#47 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/b
oxing/BoxedKernel_impl.h:41
ROCm#48 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/cor
e/boxing/KernelFunction_impl.h:43
ROCm#49 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:6
91
ROCm#50 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
ROCm#51 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
ROCm#52 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
ROCm#53 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c1
0::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
ROCm#54 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10::
IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
ROCm#55 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
ROCm#56 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-lin
ux-gnu/11/include/g++-v11/bits/std_function.h:590
ROCm#57 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
ROCm#58 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11::
kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
ROCm#59 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol,
pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
ROCm#60 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
ROCm#61 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::vo
id_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
ROCm#62 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /h
ome/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
ROCm#63 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
ROCm#64 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
ROCm#65 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
ROCm#66 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
ROCm#67 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
ROCm#68 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#69 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
ROCm#70 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#71 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#72 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#73 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#74 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
ROCm#75 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#76 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
ROCm#77 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
ROCm#78 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#79 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#80 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#81 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#82 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#83 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#84 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#85 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#86 0x3ffa5feb289 in call_function Python/ceval.c:5891
ROCm#87 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#88 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#89 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#90 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#91 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
ROCm#92 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
ROCm#93 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#94 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#95 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#96 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#97 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#98 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#99 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
ROCm#100 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
ROCm#101 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#102 0x3ff7f87a393 in torch::impl::dispatch::PythonKernelHolder::operator()(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch
/torch/csrc/utils/python_dispatch.cpp:175
ROCm#103 0x3ff7f8871a7 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch:
:PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)ROCm#1}::operator()(c10::OperatorKernel*, c10::O
peratorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:87
ROCm#104 0x3ff7f887261 in c10::BoxedKernel::makeFromFunctor<torch::impl::dispatch::PythonKernelHolder>(std::unique_ptr<torch::impl::dispatch::PythonKernelHolder, std::default_delete<torch::impl::dispatch:
:PythonKernelHolder> >)::{lambda(c10::OperatorKernel*, c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*)ROCm#1}::_FUN(c10::OperatorKernel*, c10::Operato
rHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) /home/user/pytorch/aten/src/ATen/core/boxing/BoxedKernel_impl.h:86
ROCm#105 0x3ff7e0d10ab in c10::BoxedKernel::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/
boxing/BoxedKernel_impl.h:41
ROCm#106 0x3ff7e0d1459 in c10::KernelFunction::callBoxed(c10::OperatorHandle const&, c10::DispatchKeySet, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/co
re/boxing/KernelFunction_impl.h:43
ROCm#107 0x3ff7f876421 in c10::Dispatcher::callBoxed(c10::OperatorHandle const&, std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:
691
ROCm#108 0x3ff4d22bcdd in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >*) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:417
ROCm#109 0x3ff65a092d5 in c10::OperatorHandle::callBoxed(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:421
ROCm#110 0x3ff65a05641 in operator() /home/user/pytorch/torch/csrc/jit/runtime/register_c10_ops.cpp:15
ROCm#111 0x3ff65a08cb5 in __invoke_impl<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c
10::IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:61
ROCm#112 0x3ff65a0897b in __invoke_r<void, torch::jit::(anonymous namespace)::createOperatorFromC10(const c10::OperatorHandle&)::<lambda(torch::jit::Stack&)>&, std::vector<c10::IValue, std::allocator<c10:
:IValue> >&> /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/invoke.h:111
ROCm#113 0x3ff65a084e1 in _M_invoke /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/std_function.h:290
ROCm#114 0x3ff7eb2cb21 in std::function<void (std::vector<c10::IValue, std::allocator<c10::IValue> >&)>::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) const /usr/lib/gcc/s390x-ibm-li
nux-gnu/11/include/g++-v11/bits/std_function.h:590
ROCm#115 0x3ff7eb1b659 in torch::jit::Operation::operator()(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /home/user/pytorch/aten/src/ATen/core/stack.h:41
ROCm#116 0x3ff7eb08449 in torch::jit::invokeOperatorFromPython(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, pybind11::args, pybind11:
:kwargs const&, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:764
ROCm#117 0x3ff7eb09d85 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol,
pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:829
ROCm#118 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
ROCm#119 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::v
oid_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
ROCm#120 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /
home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
ROCm#121 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
ROCm#122 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
ROCm#123 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
ROCm#124 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
ROCm#125 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
ROCm#126 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#127 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
ROCm#128 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#129 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#130 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#131 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#132 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
ROCm#133 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#134 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
ROCm#135 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
ROCm#136 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#137 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#138 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#139 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#140 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#141 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#142 0x3ffa5e87d2b in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#143 0x3ffa5e882dd in method_vectorcall Objects/classobject.c:83
ROCm#144 0x3ffa5e836d3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#145 0x3ffa5e84b6f in _PyObject_CallFunctionVa Objects/call.c:485
ROCm#146 0x3ffa5e84f2d in callmethod Objects/call.c:557
ROCm#147 0x3ffa5e85039 in PyObject_CallMethod Objects/call.c:577
ROCm#148 0x3ff7f7efa05 in torch::handle_torch_function_no_python_arg_parser(c10::ArrayRef<pybind11::handle>, _object*, _object*, char const*, _object*, char const*, torch::TorchFunctionName) /home/user/pytorch/torch/csrc/utils/python_arg_parser.cpp:338
ROCm#149 0x3ff7eb09b67 in torch::jit::_get_operation_for_overload_or_packet(std::vector<std::shared_ptr<torch::jit::Operator>, std::allocator<std::shared_ptr<torch::jit::Operator> > > const&, c10::Symbol, pybind11::args, pybind11::kwargs const&, bool, c10::optional<c10::DispatchKey>) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:827
ROCm#150 0x3ff7e573eb9 in operator() /home/user/pytorch/torch/csrc/jit/python/init.cpp:1549
ROCm#151 0x3ff7e6728dd in call_impl<pybind11::object, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&, 0, 1, pybind11::detail::void_type> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1439
ROCm#152 0x3ff7e64312f in call<pybind11::object, pybind11::detail::void_type, torch::jit::initJITBindings(PyObject*)::<lambda(const string&, const string&)>::<lambda(pybind11::args, pybind11::kwargs)>&> /home/user/pytorch/third_party/pybind11/include/pybind11/cast.h:1408
ROCm#153 0x3ff7e5da259 in operator() /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:249
ROCm#154 0x3ff7e5da441 in _FUN /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:224
ROCm#155 0x3ff7d317a1f in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/third_party/pybind11/include/pybind11/pybind11.h:929
ROCm#156 0x3ffa5ef5ae1 in cfunction_call Objects/methodobject.c:543
ROCm#157 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
ROCm#158 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#159 0x3ffa5feb50d in do_call_core Python/ceval.c:5915
ROCm#160 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#161 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#162 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#163 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#164 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
ROCm#165 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#166 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
ROCm#167 0x3ffa5e84027 in _PyObject_MakeTpCall Objects/call.c:215
ROCm#168 0x3ffa5fd767b in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
ROCm#169 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#170 0x3ffa5feb289 in call_function Python/ceval.c:5891
ROCm#171 0x3ffa5fe5ad1 in _PyEval_EvalFrameDefault Python/ceval.c:4181
ROCm#172 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#173 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#174 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#175 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#176 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#177 0x3ffa5feb289 in call_function Python/ceval.c:5891
ROCm#178 0x3ffa5fe5c3b in _PyEval_EvalFrameDefault Python/ceval.c:4213
ROCm#179 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#180 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#181 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#182 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
ROCm#183 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
ROCm#184 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#185 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#186 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#187 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#188 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#189 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#190 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
ROCm#191 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
ROCm#192 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#193 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#194 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#195 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#196 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#197 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#198 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
ROCm#199 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
ROCm#200 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#201 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#202 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#203 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#204 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#205 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#206 0x3ffa5e841fb in PyVectorcall_Call Objects/call.c:255
ROCm#207 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
ROCm#208 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#209 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#210 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#211 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#212 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#213 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#214 0x3ffa5e83d1f in _PyObject_FastCallDictTstate Objects/call.c:142
ROCm#215 0x3ffa5e84937 in _PyObject_Call_Prepend Objects/call.c:431
ROCm#216 0x3ffa5f2f577 in slot_tp_call Objects/typeobject.c:7494
ROCm#217 0x3ffa5e843f3 in _PyObject_Call Objects/call.c:305
ROCm#218 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#219 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#220 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#221 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#222 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#223 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#224 0x3ffa5fd76a3 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
ROCm#225 0x3ffa5fd772f in PyObject_Vectorcall Include/cpython/abstract.h:123
ROCm#226 0x3ffa5feb289 in call_function Python/ceval.c:5891
ROCm#227 0x3ffa5fe5b21 in _PyEval_EvalFrameDefault Python/ceval.c:4198
ROCm#228 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#229 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#230 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#231 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
ROCm#232 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
ROCm#233 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#234 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#235 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#236 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#237 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#238 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#239 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
ROCm#240 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
ROCm#241 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#242 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#243 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#244 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#245 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#246 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#247 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
ROCm#248 0x3ffa5e84347 in _PyObject_Call Objects/call.c:290
ROCm#249 0x3ffa5e84483 in PyObject_Call Objects/call.c:317
ROCm#250 0x3ffa5feb7cf in do_call_core Python/ceval.c:5943
ROCm#251 0x3ffa5fe6019 in _PyEval_EvalFrameDefault Python/ceval.c:4277
ROCm#252 0x3ffa5fd7aed in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
ROCm#253 0x3ffa5fe8ba9 in _PyEval_Vector Python/ceval.c:5065
ROCm#254 0x3ffa5e8459b in _PyFunction_Vectorcall Objects/call.c:342
ROCm#255 0x3ffa5e8427f in PyVectorcall_Call Objects/call.c:267
0x03ff70f54570 is located 0 bytes to the right of global variable 'Sleef_rempitabsp' defined in '/home/user/pytorch/third_party/sleef/src/libm/rempitab.c:986:34' (0x3ff70f53f00) of size 1648
SUMMARY: AddressSanitizer: global-buffer-overflow /home/user/pytorch/third_party/sleef/src/arch/helpers390x_128.h:129 in vgather_vf_p_vi2
Shadow bytes around the buggy address:
0x10007fee1ea850: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fee1ea860: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fee1ea870: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fee1ea880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fee1ea890: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10007fee1ea8a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[f9]f9
0x10007fee1ea8b0: f9 f9 f9 f9 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fee1ea8c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fee1ea8d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fee1ea8e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fee1ea8f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
Shadow gap: cc
==2030580==ABORTING
```
</details>
It reproduces when running `pytest -v test/test_ops.py -k test_python_ref__refs_cos_cpu_bfloat16` under AddressSanitizer on s390x.
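For context, the failing test exercises the Python reference for `cos` on a CPU bfloat16 input, which, per the trace above, ends up in SLEEF's vectorized kernel on s390x. A minimal sketch of that operation (the tensor shape and the closeness check are illustrative, not the exact `test_ops.py` harness):

```python
import torch
import torch._refs as refs

# Illustrative only: the shape is arbitrary, chosen to be large enough to
# reach the vectorized (SLEEF-backed) cos kernel on CPU.
x = torch.randn(64 * 1024, dtype=torch.bfloat16)

ref_out = refs.cos(x)     # Python reference path, as in test_python_ref__refs_cos
eager_out = torch.cos(x)  # eager ATen path the reference is compared against

torch.testing.assert_close(ref_out, eager_out)
```

Running this under an ASan-instrumented build should trip the same global-buffer-overflow read just past `Sleef_rempitabsp`.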
See also: shibatch/sleef#464
Pull Request resolved: pytorch#102266
Approved by: https://github.com/malfet
akashveramd pushed a commit that referenced this pull request on Jun 13, 2025.
amd-sriram added a commit that referenced this pull request on Aug 12, 2025, with the following commit messages:
Commit Messages:
- update the param_id calculation so that it works on both CPX and SPX modes (#271) (#272)
- reset parameters for FusedDenseGeluDense similar to FusedDense to make the test_gelu pass (#269) (#270). Co-authored-by: Sriram Kumar <[email protected]>
- Fix build error (#263)
- Fix warp size (#256)
  * replace c10_warp_size in fused rope
  * replace c10_warp_size in fused softmax
  * replace c10_warp_size in group batch norm
  * replace c10_warp_size in multiheadattention
  * replace c10_warp_size in transducer
  * replace c10_warp_size in xentropy
  * replace c10_warp_size in sync batch normalization
  * replace c10_warp_size in group batch norm
  * replace warp_size in multihead attention
- Disabling Aiter Installation in default build (#254)
  * made a flag to switch on/off aiter compile using --aiter when installing apex
  * Added information on building AITER during installation in readme
- Replaced warpsize with C10_WARP_SIZE (#249)
- correct the approach to get to the apex folder from the test file (#248)
- Apex extensions import test (#245)
  * add test to extract extensions from setup.py and test if they can be imported
  * moved test outside tests/L0
- Fixing the C10_warpsize issue: replacing the macros with at::cuda::warp_size() (#237)
- Replacing c10_warp_size with platform based warp_size values (#228). Fixes: https://ontrack-internal.amd.com/browse/SWDEV-541725
- [master] Added AITER as a submodule and use in fused_rope.py (#222)
  * Added aiter support in fused_rope.py for all 4 variants. Updated fused rope test, reduced tolerances according to unit test in aiter repo.
  * Add aiter as a submodule and install it if it is rocm. Switch on aiter backend if it is rocm and aiter is installed
  * add pandas to the requirements so that aiter can be used without numpy error - ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
  * Replace ROCM_HOME condition to IS_ROCM_PYTORCH for installing aiter and use pip install -e . instead of python setup.py develop for installing aiter.
  * Create apex and aiter subclasses for the four variants of FusedRoPEFunc and select apex or aiter subclass based on AITER_ROPE_BACKEND value. The user can specify the environment variable USE_ROCM_AITER_ROPE_BACKEND to select between aiter and apex backends for fused rope (see the sketch after this list).
  * If the AITER backend is selected, use lowered precision in the unit test, otherwise use the original precision 1e-3
  * warn user about the lower precision when using aiter backend for fused rope
  * Update fused_rope.py: remove spaces
  * simplify the switch between aiter and apex subclasses
  * install aiter without editable mode
- Merge pull request #227 from ROCm/amd/dev/iassiour/SWDEV-541770: Do not use warpSize as a constexpr in nhwc_batch_norm_kernel.h
- Do not use warpSize as a constexpr in nhwc_batch_norm_kernel.h. In ROCm 7.0, the warpSize variable is no longer constexpr. This commit replaces the variable use with the correct values based on the architecture we're running on.
- change epilogue parameter for hipblaslt matmul in cuda kernel for fused dense gelu dense (#223). Fixes: https://ontrack-internal.amd.com/browse/SWDEV-534531
- Reset torch default device to cpu after running the amp unit tests. (#220)
- Fix unit tests for transformer, fused dense, mlp (#218)
  * Fix fused_dense_gelu_dense, change the names of the parameters so that they can be accessed by the test appropriately
  * Update the absolute tolerances in test_mlp from 0 and 1e-7 to 1e-5
  * Deactivate the amp state handle for optimization level other than O0. This helps to pass the UT after this.
  * Update condition for deactivating amp state handle from opt level equal to 1 to opt level not equal to 0
  * Update torch set default dtype method to remove warning
  * Update the method to create overflow buffer for amp optimizer
  * Update the method to create overflow buffer for amp optimizer
  * Update the method to create overflow buffer for amp optimizer
  * reset the default device to cpu so that the generator uses cuda, as run_amp tests set it to cuda
- Update fused layer norm code from upstream apex repo. The intra-warp reductions code inside the cuWelfordMuSigma2() function in the layer norm kernel assumes a warp size of 32, so added a condition for rocm to support the gpu warp size (based on earlier apex code). For rocm, adjust the thread size, based on earlier apex code. (#215)
- upgrade matplotlib to resolve setuptools_scm error. (#213) The error: File /tmp/easy_install-_pfhn8pn/matplotlib-3.5.1/.eggs/setuptools_scm-8.3.1-py3.12.egg/setuptools_scm/_integration/pyproject_reading.py, line 36, in read_pyproject, `section = defn.get(tool, {})[tool_name]` -> KeyError: 'setuptools_scm'. Solution: https://github.com/matplotlib/matplotlib/blob/v3.8.x/pyproject.toml#L22. matplotlib 3.8 is the first version to have a pyproject.toml with this tool.setuptools_scm section. This higher version of setuptools expects this structure in the python packages it installs; matplotlib 3.5.1 doesn't satisfy this condition. The solution is to change the requirement to matplotlib>=3.8.
- Update distributed fused adam - integrate Pipeline operations and support different grad (#207)
  * Fix `DistributedFusedAdam` for grad dtype != param dtype (#1893)
  * Pipeline `reduce-scatter` and `all-reduce`. (#1895)
  Co-authored-by: Tailing Yuan <[email protected]>
  Co-authored-by: Wil Kong <[email protected]>
- Update the condition for building the NCCL allocator: PyTorch should be greater than or equal to 2.6 (#204)
- Update version.txt (#203): change the version from 1.7.0 to 1.8.0
- [ROCm] Use at::empty to manage workspace memory to avoid hip runtime calls (#197): Optimize the memory for the fused_weight_gradient_mlp_cuda module
- Update README.md (#198): Add release notes for release/1.5, 1.6 and 1.7
- Update README.md (#196): updated the supported versions for apex 1.7.0

PRs:
- https://github.com/ROCm/apex/pull/1895

Fixes:
- https://example.com/issue-271
- https://example.com/issue-249
- https://example.com/issue-254
- https://example.com/issue-228
- https://example.com/issue-263
- https://example.com/issue-223
- https://example.com/issue-237
- https://example.com/issue-203
- https://example.com/issue-256
- https://example.com/issue-245
- https://example.com/issue-272
- https://example.com/issue-204
- https://ontrack-internal.amd.com/browse/SWDEV-540029
- https://example.com/issue-222
- https://example.com/issue-220
- https://example.com/issue-248
- https://example.com/issue-1893
- https://example.com/issue-198
- https://example.com/issue-215
- https://example.com/issue-213
- https://example.com/issue-1895
- https://example.com/issue-218
- https://example.com/issue-227
- https://example.com/issue-196
- https://example.com/issue-197
- https://example.com/issue-207
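The fused-RoPE backend switch described in the #222 item above amounts to an environment-variable check that picks either the AITER-backed or the apex implementation. A rough sketch of that dispatch; only the `USE_ROCM_AITER_ROPE_BACKEND` variable name comes from the commit message, while the classes and function names below are hypothetical placeholders, not the actual apex code:

```python
import os


def _aiter_rope_enabled() -> bool:
    # Hypothetical: read the switch the commit message describes. Only the
    # variable name USE_ROCM_AITER_ROPE_BACKEND is taken from the changelog.
    return os.environ.get("USE_ROCM_AITER_ROPE_BACKEND", "0") == "1"


class ApexFusedRoPE:
    """Placeholder for the existing apex fused-rope implementation."""

    def apply(self, x, freqs):
        raise NotImplementedError("call into the apex fused_rope extension here")


class AiterFusedRoPE:
    """Placeholder for the AITER-backed fused-rope implementation."""

    def apply(self, x, freqs):
        raise NotImplementedError("call into the aiter rope kernels here")


def get_fused_rope_backend():
    # Choose the backend once, based on the environment variable; the apex
    # path stays the default so existing users are unaffected.
    return AiterFusedRoPE() if _aiter_rope_enabled() else ApexFusedRoPE()
```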