[TENSOR MERGE] Use int64_t instead of ptrdiff_t for size / Rename flag to resizable_ #9654
Conversation
Force-pushed from 05954d3 to 1534139.
aten/src/ATen/Storage.h (outdated review thread; comments hidden as off-topic)
aten/src/TH/THStorageClass.cpp (outdated review thread; comments hidden as off-topic)
Force-pushed from 56dda4c to 5969841.
@pytorchbot retest this please
Force-pushed from a7ecc6d to aa5bc14.

Looks real.
aten/src/ATen/Storage.h (outdated)

    Storage(const Storage& other) = delete;
    Storage(Storage&) = delete;
    Storage(const Storage&) = delete;
    Storage(Storage&&) = delete;

(review comments hidden as off-topic)
    : Storage(nullptr) {
      // storage = ${THStorage}_newWithAllocator(${state,} size, allocator);
      storage = new THStorage(ScalarType::${ScalarName}, size, allocator, TH_STORAGE_RESIZABLE);
      THStorage_setResizable(storage, TH_STORAGE_RESIZABLE);

(review comments hidden as off-topic)
aten/src/TH/THGeneral.h.in (outdated)

    #define __STDC_FORMAT_MACROS
    #endif

    #include <stdbool.h>

(review comments hidden as off-topic)
aten/src/TH/generic/THStorage.cpp (outdated)

      std::move(data),
      allocator,
    - TH_STORAGE_REFCOUNTED | TH_STORAGE_RESIZABLE);
    + TH_STORAGE_RESIZABLE);

(review comments hidden as off-topic)
aten/src/TH/generic/THStorage.h (outdated)

      #define TH_STORAGE_REFCOUNTED 1
    - #define TH_STORAGE_RESIZABLE 2
    + #define TH_STORAGE_RESIZABLE 1

(review comments hidden as off-topic)
ezyang left a comment:

- There are some mistakes in the refactoring; that's probably why the tests fail.
- Expunge `TH_STORAGE` entirely; just use a bool.
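To make the direction of that review concrete, here is a minimal, purely illustrative sketch (not the actual THStorage definition; every name except resizable_ is an assumption) of a storage struct that keeps its size as a signed 64-bit integer and replaces the TH_STORAGE_RESIZABLE bit flag with a plain bool member:

```
// Illustrative only -- not the real THStorage. Shows the shape of the change:
// int64_t for the size (previously ptrdiff_t) and a bool resizable_ member
// (previously a bit in an int flags field). Other names are assumptions.
#include <cstdint>

struct ExampleStorage {
  void* data = nullptr;
  std::int64_t size = 0;      // was ptrdiff_t
  bool resizable_ = false;    // was the TH_STORAGE_RESIZABLE bit flag

  void set_resizable(bool value) { resizable_ = value; }
  bool resizable() const { return resizable_; }
};
```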
Force-pushed from 53083b8 to beb71d8.
Summary: Need an overload of `at::from_blob` for Variables. ezyang colesbury ebetica Pull Request resolved: pytorch#9605 Differential Revision: D8926226 Pulled By: goldsborough fbshipit-source-id: e377c0d019d4377f3fc124614c7dcc562aa69990
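For readers unfamiliar with `at::from_blob` (the function the commit above extends with a Variable overload), here is a hedged usage sketch of wrapping caller-owned memory in a tensor without copying; the exact header and overload set may differ between versions:

```
#include <ATen/ATen.h>
#include <vector>

int main() {
  std::vector<float> buffer(6, 1.0f);
  // The resulting tensor aliases `buffer`; the caller must keep the memory alive.
  at::Tensor t = at::from_blob(buffer.data(), {2, 3},
                               at::TensorOptions().dtype(at::kFloat));
  t.add_(1.0);  // mutates buffer in place
  return 0;
}
```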
Summary: Pull Request resolved: pytorch#9717 D8722560 was landed with some build errors, unfortunately the c10 code isn't part of contbuild yet. Fixing them. Differential Revision: D8954141 fbshipit-source-id: 2a082fb8041626e45ccd609f37a8ef807f6dad8a
Summary: Pull Request resolved: pytorch#9550 as titled Differential Revision: D8899226 fbshipit-source-id: 3c7cf026e8cbc0e95770e5a35b213a97bebba385
Summary: This is a modification of the strategy from pytorch#8919 and pytorch#9579.

```
Previously, the CPU architecture-specific kernels self-registered with the
DispatchStub. When linking as part of a static library, this requires the flag
--whole-archive to be passed to the linker to ensure that the object files for
the kernels are included. Caffe2 and TensorFlow use that strategy.

We ran into some issues with --whole-archive blowing up the binary size of some
downstream projects in Facebook. This PR avoids --whole-archive for CPU
kernels. The downside is that the generic code needs to be aware of whether
kernels are compiled with AVX and with AVX2 (via HAVE_AVX_CPU_DEFINITION and
HAVE_AVX2_CPU_DEFINITION).

The CUDA kernels still self-register with DispatchStub because the CPU library
is not aware of whether the CUDA library will be available at runtime.

There are a few major changes to DispatchStub:
- The environment variable ATEN_CPU_CAPABILITY overrides the CPU capability
  detection code (Previous ATEN_DISABLE_AVX/AVX2)
- DispatchStub is defined in the generic native code instead of the
  CPU_CAPABILITY_DEFAULT kernel.
```

Pull Request resolved: pytorch#9664
Differential Revision: D8943350
Pulled By: colesbury
fbshipit-source-id: 329229b0ee9ff94fc001b960287814bd734096ef
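A simplified sketch of the dispatch pattern that commit message describes; this is not the actual ATen DispatchStub, only an illustration of holding per-capability function pointers in generic code and letting an environment variable override capability detection:

```
// Not the real DispatchStub. The generic code owns one function pointer per
// CPU capability; the AVX/AVX2 slots are only populated when those kernels
// were compiled in (HAVE_AVX_CPU_DEFINITION / HAVE_AVX2_CPU_DEFINITION), so
// no --whole-archive self-registration is needed for CPU kernels.
#include <cstdlib>
#include <cstring>

enum class CPUCapability { DEFAULT, AVX, AVX2 };

inline CPUCapability detect_capability() {
  // ATEN_CPU_CAPABILITY overrides detection, as described above.
  if (const char* env = std::getenv("ATEN_CPU_CAPABILITY")) {
    if (std::strcmp(env, "avx2") == 0) return CPUCapability::AVX2;
    if (std::strcmp(env, "avx") == 0)  return CPUCapability::AVX;
    return CPUCapability::DEFAULT;
  }
  return CPUCapability::AVX2;  // placeholder for real cpuid-based detection
}

template <typename FnPtr>
struct SimpleDispatchStub {
  FnPtr default_impl = nullptr;
  FnPtr avx_impl = nullptr;   // set only when built with HAVE_AVX_CPU_DEFINITION
  FnPtr avx2_impl = nullptr;  // set only when built with HAVE_AVX2_CPU_DEFINITION

  FnPtr get() const {
    const CPUCapability cap = detect_capability();
    if (cap == CPUCapability::AVX2 && avx2_impl) return avx2_impl;
    if (cap != CPUCapability::DEFAULT && avx_impl) return avx_impl;
    return default_impl;
  }
};
```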
Summary: Pull Request resolved: pytorch#9622 Implement a ctc_beam_search_decoder operator based on ctc_greedy_decoder. Differential Revision: D8903100 fbshipit-source-id: 38973632cb437e5cfcb9ed3a48ed6b901c10efa3
…nsor (pytorch#9667) Summary: Pull Request resolved: pytorch#9667 MKL-DNN doesn't support 64-bit integers (https://github.com/intel/mkl-dnn/blob/cfee61bf81322b1ca315d5ed6cb9a9419618426b/include/mkldnn_types.h#L62-L75), so force-converting from `TensorCPU<long>` to an `s32` ideep tensor will cause memory issues. This diff gives an alternative solution, where we just fall through to TensorCPU. The reasoning is that since MKL-DNN doesn't support 64-bit integer tensors, downstream ops have to be in CPUContext, so there is no reason to force-convert to an ideep tensor and back. Reviewed By: pjh5 Differential Revision: D8943544 fbshipit-source-id: f514903cda27e34b8887271c9df56c8220895116
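The fall-through idea the commit describes can be illustrated with a hypothetical helper; the type and function names below are placeholders, not actual Caffe2 or ideep APIs:

```
// Hypothetical illustration only. If the CPU tensor holds 64-bit integers
// (which MKL-DNN cannot represent), skip the ideep conversion and let
// downstream ops consume the plain CPU tensor in CPUContext.
enum class DType { Float, Int32, Int64 };

// Returns true when an ideep conversion makes sense for this element type.
inline bool ideep_supports(DType dtype) {
  return dtype != DType::Int64;  // MKL-DNN has no 64-bit integer type
}

template <typename Tensor>
bool maybe_convert_to_ideep(const Tensor& src, DType dtype) {
  if (!ideep_supports(dtype)) {
    return false;  // caller falls through to the plain CPU tensor
  }
  // ... real code would build an ideep tensor view of `src` here ...
  return true;
}
```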
Summary: Pull Request resolved: pytorch#9718

This patch switches the interpreter to use IValue's primitive numbers rather than tensors for computing on integers and floats. In addition to preparing the interpreter for first-class support of other types, this cleans up the handling of primitive numbers, making it possible to just use the normal operator overloading dispatch to find the right implementation for numbers. As a result of this change, a lot of other functionality needed to be updated, since it was the first time we use non-tensors in a lot of places in the code base.

Notes:
* Fixes code_template.py so that multi-line strings are indented correctly when used on a standalone line
* Cast operators (`int(x)`) are now functional. Some tests have additional conversions to integers because we no longer allow implicit tensor -> integer conversions, following the same convention as in Python
* prim::ListConstruct/createList has been added to the interpreter for creating lists, and this has replaced aten::stack for integer lists
* gen_jit_dispatch.py has been refactored so that non-tensor types use operators on IValues to extract the primitives
* IValue gains a .to<T> method that is the equivalent of tensor_as but for IValue instead of at::Tensor
* `constant_as<T>` is switched over to using IValue's `.to<T>` method, to make conversion from constant -> IValue -> C++ type more consistent. This functionality combined with `toIValue(Value*)` replaces the `tensor_as` and `as_tensor` family of functions.
* Conditional expressions (if, loop) and operators related to them are now computed on integers rather than tensors
* IValue gains constructors for constructing from at::Scalar and converting to it. However, IValue itself will always store the scalars as a double or int64.
* To align with Python 3 syntax, TK_INT, TK_FLOAT, and TK_BOOL have been removed from the parser, and int/float/bool are just treated as special identifiers in the compiler, along with print. These are represented as special sugared values with a `call` method implemented. For int/float/bool this implements casting behavior.
* Dropped shared_from_this from Type/Module. They were not needed, and they made debugging harder because they internally throw/catch exceptions.
* Shape propagation has been updated to support running nodes that include floating-point primitive types; this required some refactoring of internal functions.
* TensorToNum and NumToTensor have actual implementations as operators now
* register_prim_ops now contains implementations of math operators for float/int primitive types, and for mixed (prim <+> tensor) versions. This removes the need for special handling in compiler.cpp
* Primitive math is now entirely handled by letting the compiler choose the right overloads. This removes tons of special casing in the compiler.
* Incorporates eellison's change to allow casting from return values. Due to the addition of primitive support, the code needed slight modifications, so I just pre-merged it here.
* stack.h gains generic vararg versions of push/pop that know how to convert to/from C++ types:

```
at::Tensor a;
at::Scalar b;
pop(stack, a, b);
at::Tensor c = a + b;
push(stack, c);
```

apaszke

Pull Request resolved: pytorch#9584
Reviewed By: apaszke
Differential Revision: D8910546
Pulled By: zdevito
fbshipit-source-id: 0f3e60d4d22217f196a8f606549430e43b7e7e30
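The vararg push/pop idea quoted at the end of that commit message can be sketched with a toy boxed-value stack; this is not the real stack.h or IValue, just an illustration of the variadic conversion pattern:

```
// Minimal sketch only. `Boxed` is a deliberately tiny stand-in for IValue
// that holds a double; push/pop convert between C++ values and the stack.
#include <utility>
#include <vector>

struct Boxed {
  double v;
  template <typename T>
  T to() const { return static_cast<T>(v); }  // analogue of IValue::to<T>
};

using Stack = std::vector<Boxed>;

inline void push(Stack&) {}
template <typename T, typename... Rest>
void push(Stack& stack, T&& first, Rest&&... rest) {
  stack.push_back(Boxed{static_cast<double>(first)});
  push(stack, std::forward<Rest>(rest)...);
}

inline void pop(Stack&) {}
template <typename T, typename... Rest>
void pop(Stack& stack, T& first, Rest&... rest) {
  pop(stack, rest...);  // arguments closer to the top of the stack come off first
  first = stack.back().to<T>();
  stack.pop_back();
}

// Usage mirrors the snippet in the commit message (with plain doubles instead
// of at::Tensor / at::Scalar):
//   double a, b;
//   pop(stack, a, b);
//   push(stack, a + b);
```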
Replaced by #9728
Summary: Constituent PRs:
- [x] #9553 Remove unnecessary functions from StorageDerived.h (by cpuhrsch, reviewed by ezyang)
- [x] #9588 Use THTensor/Storage for THVoidTensor/Storage (by cpuhrsch, reviewed by gchanan)
- [x] #9627 Delete context from tensor (by ezyang, reviewed by gchanan)
- [x] #9641 Tensor reorganization (by ezyang, reviewed by gchanan)
- [x] #9647 Remove dim_ from THTensor (by cpuhrsch, reviewed by ezyang)
- [x] #9650 Remove context (by cpuhrsch, reviewed by gchanan and ezyang)
- [x] #9715 Fix Windows build in tensor merge PR (by ezyang, reviewed by gchanan and SsnL)

Upcoming PRs which didn't make this cut:
- [x] #9644 Stride move to TensorImpl, and nits (by ezyang, reviewed by gchanan)
- [ ] #9652 Native localScalar (by ezyang, **UNREVIEWED AND FAILING TESTS**)
- [x] #9710 Devirtualize TensorImpl::toString (by ezyang, reviewed by gchanan)
- [ ] #9654 Use int64_t instead of ptrdiff_t for size / Rename flag to resizable_ (by cpuhrsch, **CHANGES REQUESTED AND FAILING TESTS**)

Pull Request resolved: #9713
Reviewed By: gchanan
Differential Revision: D8960882
Pulled By: ezyang
fbshipit-source-id: 99747b2c5462c7ff6809b67aacb4197626408204