Conversation

@cpuhrsch
Contributor

No description provided.

@cpuhrsch cpuhrsch changed the title Use int64_t instead of ptrdiff_t for size / Rename flag to resizable_ [TENSOR MERGE] Use int64_t instead of ptrdiff_t for size / Rename flag to resizable_ Jul 20, 2018
@cpuhrsch cpuhrsch force-pushed the sizeint64 branch 2 times, most recently from 05954d3 to 1534139 Compare July 20, 2018 20:21

This comment was marked as off-topic.

@cpuhrsch cpuhrsch force-pushed the sizeint64 branch 10 times, most recently from 56dda4c to 5969841 Compare July 20, 2018 21:25
@ezyang
Contributor

ezyang commented Jul 20, 2018

@pytorchbot retest this please

@cpuhrsch cpuhrsch force-pushed the sizeint64 branch 2 times, most recently from a7ecc6d to aa5bc14 Compare July 20, 2018 22:15
@ezyang
Contributor

ezyang commented Jul 23, 2018

22:31:01 /var/lib/jenkins/workspace/aten/src/ATen/test/atest.cpp:25
22:31:01 ...............................................................................
22:31:01 
22:31:01 /var/lib/jenkins/workspace/aten/src/ATen/test/atest.cpp:41: FAILED:
22:31:01   {Unknown expression after the reported line}
22:31:01 due to a fatal error condition:
22:31:01   SIGSEGV - Segmentation violation signal

Looks real.

@ezyang ezyang mentioned this pull request Jul 23, 2018
9 tasks
-Storage(const Storage& other) = delete;
-Storage(Storage&) = delete;
+Storage(const Storage&) = delete;
+Storage(Storage&&) = delete;

This comment was marked as off-topic.

: Storage(nullptr) {
// storage = ${THStorage}_newWithAllocator(${state,} size, allocator);
storage = new THStorage(ScalarType::${ScalarName}, size, allocator, TH_STORAGE_RESIZABLE);
THStorage_setResizable(storage, TH_STORAGE_RESIZABLE);

This comment was marked as off-topic.

#define __STDC_FORMAT_MACROS
#endif

#include <stdbool.h>

This comment was marked as off-topic.

  std::move(data),
  allocator,
- TH_STORAGE_REFCOUNTED | TH_STORAGE_RESIZABLE);
+ TH_STORAGE_RESIZABLE);

This comment was marked as off-topic.


 #define TH_STORAGE_REFCOUNTED 1
-#define TH_STORAGE_RESIZABLE 2
+#define TH_STORAGE_RESIZABLE 1

This comment was marked as off-topic.

Contributor

@ezyang ezyang left a comment


  1. There are some mistakes in the refactoring; that's probably why the tests fail
  2. Expunge TH_STORAGE entirely; just use a bool.
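The suggested cleanup (drop the TH_STORAGE_* bit flags in favor of a plain bool) could be sketched roughly as follows; this is a minimal illustration with simplified names, not the actual PyTorch code:

```cpp
#include <cassert>
#include <cstdint>

// Minimal sketch of the suggested cleanup (not the actual PyTorch code):
// size is an int64_t rather than ptrdiff_t, and resizability is a plain
// bool member instead of a TH_STORAGE_RESIZABLE bit in a flags field.
struct StorageImpl {
  int64_t size_;
  bool resizable_;

  StorageImpl(int64_t size, bool resizable)
      : size_(size), resizable_(resizable) {}

  void resize(int64_t new_size) {
    // With a bool there is nothing to mask; the check reads directly.
    assert(resizable_ && "storage is not resizable");
    size_ = new_size;
  }
};
```

With the bool there is no second flag for reference counting to collide with, which is what motivates expunging TH_STORAGE entirely.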

@ezyang ezyang force-pushed the tensor-merge branch 2 times, most recently from 53083b8 to beb71d8 Compare July 23, 2018 16:05
@ezyang ezyang mentioned this pull request Jul 23, 2018
11 tasks
goldsborough and others added 7 commits July 23, 2018 12:40
Summary:
Need an overload of `at::from_blob` for Variables.

ezyang colesbury ebetica
Pull Request resolved: pytorch#9605

Differential Revision: D8926226

Pulled By: goldsborough

fbshipit-source-id: e377c0d019d4377f3fc124614c7dcc562aa69990
Summary:
Pull Request resolved: pytorch#9717

D8722560 was landed with some build errors; unfortunately, the c10 code isn't part of contbuild yet.
Fixing them.

Differential Revision: D8954141

fbshipit-source-id: 2a082fb8041626e45ccd609f37a8ef807f6dad8a
Summary:
Pull Request resolved: pytorch#9550

as titled

Differential Revision: D8899226

fbshipit-source-id: 3c7cf026e8cbc0e95770e5a35b213a97bebba385
Summary:
This is a modification of the strategy from pytorch#8919 and pytorch#9579.

```
Previously, the CPU architecture-specific kernels self-registered with
the DispatchStub. When linking as part of a static library, this requires
the flag --whole-archive to be passed to the linker to ensure that the
object files for the kernels are included. Caffe2 and TensorFlow use that
strategy.

We ran into some issues with --whole-archive blowing up the binary size
of some downstream projects in Facebook. This PR avoids --whole-archive
for CPU kernels. The downside is that the generic code needs to be aware
of whether kernels are compiled with AVX and with AVX2 (via
HAVE_AVX_CPU_DEFINITION and HAVE_AVX2_CPU_DEFINITION).

The CUDA kernels still self-register with DispatchStub because the CPU
library is not aware of whether the CUDA library will be available at
runtime.

There are a few major changes to DispatchStub

 - The environment variable ATEN_CPU_CAPABILITY overrides the CPU
   capability detection code (previously ATEN_DISABLE_AVX/AVX2)

 - DispatchStub is defined in the generic native code instead of the
   CPU_CAPABILITY_DEFAULT kernel.
```
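The dispatch scheme in the quoted message could be sketched like this; a simplified illustration under assumed names (the real DispatchStub also covers CUDA self-registration and cpuid-based detection):

```cpp
#include <cstdlib>
#include <cstring>

// Simplified illustration of the dispatch scheme described above; names are
// approximations, and real detection would use cpuid instead of defaulting.
enum class CPUCapability { DEFAULT, AVX, AVX2 };

inline CPUCapability detect_capability() {
  // ATEN_CPU_CAPABILITY overrides detection, per the commit message.
  if (const char* e = std::getenv("ATEN_CPU_CAPABILITY")) {
    if (std::strcmp(e, "avx2") == 0) return CPUCapability::AVX2;
    if (std::strcmp(e, "avx") == 0) return CPUCapability::AVX;
    return CPUCapability::DEFAULT;
  }
  return CPUCapability::DEFAULT;
}

// The generic code owns the stub; capability-specific kernels are assigned
// explicitly (guarded by HAVE_AVX*_CPU_DEFINITION-style macros in real code)
// instead of self-registering, so --whole-archive is not needed.
template <typename FnPtr>
struct DispatchStub {
  FnPtr default_impl = nullptr;
  FnPtr avx_impl = nullptr;
  FnPtr avx2_impl = nullptr;

  FnPtr choose() const {
    switch (detect_capability()) {
      case CPUCapability::AVX2:
        if (avx2_impl) return avx2_impl;
        [[fallthrough]];
      case CPUCapability::AVX:
        if (avx_impl) return avx_impl;
        [[fallthrough]];
      default:
        return default_impl;
    }
  }
};
```

Because each slot is filled by an explicit assignment in the generic translation unit, the linker has a direct reference to every kernel object file, which is the property that self-registration lacked.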
Pull Request resolved: pytorch#9664

Differential Revision: D8943350

Pulled By: colesbury

fbshipit-source-id: 329229b0ee9ff94fc001b960287814bd734096ef
Summary:
Pull Request resolved: pytorch#9622

Implement a ctc_beam_search_decoder operator based on ctc_greedy_decoder.

Differential Revision: D8903100

fbshipit-source-id: 38973632cb437e5cfcb9ed3a48ed6b901c10efa3
…nsor (pytorch#9667)

Summary:
Pull Request resolved: pytorch#9667

MKL-DNN doesn't support 64-bit integers (https://github.com/intel/mkl-dnn/blob/cfee61bf81322b1ca315d5ed6cb9a9419618426b/include/mkldnn_types.h#L62-L75), so force-converting from `TensorCPU<long>` to an `s32` Ideep tensor will cause memory issues. This diff gives an alternative solution, where we just fall through to TensorCPU. The reasoning is that since MKL-DNN doesn't support 64-bit integer tensors, downstream ops have to be in CPUContext, so there is no reason to force-convert to an ideep tensor and back.

Reviewed By: pjh5

Differential Revision: D8943544

fbshipit-source-id: f514903cda27e34b8887271c9df56c8220895116
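The fallthrough decision described above can be illustrated with a small sketch; the names here (`DType`, `Backend`, `choose_backend`) are hypothetical, not the Caffe2 API:

```cpp
#include <cassert>

// Hypothetical sketch of the fallthrough decision; DType, Backend, and
// choose_backend are illustrative names, not the Caffe2 API.
enum class DType { Float, Int32, Int64 };
enum class Backend { Ideep, CPUFallthrough };

// MKL-DNN's data types (f32, s32, s16, s8, u8) top out at 32 bits,
// so 64-bit integer tensors cannot be represented faithfully.
inline bool mkldnn_supports(DType t) { return t != DType::Int64; }

// Rather than force-converting int64 data to s32 (and corrupting memory),
// unsupported dtypes simply stay as plain CPU tensors.
inline Backend choose_backend(DType t) {
  return mkldnn_supports(t) ? Backend::Ideep : Backend::CPUFallthrough;
}
```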
Summary:
Pull Request resolved: pytorch#9718

This patch switches the interpreter to use IValue's primitive numbers rather than tensors for computing on integers and floats. In addition to preparing the interpreter for first-class support of other types, this cleans up the handling of primitive numbers, making it possible to just use the normal operator-overloading dispatch to find the right implementation for numbers. As a result of this change, a lot of other functionality needed to be updated, since this is the first time we use non-tensors in many places in the code base.

Notes:
* Fixes code_template.py so that multi-line strings are indented correctly when used on a standalone line
* Cast operators (`int(x)`) are now functional. Some tests have additional conversions to integers because
we no longer allow implicit tensor -> integer conversions, following the same convention as in Python
* prim::ListConstruct/createList has been added to the interpreter for creating lists, and this has
replaced aten::stack for integer lists
* gen_jit_dispatch.py has been refactored so that non-tensor types use operators on IValues to extract
the primitives
* IValue gains a .to<T> method that is the equivalent of tensor_as but for IValue instead of at::Tensor
* `constant_as<T>` is switched over to using IValues's `.to<T>` method, to make conversion from constant->IValue->C++ type
more consistent. This functionality combined with `toIValue(Value*)` replaces the `tensor_as` and `as_tensor` family of functions.
* conditional expressions (if, loop) and operators related to them are now computed on integers rather than tensors
* IValue gains constructors for constructing from at::Scalar and converting to it. However, IValue itself will always store
the scalars as a double or int64.
* To align with python 3 syntax, TK_INT, TK_FLOAT, and TK_BOOL have been removed from the parser, and int/float/bool are just treated as special identifiers in the compiler,
along with print. These are represented as special sugared values with a `call` method implemented. For int/float/bool this implements casting behavior.
* Dropped shared_from_this from Type/Module. They were not needed, and they made debugging harder because they internally throw/catch exceptions.
* Shape propagation has been updated to support running nodes that include floating point primitive types, this required some refactoring of internal functions.
* TensorToNum and NumToTensor have actual implementations as operators now
* register_prim_ops now contains implementations of math operators for float/int primitive types, and for mixed (prim <+> tensor) versions. This removes the need for special handling in compiler.cpp
* Primitive math is now entirely handled by letting the compiler choose the right overloads. This removes tons of special casing in the compiler.
* incorporates eellison's change to allow casting from return values. Due to the addition of primitive support, the code needed slight modifications, so I just pre-merged it here.
* stack.h gains generic vararg versions of push/pop that know how to convert to/from C++ types:

```
at::Tensor a;
at::Scalar b;
pop(stack, a, b);
at::Tensor c = a + b;
push(stack, c);
```
apaszke
Pull Request resolved: pytorch#9584

Reviewed By: apaszke

Differential Revision: D8910546

Pulled By: zdevito

fbshipit-source-id: 0f3e60d4d22217f196a8f606549430e43b7e7e30
@cpuhrsch
Contributor Author

Replaced by #9728

@cpuhrsch cpuhrsch closed this Jul 23, 2018
facebook-github-bot pushed a commit that referenced this pull request Jul 24, 2018
Summary:
Constituent PRs:

- [x] #9553 Remove unnecessary functions from StorageDerived.h (by cpuhrsch, reviewed by ezyang)
- [x] #9588 Use THTensor/Storage for THVoidTensor/Storage (by cpuhrsch , reviewed by gchanan)
- [x] #9627 Delete context from tensor (by ezyang, reviewed by gchanan)
- [x] #9641 Tensor reorganization (by ezyang, reviewed by gchanan )
- [x] #9647 Remove dim_ from THTensor (by cpuhrsch, reviewed by ezyang)
- [x] #9650 Remove context (by cpuhrsch, reviewed by gchanan and ezyang)
- [x] #9715 Fix Windows build in tensor merge PR (by ezyang, reviewed by gchanan and SsnL)

Upcoming PRs which didn't make this cut:

- [x] #9644 Stride move to TensorImpl, and nits (by ezyang, reviewed by gchanan)
- [ ] #9652 Native localScalar  (by ezyang, **UNREVIEWED AND FAILING TESTS**)
- [x] #9710 Devirtualize TensorImpl::toString (by ezyang, reviewed by gchanan)
- [ ] #9654 Use int64_t instead of ptrdiff_t for size / Rename flag to resizable_  (by cpuhrsch, **CHANGES REQUESTED AND FAILING TESTS**)
Pull Request resolved: #9713

Reviewed By: gchanan

Differential Revision: D8960882

Pulled By: ezyang

fbshipit-source-id: 99747b2c5462c7ff6809b67aacb4197626408204
jramseyer pushed a commit to jramseyer/pytorch that referenced this pull request Jul 30, 2018
goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018