
Conversation

@li-roy li-roy commented Mar 1, 2019

Stack:
    ⚪  #17530 Small clean up of aten_op  💚
    ⚪  #17601 Store ScalarType and Backend instead of Type in TensorIterator  💚
    ⚪  #17785 Remove Type::elementSizeInBytes  💚
    ⚪  #17723 Store python default type as PyTensorType instead of at::Type  💚
    ⚪  #17786 Pass ScalarType separately from Type in python constructors  💚
    ⚪  #17792 Remove Type::ScalarType()  💚
    ⚫  #17603 Remove conversion operator from Type to TensorOptions  💛
    ⚪  #17787 Add ScalarType arg to Type::options()  💛

Differential Revision: D14276588

Differential Revision: D14276588
Differential Version: 73837916
Differential Revision: D14276588
Differential Version: 73946908
@li-roy li-roy requested a review from gchanan March 2, 2019 01:16
Differential Revision: D14276588
Differential Version: 73955527
royboy added 4 commits March 5, 2019 15:03
Differential Revision: D14276588
Differential Version: 74349067
Differential Revision: D14276588
Differential Version: 74395304
Differential Revision: D14276588
Differential Version: 74488006
Differential Revision: D14276588
Differential Version: 74496474
@li-roy li-roy changed the base branch from export-D14274754 to export-D14351082 March 6, 2019 20:42
royboy added 2 commits March 6, 2019 16:33
Differential Revision: D14276588
Differential Version: 74537075
Differential Revision: D14276588
Differential Version: 74694652
@li-roy li-roy changed the base branch from export-D14351082 to export-D14379075 March 8, 2019 00:28
royboy added 3 commits March 7, 2019 18:19
Differential Revision: D14276588
Differential Version: 74711137
Differential Revision: D14276588
Differential Version: 74714417
Differential Revision: D14276588
Differential Version: 74716823
@li-roy li-roy changed the base branch from export-D14379075 to export-D14381774 March 8, 2019 03:23
Differential Revision: D14276588
Differential Version: 74726652
IntTensor _to_csr_int(const LongTensor& rowIndices, int64_t dim, int64_t nnz) {
IntTensor csr = at::empty({dim+1}, CUDA(kInt));
IntTensor rowIndicesInt = at::empty({rowIndices.size(0)}, CUDA(kInt));
IntTensor csr = at::empty({dim+1}, TensorOptions(kCPU).dtype(kInt));

didn't you change the types of these?
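For context, the old CUDA(kInt) factory pinned these tensors to the GPU, while the new line moves them to CPU. If the intent was to keep them on the GPU, a device-preserving translation would look roughly like the sketch below (variable names taken from the surrounding diff; the exact call sites may differ):

    // Sketch: keep the CUDA device instead of silently switching to CPU.
    IntTensor csr = at::empty({dim + 1}, at::device(at::kCUDA).dtype(at::kInt));
    IntTensor rowIndicesInt = at::empty({rowIndices.size(0)}, at::device(at::kCUDA).dtype(at::kInt));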

int64_t nnz = sparse._nnz();

LongTensor indices = at::empty({1, nnz}, CUDA(kLong));
LongTensor indices = at::empty({1, nnz}, TensorOptions(kCPU).dtype(kLong));

and here.

AT_CHECK(cuda::getApplyGrid(valueSize, grid, curDevice), "mul: Argument #0: tensor too large or too many dimensions");

LongTensor resultNnz = at::empty({1}, CUDA(kLong));
LongTensor resultNnz = at::empty({1}, TensorOptions(kCPU).dtype(kLong));

and here.


void TestZeroDim(Type& type) {
Tensor a = at::scalar_tensor(4, type.options()); // rand(type, {1});
void TestZeroDim(TensorOptions &options) {

I don't think you need references here all over this file -- the issue was you can't have a value-type Type.
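Since TensorOptions is a small, cheaply copyable value type (unlike Type), the reference can be dropped entirely; a minimal sketch of that signature, based on the test shown above:

    // TensorOptions can be taken by value (or const&); the reference was only
    // needed back when the parameter was a Type.
    void TestZeroDim(at::TensorOptions options) {
      at::Tensor a = at::scalar_tensor(4, options);
      // ... rest of the test unchanged ...
    }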

// can't expand empty tensor
void TestEmptyTensor(Type& T) {
auto empty = randn({0}, T);
void TestEmptyTensor(TensorOptions& options) {

this file too.


test(CPU(kFloat), CPU(kDouble));
auto options = device(kCPU).dtype(kFloat);
test(CPU(kFloat), options, CPU(kDouble));

I don't think this makes sense.
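One reading of this comment is that passing both the legacy CPU(kFloat) Type and a TensorOptions to test() is redundant. Purely as a hypothetical, if test() were changed to accept only TensorOptions (which is not what this diff does), the call could become:

    // Hypothetical options-only call, assuming test() takes two TensorOptions:
    test(at::device(at::kCPU).dtype(at::kFloat),
         at::device(at::kCPU).dtype(at::kDouble));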

if (at::hasCUDA()) {
test(CUDA(kFloat), CUDA(kDouble));
auto options = device(kCUDA).dtype(kFloat);
test(CUDA(kFloat), options, CUDA(kDouble));

this either.

}

void test(Type &T) {
void test(TensorOptions &options) {

same issue with & here.

using namespace at;
void TestSimpleCase(Type& T) {
auto a = randn({2, 3, 4, 5}, T);
void TestSimpleCase(TensorOptions& options) {

here too.

REQUIRE_OPTIONS(kCPU, -1, kInt, kStrided);

options = TensorOptions(getNonVariableType(Backend::SparseCPU, kFloat));
options = TensorOptions(kCPU).dtype(kFloat).layout(kSparse);

nit: how come sometimes you do TensorOptions(kCPU) and other times device(kCPU) to start?
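For reference, the two spellings build the same options: TensorOptions has a Device constructor, and at::device() is a free-function shortcut that returns a TensorOptions. A small sketch of the equivalence:

    // Equivalent starting points for an options chain:
    auto a = at::TensorOptions(at::kCPU).dtype(at::kFloat).layout(at::kSparse);
    auto b = at::device(at::kCPU).dtype(at::kFloat).layout(at::kSparse);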

TypeAndSize(const Tensor & t)
: sizes(t.sizes().vec())
, type(&t.type()) {}
, backend(t.type().backend())

store TensorOptions?
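A sketch of what storing TensorOptions could look like here; the zeros() helper and field names are illustrative, inferred from the members visible in the diff rather than copied from the actual generated code:

    #include <ATen/ATen.h>
    #include <vector>

    struct TypeAndSize {
      TypeAndSize(const at::Tensor& t)
          : sizes(t.sizes().vec()),
            options(t.options()) {}  // captures dtype, device, and layout in one member

      // Recreate a zero tensor with the original shape and options.
      at::Tensor zeros() const { return at::zeros(sizes, options); }

      std::vector<int64_t> sizes;
      at::TensorOptions options;
    };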

royboy added 6 commits March 8, 2019 13:11
Differential Revision: D14276588
Differential Version: 74808628
Differential Revision: D14276588
Differential Version: 74811464
Differential Revision: D14276588
Differential Version: 74818456
Differential Revision: D14276588
Differential Version: 74831201
Differential Revision: D14276588
Differential Version: 74843008
Differential Revision: D14276588
Differential Version: 74983512
@li-roy li-roy closed this Apr 16, 2019
@ezyang ezyang deleted the export-D14276588 branch May 30, 2019 15:48