Remove conversion operator from Type to TensorOptions #17603
Conversation
Differential Revision: D14276588 Differential Version: 73837916
Differential Revision: D14276588 Differential Version: 73946908
Differential Revision: D14276588 Differential Version: 73955527
Differential Revision: D14276588 Differential Version: 74349067
Differential Revision: D14276588 Differential Version: 74395304
Differential Revision: D14276588 Differential Version: 74488006
Differential Revision: D14276588 Differential Version: 74496474
Differential Revision: D14276588 Differential Version: 74537075
Differential Revision: D14276588 Differential Version: 74694652
Differential Revision: D14276588 Differential Version: 74711137
Differential Revision: D14276588 Differential Version: 74714417
Differential Revision: D14276588 Differential Version: 74716823
Differential Revision: D14276588 Differential Version: 74726652
  IntTensor _to_csr_int(const LongTensor& rowIndices, int64_t dim, int64_t nnz) {
-   IntTensor csr = at::empty({dim+1}, CUDA(kInt));
    IntTensor rowIndicesInt = at::empty({rowIndices.size(0)}, CUDA(kInt));
+   IntTensor csr = at::empty({dim+1}, TensorOptions(kCPU).dtype(kInt));
didn't you change the types of these?
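For reference, if the intent was to keep this temporary on the same CUDA device as the input rather than moving it to CPU, a minimal sketch of the TensorOptions spelling would be (illustrative only, reusing `rowIndices` from the hunk above; this is not the code in the PR):

```cpp
// Sketch: build the options from the input tensor so the device (CUDA here)
// is preserved, and only override the dtype. Whether CPU was actually
// intended is exactly what the question above is asking.
IntTensor csr = at::empty({dim + 1}, rowIndices.options().dtype(at::kInt));
```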
  int64_t nnz = sparse._nnz();
- LongTensor indices = at::empty({1, nnz}, CUDA(kLong));
+ LongTensor indices = at::empty({1, nnz}, TensorOptions(kCPU).dtype(kLong));
and here.
  AT_CHECK(cuda::getApplyGrid(valueSize, grid, curDevice), "mul: Argument #0: tensor too large or too many dimensions");
- LongTensor resultNnz = at::empty({1}, CUDA(kLong));
+ LongTensor resultNnz = at::empty({1}, TensorOptions(kCPU).dtype(kLong));
and here.
- void TestZeroDim(Type& type) {
    Tensor a = at::scalar_tensor(4, type.options()); // rand(type, {1});
+ void TestZeroDim(TensorOptions &options) {
I don't think you need references here all over this file -- the issue was you can't have a value-type Type.
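To illustrate the point, a minimal sketch of the by-value signature (the body line is assumed for illustration, not copied from the diff):

```cpp
// TensorOptions is a small value type, so unlike at::Type it can be passed
// by value; no reference is needed.
void TestZeroDim(at::TensorOptions options) {
  at::Tensor a = at::scalar_tensor(4, options);
  // ...
}
```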
  // can't expand empty tensor
- void TestEmptyTensor(Type& T) {
    auto empty = randn({0}, T);
+ void TestEmptyTensor(TensorOptions& options) {
this file too.
- test(CPU(kFloat), CPU(kDouble));
+ auto options = device(kCPU).dtype(kFloat);
+ test(CPU(kFloat), options, CPU(kDouble));
I don't think this makes sense.
  if (at::hasCUDA()) {
-   test(CUDA(kFloat), CUDA(kDouble));
+   auto options = device(kCUDA).dtype(kFloat);
+   test(CUDA(kFloat), options, CUDA(kDouble));
this either.
  }
- void test(Type &T) {
+ void test(TensorOptions &options) {
same issue with & here.
  using namespace at;
- void TestSimpleCase(Type& T) {
    auto a = randn({2, 3, 4, 5}, T);
+ void TestSimpleCase(TensorOptions& options) {
here too.
  REQUIRE_OPTIONS(kCPU, -1, kInt, kStrided);
- options = TensorOptions(getNonVariableType(Backend::SparseCPU, kFloat));
+ options = TensorOptions(kCPU).dtype(kFloat).layout(kSparse);
nit: how come sometimes you do TensorOptions(kCPU) and other times device(kCPU) to start?
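For what it's worth, the two spellings should describe the same options in the end; a quick sketch of both (assuming `using namespace at;` as in these tests):

```cpp
// Both start from a CPU device and then set the dtype; the resulting
// TensorOptions are equivalent, so the difference is purely stylistic.
auto a = TensorOptions(kCPU).dtype(kFloat);   // explicit constructor
auto b = device(kCPU).dtype(kFloat);          // free-function factory
```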
  TypeAndSize(const Tensor & t)
    : sizes(t.sizes().vec())
-   , type(&t.type()) {}
+   , backend(t.type().backend())
store TensorOptions?
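A rough sketch of what storing TensorOptions instead of a Backend (or a Type pointer) could look like; the member layout and the zeros() method are assumptions for illustration, not the code in this PR:

```cpp
#include <ATen/ATen.h>
#include <vector>

// Caches everything needed to recreate a matching zero tensor later:
// the sizes plus the TensorOptions (device, dtype, layout).
struct TypeAndSize {
  TypeAndSize(const at::Tensor& t)
      : sizes(t.sizes().vec()),
        options(t.options()) {}

  at::Tensor zeros() const {
    return at::zeros(sizes, options);
  }

 private:
  std::vector<int64_t> sizes;
  at::TensorOptions options;
};
```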
Differential Revision: D14276588 Differential Version: 74808628
Differential Revision: D14276588 Differential Version: 74811464
Differential Revision: D14276588 Differential Version: 74818456
Differential Revision: D14276588 Differential Version: 74831201
Differential Revision: D14276588 Differential Version: 74843008
Differential Revision: D14276588 Differential Version: 74983512
Stack:
:white_circle: #17530 Small clean up of aten_op 💚
:white_circle: #17601 Store ScalarType and Backend instead of Type in TensorIterator 💚
:white_circle: #17785 Remove Type::elementSizeInBytes 💚
:white_circle: #17723 Store python default type as PyTensorType instead of at::Type 💚
:white_circle: #17786 Pass ScalarType separately from Type in python constructors 💚
:white_circle: #17792 Remove Type::ScalarType() 💚
:black_circle: #17603 Remove conversion operator from Type to TensorOptions 💛
:white_circle: #17787 Add ScalarType arg to Type::options() 💛
Differential Revision: D14276588