@Roger-luo Roger-luo commented Sep 13, 2018

Promotion inherits numpy.promote_types's rules (mostly), but NumPy does not define promotion between char and complex. Should we delete promotion for these types too?

Binary operators are not working on tensors with complex values. Still figuring out why...

cc: @ezyang

Roger-luo added 5 commits September 12, 2018 18:14
- add some C macros to support the dispatch in Dispatch.h
- fix the indexing below by adding support for complex numbers in `_local_scalar_dense_cpu`

from torch_complex import torch
a = torch.empty(2, 2, dtype=torch.complex128)
a[1, 1]
if (isComplexType(a) || isComplexType(b)) {
AT_ERROR("promoteTypes with complex numbers is not handled yet; figure out what the correct rules should be");
}
// if (isComplexType(a) || isComplexType(b)) {

constexpr auto f2 = ScalarType::Half;
constexpr auto f4 = ScalarType::Float;
constexpr auto f8 = ScalarType::Double;
constexpr auto c4 = ScalarType::ComplexFloat;

ezyang (Contributor) commented Sep 13, 2018

It looks like our vectorizer doesn't know how to deal with complex numbers, unfortunately. Happy to take the promotion logic patch separately.

@Roger-luo Roger-luo changed the title RFC: Define complex type promotions and support complex binary operators WIP: Define complex type promotions and support complex binary operators Sep 17, 2018
Roger-luo (Author) commented:

Discussed on Slack. We will allow promotion between char and complex for now (this differs from NumPy).

@Roger-luo Roger-luo changed the title WIP: Define complex type promotions and support complex binary operators Define complex type promotions and support complex binary operators Sep 17, 2018
} else if (type_ == ParameterType::DOUBLE) {
default_double = atof(str.c_str());
} else if (type_ == ParameterType::COMPLEX) {
default_complex[0] = atof(str.c_str()); // TODO: parse "x + xj"?

ssnl (Collaborator) commented Sep 25, 2018

@Roger-luo This needs a rebase to trigger CI again. Also, I wonder if we can have some tests for this.

Remove AVX support for complex

Revert "Remove AVX support for complex"

This reverts commit 8ed90ea18bec7629274aafb6e9c9fdef306dbda7.
@Roger-luo Roger-luo changed the title Define complex type promotions and support complex binary operators WIP: Define complex type promotions and support complex binary operators Sep 26, 2018
Roger-luo (Author) commented Sep 26, 2018

@ssnl thanks!

I'll need to fix the CI error I got with AT_DISPATCH_ALL_TYPES_AND_COMPLEX first... It works fine on my Mac but errors on Ubuntu :-(

Then I'll be able to add some tests.

ezyang (Contributor) commented Sep 28, 2018

(For the record: I helped Roger repro the issue offline)

PhilSee commented Sep 30, 2018

On the AT_DISPATCH_ALL_TYPES issue: This might actually be related to a bug in GCC, where the visibility of a lambda and its captured objects list is treated differently by GCC: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80947 . Adding a __attribute__ ((visibility("hidden"))) to the tensor_cpu declaration in TensorFactories.cpp:643 is a suggested workaround that actually does the trick for me. It has some pitfalls, however. More on Slack.

Roger-luo (Author) commented:

Okay... this is related to different gcc versions. A minimal working example (MWE):

#include <iostream>
#include <complex>

template <class T>
struct Vec256 {
public:
  T values[32 / sizeof(T)] = {0};
  static constexpr int size = 32 / sizeof(T);
  Vec256() {}
  Vec256(T val) {
    for (int i = 0; i != size; i++) {
      values[i] = val;
    }
  }
};

int main() {
  Vec256<std::complex<double>> r(1);
  std::cout << r.values[1] << std::endl;
  return 0;
}

This works fine with clang and later versions of g++, but won't build with g++ 4.8. Specializing the class manually for complex works.

#include "vec256_double.h"
#include "vec256_complex.h"
// #include "vec256_complex_float.h"
// #include "vec256_complex_double.h"

Roger-luo (Author) commented:

It seems that something breaks the CUDA tests:

22:56:08 ERROR: test_mul_intertype_scalar (__main__.TestCuda)
22:56:08 ----------------------------------------------------------------------
22:56:08 Traceback (most recent call last):
22:56:08   File "/var/lib/jenkins/workspace/test/common.py", line 275, in wrapper
22:56:08     method(*args, **kwargs)
22:56:08   File "test_cuda.py", line 925, in test_mul_intertype_scalar
22:56:08     test_mul(torch.float16)
22:56:08   File "test_cuda.py", line 922, in test_mul
22:56:08     x *= y
22:56:08 RuntimeError: TensorIterator expected type torch.cuda.DoubleTensor but got torch.cuda.HalfTensor[]

Is it a type promotion problem?

ezyang (Contributor) commented Oct 2, 2018

Looks like a promotion problem to me!

By the way, @gchanan brought to my attention that we have a bunch of latent bugs in PyTorch, owing to the fact that in a number of places we do if (scalar.is_float()) { /* assume it's float */ } else { /* assume it's int */ }. We'll need to fix these. Maybe this is causing the error you see above.

ezyang (Contributor) commented Oct 12, 2018

@Roger-luo Do you need any help with this patch?

@PhilippPelz PhilippPelz mentioned this pull request Oct 17, 2018
facebook-github-bot (Contributor) commented:

Hi @Roger-luo!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

Roger-luo (Author) commented:

Ah, I think I should just close this PR since there's complex number support now!

@Roger-luo Roger-luo closed this Mar 29, 2022