Conversation

@ZolotukhinM

No description provided.

@ZolotukhinM ZolotukhinM requested review from driazati, suo and zdevito May 30, 2019 10:50
@pytorchbot pytorchbot added the oncall: jit Add this issue/PR to JIT oncall triage queue label May 30, 2019
@ZolotukhinM (Author)

Below is how the error messages changed.

Example 1:

cu = torch.jit.CompilationUnit('''
    def foo(x, y):
        return x.avd(y)
''')

Before:

RuntimeError:
unknown builtin op: aten::avd
Here are some suggestions:
	aten::svd
	aten::add
	aten::gcd
	aten::add_
	aten::abs
	aten::rand
	aten::std
	aten::mv
	aten::any
	aten::all
	aten::addr
	aten::ord
	aten::save
:

    def foo(x, y):
        return x.avd(y)
               ~~~~~ <--- HERE

After:

RuntimeError:
Unknown builtin op: aten::avd.
Here are some suggestions:
	aten::svd
	aten::add
	aten::gcd
	aten::add_
	aten::abs
	aten::rand
	aten::std
	aten::mv
	aten::any
	aten::all
	aten::addr
	aten::ord
	aten::save

The original call is:

    def foo(x, y):
        return x.avd(y)
               ~~~~~ <--- HERE

Example 2:

cu = torch.jit.CompilationUnit('''
    def foo(x, y):
        return torch.add(x,y,x,y)
''')

Before:

RuntimeError:
arguments for call are not valid:

  for operator aten::add(Tensor self, Tensor other, *, Scalar alpha=<default>) -> Tensor:
  expected at most 2 arguments but found 4 positional arguments.

      def foo(x, y):
          return torch.add(x,y,x,y)
                 ~~~~~~~~~ <--- HERE


  for operator aten::add(Tensor self, Scalar other, Scalar alpha=<default>) -> Tensor:
  expected at most 3 arguments but found 4 positional arguments.

      def foo(x, y):
          return torch.add(x,y,x,y)
                 ~~~~~~~~~ <--- HERE


  for operator aten::add(Tensor self, Tensor other, *, Scalar alpha=<default>, Tensor out) -> Tensor:
  argument out not provided.

      def foo(x, y):
          return torch.add(x,y,x,y)
                 ~~~~~~~~~ <--- HERE

  for operator aten::add(string a, string b) -> string:
  Expected a value of type str for argument 'a' but found Tensor

      def foo(x, y):
          return torch.add(x,y,x,y)
                           ~ <--- HERE

  for operator aten::add(int[] a, int[] b) -> int[]:
  Expected a value of type List[int] for argument 'a' but found Tensor

      def foo(x, y):
          return torch.add(x,y,x,y)
                           ~ <--- HERE

  for operator aten::add(float[] a, float[] b) -> float[]:
  Expected a value of type List[float] for argument 'a' but found Tensor

      def foo(x, y):
          return torch.add(x,y,x,y)
                           ~ <--- HERE

  for operator aten::add(bool[] a, bool[] b) -> bool[]:
  Expected a value of type List[bool] for argument 'a' but found Tensor

      def foo(x, y):
          return torch.add(x,y,x,y)
                           ~ <--- HERE

  for operator aten::add(Tensor[] a, Tensor[] b) -> Tensor[]:
  Expected a value of type List[Tensor] for argument 'a' but found Tensor

      def foo(x, y):
          return torch.add(x,y,x,y)
                           ~ <--- HERE

  for operator aten::add(t[] a, t[] b) -> t[]:
  Could not match type Tensor to List[t] in argument 'a': Cannot match List[t] to Tensor

      def foo(x, y):
          return torch.add(x,y,x,y)
                           ~ <--- HERE

  for operator aten::add(int a, int b) -> int:
  expected at most 2 arguments but found 4 positional arguments.

      def foo(x, y):
          return torch.add(x,y,x,y)
                 ~~~~~~~~~ <--- HERE


  for operator aten::add(float a, float b) -> float:
  expected at most 2 arguments but found 4 positional arguments.

      def foo(x, y):
          return torch.add(x,y,x,y)
                 ~~~~~~~~~ <--- HERE


  for operator aten::add(int a, float b) -> float:
  expected at most 2 arguments but found 4 positional arguments.

      def foo(x, y):
          return torch.add(x,y,x,y)
                 ~~~~~~~~~ <--- HERE


  for operator aten::add(float a, int b) -> float:
  expected at most 2 arguments but found 4 positional arguments.

      def foo(x, y):
          return torch.add(x,y,x,y)
                 ~~~~~~~~~ <--- HERE


  for operator add(float a, Tensor b) -> Tensor:
  expected at most 2 arguments but found 4 positional arguments.

      def foo(x, y):
          return torch.add(x,y,x,y)
                 ~~~~~~~~~ <--- HERE


  for operator add(int a, Tensor b) -> Tensor:
  expected at most 2 arguments but found 4 positional arguments.

      def foo(x, y):
          return torch.add(x,y,x,y)
                 ~~~~~~~~~ <--- HERE

for call at:

    def foo(x, y):
        return torch.add(x,y,x,y)
               ~~~~~~~~~ <--- HERE

After:

RuntimeError:
Arguments for call are not valid.
The following operator variants are available:

  aten::add(Tensor self, Tensor other, *, Scalar alpha=<default>) -> Tensor:
  Expected at most 2 arguments but found 4 positional arguments.

  aten::add(Tensor self, Scalar other, Scalar alpha=<default>) -> Tensor:
  Expected at most 3 arguments but found 4 positional arguments.

  aten::add(Tensor self, Tensor other, *, Scalar alpha=<default>, Tensor out) -> Tensor:
  Argument out not provided.

  aten::add(string a, string b) -> string:
  Expected a value of type str for argument 'a' but found Tensor.

  aten::add(int[] a, int[] b) -> int[]:
  Expected a value of type List[int] for argument 'a' but found Tensor.

  aten::add(float[] a, float[] b) -> float[]:
  Expected a value of type List[float] for argument 'a' but found Tensor.

  aten::add(bool[] a, bool[] b) -> bool[]:
  Expected a value of type List[bool] for argument 'a' but found Tensor.

  aten::add(Tensor[] a, Tensor[] b) -> Tensor[]:
  Expected a value of type List[Tensor] for argument 'a' but found Tensor.

  aten::add(t[] a, t[] b) -> t[]:
  Could not match type Tensor to List[t] in argument 'a': Cannot match List[t] to Tensor.

  aten::add(int a, int b) -> int:
  Expected at most 2 arguments but found 4 positional arguments.

  aten::add(float a, float b) -> float:
  Expected at most 2 arguments but found 4 positional arguments.

  aten::add(int a, float b) -> float:
  Expected at most 2 arguments but found 4 positional arguments.

  aten::add(float a, int b) -> float:
  Expected at most 2 arguments but found 4 positional arguments.

  add(float a, Tensor b) -> Tensor:
  Expected at most 2 arguments but found 4 positional arguments.

  add(int a, Tensor b) -> Tensor:
  Expected at most 2 arguments but found 4 positional arguments.

The original call is:

    def foo(x, y):
        return torch.add(x,y,x,y)
               ~~~~~~~~~ <--- HERE

@suo (Member) left a comment

LGTM. I expect we have some tests that regex on error output, so you'll need to update those.
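To illustrate why those tests break: the new wording changes the capitalization and punctuation of the first line, so any regex pinned to the old text stops matching. A minimal sketch (the message excerpt is taken from Example 1 above; the patterns are hypothetical, not actual PyTorch test code):

```python
import re

# Excerpt of the new error text from Example 1 ("After") in this PR.
new_msg = (
    "RuntimeError:\n"
    "Unknown builtin op: aten::avd.\n"
    "Here are some suggestions:\n"
    "\taten::svd\n"
)

# A test pinned to the old lowercase wording silently stops matching:
old_pattern = re.compile(r"unknown builtin op: aten::avd\n")
new_pattern = re.compile(r"Unknown builtin op: aten::avd\.")

print(old_pattern.search(new_msg) is not None)  # False: the old regex no longer matches
print(new_pattern.search(new_msg) is not None)  # True
```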

@zdevito (Contributor)

zdevito commented May 31, 2019

Another thing I think we can change: if this was a method invocation, we should only consider builtins whose first argument matches. That way, when you write a + b, you don't see everything in the system, just things with an __add__ method on a's type.
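That filtering idea can be sketched as a small predicate over candidate schemas (the Schema class and type strings below are hypothetical simplifications for illustration, not the compiler's actual FunctionSchema machinery):

```python
from dataclasses import dataclass

@dataclass
class Schema:
    # Hypothetical, simplified stand-in for a real operator schema.
    signature: str
    first_arg_type: str

def method_call_candidates(schemas, self_type):
    # For a method invocation like `a + b`, only surface overloads whose
    # first argument matches the receiver's type.
    return [s for s in schemas if s.first_arg_type == self_type]

overloads = [
    Schema("aten::add(Tensor self, Tensor other) -> Tensor", "Tensor"),
    Schema("aten::add(int a, int b) -> int", "int"),
    Schema("aten::add(float a, float b) -> float", "float"),
]

for s in method_call_candidates(overloads, "Tensor"):
    print(s.signature)  # only the Tensor overload is suggested
```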

@zdevito zdevito removed their request for review May 31, 2019 03:03
@ZolotukhinM ZolotukhinM force-pushed the schema_matching-better-errors branch from b1de5c6 to a70e2bf Compare May 31, 2019 13:56
@pytorchbot pytorchbot added the module: internals Related to internal abstractions in c10 and ATen label May 31, 2019
@ZolotukhinM ZolotukhinM force-pushed the schema_matching-better-errors branch from a70e2bf to 955a475 Compare May 31, 2019 16:01
@ZolotukhinM ZolotukhinM force-pushed the schema_matching-better-errors branch from 955a475 to c785c2f Compare June 11, 2019 21:41
@facebook-github-bot (Contributor) left a comment

@ZolotukhinM has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ZolotukhinM ZolotukhinM force-pushed the schema_matching-better-errors branch from c785c2f to 510530d Compare June 12, 2019 18:17
@pytorchbot pytorchbot added the module: cpp Related to C++ API label Jun 12, 2019
@facebook-github-bot (Contributor) left a comment

@ZolotukhinM has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jun 12, 2019
Summary: Pull Request resolved: pytorch/pytorch#21141

Differential Revision: D15769066

Pulled By: ZolotukhinM

fbshipit-source-id: 5853e0360581c44e42b068add3bf2bc68e671b2b
@facebook-github-bot (Contributor)

@ZolotukhinM merged this pull request in 9691025.

@yf225 (Contributor)

yf225 commented Jun 13, 2019

This PR didn't pass CUDA tests in the CI and is breaking master. I am reverting.

@ZolotukhinM ZolotukhinM reopened this Jun 13, 2019
@ZolotukhinM ZolotukhinM force-pushed the schema_matching-better-errors branch from 510530d to a8c5049 Compare June 13, 2019 18:24
@pytorchbot pytorchbot added the module: custom-operators custom operators, custom ops, custom-operators, custom-ops label Jun 13, 2019
@facebook-github-bot (Contributor) left a comment

@ZolotukhinM has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Jun 14, 2019
Summary: Pull Request resolved: pytorch/pytorch#21141

Differential Revision: D15808354

Pulled By: ZolotukhinM

fbshipit-source-id: 16d938fd5acafb445a0c433cabc9a55cab563165
@ZolotukhinM ZolotukhinM deleted the schema_matching-better-errors branch April 1, 2020 18:28