
Conversation

@ezyang ezyang commented Oct 31, 2022

Stack from ghstack (oldest at bottom):

This also comes with some bug fixes that were uncovered while doing
this:

- Forward device calls to the inner tensor in FunctionalTensorWrapper.

- Make legacyExtractDispatchKey exclude Functionalize, so that
  it can get at the real device type key. This is noncontroversial.

- Stop stripping Dense from the key set. The reason for this is that
  FunctionalTensorWrapper may be used in contexts where people
  query whether or not it is dense. If it doesn't report this correctly
  (from the dispatch key), it will cause errors. This caused some
  torchbench models to fail when I did one-pass tracing. (See the
  sketch below.)

- Save and restore the reapply-views TLS correctly.

Signed-off-by: Edward Z. Yang <[email protected]>
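
A minimal sketch of what the first and third fixes mean in practice, using the private torch._to_functional_tensor hook; this is illustrative only (not a test from this PR), and the asserted behavior is simply what the bullets above describe:

```python
import torch

t = torch.randn(2, 2)
f_t = torch._to_functional_tensor(t)  # wrap in a FunctionalTensorWrapper

# Device calls are forwarded to the inner tensor...
assert f_t.device == t.device

# ...and because Dense is no longer stripped from the wrapper's key set,
# code that asks "is this a dense/strided tensor?" gets the right answer.
assert not f_t.is_sparse
assert f_t.layout == torch.strided
```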

@ezyang ezyang requested a review from Chillee as a code owner October 31, 2022 02:30

pytorch-bot bot commented Oct 31, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/88063

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 324e7ba:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.


ezyang commented Oct 31, 2022

Bah these tests are passing on the branch, need to find the fixes I need


ezyang commented Oct 31, 2022

it's probably #87575

ezyang added a commit that referenced this pull request Oct 31, 2022
ghstack-source-id: 622ed0e
Pull Request resolved: #88063
@ezyang ezyang added the `release notes: composability` and `topic: not user facing` labels Oct 31, 2022

ezyang commented Nov 1, 2022

This is blocked on #87647

// we need to know if we're dispatching to AutogradCPU or AutogradXLA).
// Instead, it's sufficient to remove the `Dense` dispatch key,
// which prevents us from accidentally trying to directly run a CPU/CUDA kernel.
key_set_ = key_set_.remove(c10::DispatchKey::Dense);
Contributor:

I am curious in what contexts we're querying whether or not a tensor is Dense 🤔

Probably user code that asks if the tensor is sparse or not?

ezyang (Contributor, Author):

Sorry, I don't remember, but yeah, in general it was stuff that was looking at the key set to figure out what type of tensor this was. It's too bad we had to get rid of this.
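
One plausible shape of the user-level check being discussed (purely illustrative; the actual torchbench call sites are not identified in this thread):

```python
import torch

def maybe_densify(t: torch.Tensor) -> torch.Tensor:
    # Checks like is_sparse/layout are ultimately answered from the tensor's
    # dispatch key set, so a wrapper that dropped Dense could misreport here.
    if t.is_sparse:
        return t.to_dense()
    return t
```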

// We override a bunch of _custom(), so make sure they get called
// TODO: metadata copying may not actually be necessary then
set_custom_sizes_strides(SizesStridesPolicy::CustomSizes);
set_custom_device(true);
Contributor:

thanks!

@ezyang ezyang added the `ciflow/inductor` and `ciflow/trunk` labels Nov 2, 2022
if (is_nested()) {
  return false;
}
if (key_set_.has(DispatchKey::Functionalize)) {
Contributor:

what ended up causing this to be necessary?

The old behavior seems a bit more correct - if the inner tensor advertises as strided, the outer tensor does too.

ezyang (Contributor, Author):

Oh, this is to get the fix where we don't generate as_strided calls in backwards. I made a policy decision for functionalization to always opt into view meta reconstruction.
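
For context, a rough sketch of the "reapply views" policy being referred to, using the private functionalization hooks that exist in this era of PyTorch (the exact call sequence is an assumption modeled on how functionalize() is implemented, not code from this PR):

```python
import torch

def f(x):
    y = x.view(2, 2)  # a view of the input
    y.add_(1)         # a mutation through the view
    return y

base = torch.randn(4)
f_base = torch._to_functional_tensor(base)

# reapply_views=True is the view-meta-reconstruction policy: regenerated
# tensors are rebuilt by replaying the recorded view ops rather than by a
# single as_strided on the base tensor.
torch._enable_functionalization(reapply_views=True)
try:
    out = f(f_base)
finally:
    torch._disable_functionalization()

torch._sync(out)
result = torch._from_functional_tensor(out)
```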

# I do something unsound instead
# assert arg is new_arg, "input argument was mutated, this is not valid"
if arg is not new_arg:
    assert arg.shape == new_arg.shape
bdhirsh (Contributor) commented Nov 4, 2022:

So, there are existing tests in test_functionalization.py that mutate input metadata.

To preserve the old behavior, should this PR just do the same thing that functionalize() does, and
use .as_strided_() to mutate the input metadata when detected, and wait for
#82602 to add the asserts?

ezyang (Contributor, Author):

It's a moot point, because dynamo never calls AOTAutograd with ops that mutate inputs, and as_strided_ is in that set. (And direct user usage is unlikely to call this.)
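
For readers following along, a purely illustrative contrast between the metadata mutations being discussed and ordinary data mutations (not code from the PR):

```python
import torch

x = torch.randn(2, 3)

x.add_(1)                      # data mutation: storage contents change, metadata unchanged
x.transpose_(0, 1)             # metadata mutation: sizes/strides change in place
x.as_strided_((3, 2), (2, 1))  # the as_strided_-style metadata mutation mentioned above
```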


# Deleting this in a followup
xfail('nn.functional.feature_alpha_dropout', 'with_train'),
xfail('nn.functional.pad', 'circular'),
Contributor:

Did this PR end up fixing these xfails? 🤔

I thought I fixed this one in particular with #88198. Maybe tracing with functionalization in one go sidesteps this issue.


def f(x):
    return CustomFn.apply(x)

self.verify_aot_autograd(f, [torch.randn(3)])
Contributor:

What's actually going on here, when we pass a custom autograd.function object into aot_autograd to be traced?

(1) I thought that "support for custom autograd functions" was pretty unrelated to this change
(2) Does this actually work properly today? (do we trace "through" the custom function, and add its custom fwd and bwd ops into our traced graph)

ezyang (Contributor, Author):

We trace inside the custom autograd function, and end up with an AOTAutograd compiled function which is identity on forwards and plus one on backwards.

(1) My goal wasn't really to get custom autograd functions working (and indeed, in the end-to-end Dynamo codepath I don't think there's enough wiring for this to work), but one side effect of doing the tracing all in one go is that custom autograd should just work... and it does!
(2) Yes.
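
A self-contained sketch of the scenario under discussion. CustomFn here is an assumed stand-in matching the description above (identity on forward, plus one on backward), and the functorch.compile entry points are used as they existed at the time:

```python
import torch
from functorch.compile import aot_function, nop

class CustomFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()   # identity on forward

    @staticmethod
    def backward(ctx, grad):
        return grad + 1    # plus one on backward

def f(x):
    return CustomFn.apply(x)

# AOTAutograd traces inside CustomFn, so its forward and backward ops land
# directly in the captured graphs; nop leaves those graphs uncompiled.
compiled_f = aot_function(f, fw_compiler=nop, bw_compiler=nop)
out = compiled_f(torch.randn(3, requires_grad=True))
out.sum().backward()
```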

def inner(*args, **kwargs):
    def to_fun(t):
        if isinstance(t, Tensor):
            r = torch._to_functional_tensor(t)
bdhirsh (Contributor) commented Nov 4, 2022:

I was still thinking about the fact that we're calling .detach() now in this PR where previously we didn't have to, and I just want to lay out my understanding.

Basically, if we do inp_new = inp_old.detach().requires_grad_(original_requires_grad), our new input can have different metadata than the original input, BUT this metadata only matters in practice w.r.t. mutations. Since we expect to be guaranteed a mutation-free graph, these metadata differences won't matter when we're tracing through autograd. In particular, the metadata fields that can differ are .is_leaf and ._is_view().

Contributor:

For sanity though, are there any tests that exercise aot autograd on inputs that require grad, but are not leaves? Seems worth adding.

ezyang (Contributor, Author):

Yes, the point is that it doesn't matter if you don't mutate inputs.

The non-leaf input tensor case is exercised by some of the end-to-end dynamo tests, though I don't see any direct tests for it with aot autograd. I think my preference is to add the torture tests when we actually support input mutation; that's when those non-leaf/leaf tests are actually interesting.
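
For completeness, a tiny illustration (assumed, not taken from the test suite) of the leaf-ness difference that detach() introduces and why it is benign without input mutation:

```python
import torch

leaf = torch.randn(3, requires_grad=True)
non_leaf = leaf * 2                        # requires_grad=True, is_leaf=False

# Roughly what the detach()/requires_grad_() dance does to an input:
detached = non_leaf.detach().requires_grad_(True)

assert non_leaf.requires_grad and not non_leaf.is_leaf
assert detached.requires_grad and detached.is_leaf   # metadata now differs
# As long as the traced graph never mutates its inputs, this leaf/view
# discrepancy never surfaces while tracing through autograd.
```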

bdhirsh (Contributor) left a review:

left a few comments but pre-stamping, generally lgtm


ezyang commented Nov 5, 2022

@pytorchbot merge

pytorchmergebot (Collaborator):

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

kulinseth pushed a commit to kulinseth/pytorch that referenced this pull request Nov 5, 2022
Pull Request resolved: pytorch#88063
Approved by: https://github.com/bdhirsh
ezyang added a commit that referenced this pull request Nov 7, 2022
Signed-off-by: Edward Z. Yang <[email protected]>
kulinseth pushed a commit to kulinseth/pytorch that referenced this pull request Dec 10, 2022
@facebook-github-bot facebook-github-bot deleted the gh/ezyang/1482/head branch June 8, 2023 16:27