
Conversation

@jbschlosser
Contributor

@jbschlosser jbschlosser commented Oct 29, 2024

Stack from ghstack (oldest at bottom):

Allows for:

```
# shape (B, j1, D)
njt1 = ...

# shape (B, j2, D)
njt2 = ...

# njt1's shape: (B, j1, D)
njt2.view_as(njt1)
```

so NJTs with different nested ints but the same ragged structure can be operated on together in binary ops, cat, etc.
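The intent can be sketched with plain Python lists standing in for NJTs (a minimal stand-in, not the real torch.nested API; the `Ragged` class and its fields here are hypothetical):

```python
# Minimal stand-in for the "same ragged structure" idea: a ragged batch is
# modeled as flat values plus offsets marking where each batch element starts.
class Ragged:
    def __init__(self, values, offsets):
        self.values = values    # flattened data across all batch elements
        self.offsets = offsets  # length B+1; row i spans offsets[i]:offsets[i+1]

    def view_as(self, other):
        # Viewable only when the ragged structure (offsets) matches exactly,
        # mirroring the offsets comparison this PR performs for NJTs.
        if self.offsets != other.offsets:
            raise RuntimeError("cannot view: ragged structures differ")
        return Ragged(self.values, other.offsets)

njt1 = Ragged([1, 2, 3, 4, 5], [0, 2, 5])  # rows of lengths 2 and 3
njt2 = Ragged([9, 8, 7, 6, 5], [0, 2, 5])  # same ragged structure
viewed = njt2.view_as(njt1)                # succeeds: offsets match
```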

@pytorch-bot

pytorch-bot bot commented Oct 29, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/139196

Note: Links to docs will display an error until the docs builds have been completed.

❌ 13 New Failures

As of commit 567c5f4 with merge base 03ec250:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@jbschlosser jbschlosser added the topic: not user facing topic category label Oct 29, 2024
@github-actions
Contributor

Attention! native_functions.yaml was changed

If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs, one which adds the new C++ functionality, and one that makes use of it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info.


jbschlosser added a commit that referenced this pull request Oct 29, 2024
ghstack-source-id: e07772c
Pull Request resolved: #139196
jbschlosser added a commit that referenced this pull request Oct 30, 2024
ghstack-source-id: 99a0515
Pull Request resolved: #139196
jbschlosser added a commit that referenced this pull request Oct 31, 2024
ghstack-source-id: 68266b0
Pull Request resolved: #139196
jbschlosser added a commit that referenced this pull request Oct 31, 2024
ghstack-source-id: 3ebc4fb
Pull Request resolved: #139196
@github-actions
Contributor

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Dec 30, 2024

```
- name: view_as(Tensor(a) self, Tensor other) -> Tensor(a)
  dispatch:
    Default:
```
Collaborator:
I don't think you need this one at all ;)

```
      self: not_implemented("view_as")
      other: non_differentiable
    AutogradNestedTensor:
      self: grad.view_as(self)
```
Collaborator:

This will save the full self? You most likely want only .size() or some other lightweight thing here?

Contributor Author:

Yep, that's right, sadly. This PR is an inefficient workaround for our current lack of factory function support for shapes that have nested ints. I'm not sure of another way to address this without that support.

```
# verify input is viewable as other's shape
if inp._ragged_idx != other._ragged_idx:
    raise RuntimeError(error_message)
torch._assert_async(torch.all(inp._offsets == other._offsets), error_message)
```
Contributor Author:
Should compare CPU offsets if they're available.
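A hedged sketch of that follow-up, with plain Python attributes in place of tensors (the `_offsets_cpu` cache attribute is hypothetical, not actual PyTorch API):

```python
from types import SimpleNamespace

def offsets_for_comparison(njt):
    # Prefer a host-side (CPU) copy of the offsets when one is cached, so the
    # equality check need not synchronize with the device.
    cached = getattr(njt, "_offsets_cpu", None)  # hypothetical cached copy
    return cached if cached is not None else njt._offsets

with_cache = SimpleNamespace(_offsets=[0, 2, 5], _offsets_cpu=[0, 2, 5])
without_cache = SimpleNamespace(_offsets=[0, 2, 5])
```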

@github-actions github-actions bot closed this Feb 23, 2025
@github-actions github-actions bot deleted the gh/jbschlosser/195/head branch March 27, 2025 02:10