Validate sparse tensors constructed via legacy constructor #147408
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/147408
Note: Links to docs will display an error until the docs builds have been completed.
⏳ No Failures, 2 Pending as of commit a666d8b with merge base b10ba0a.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
The provided exploit now errors with `RuntimeError: Storage size calculation overflowed with sizes=[2] and strides=[1]` during `torch.load`.
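The guard behind that message can be sketched in pure Python (a minimal sketch; `compute_storage_size` is a hypothetical helper, not PyTorch's actual C++ implementation, and the real check also accounts for terms such as storage offset and element size, which is why even small sizes/strides can trigger it):

```python
def compute_storage_size(sizes, strides):
    """Required storage size (in elements) for a strided view:
    1 + sum((size - 1) * stride), guarded against int64 overflow."""
    INT64_MAX = 2**63 - 1
    total = 1
    for size, stride in zip(sizes, strides):
        if size == 0:
            return 0  # empty tensor needs no storage
        step = (size - 1) * stride
        if step > INT64_MAX - total:
            # Mirrors the error raised during torch.load validation.
            raise RuntimeError(
                f"Storage size calculation overflowed with "
                f"sizes={list(sizes)} and strides={list(strides)}"
            )
        total += step
    return total
```

A payload with an absurd stride now fails loudly instead of silently wrapping around and under-allocating storage.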
The provided exploit now errors with `RuntimeError: size is inconsistent with indices: for dim 0, size is 1 but found index 4702111234474983745` during `torch.load`.
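Semantically, the invariant that produces this error can be sketched as follows (`check_sparse_indices` is a hypothetical pure-Python helper mirroring the check; PyTorch performs it in C++ during sparse tensor validation):

```python
def check_sparse_indices(indices, sizes):
    """Every COO index along dimension d must satisfy 0 <= idx < sizes[d]."""
    for dim, (row, size) in enumerate(zip(indices, sizes)):
        for idx in row:
            if idx < 0 or idx >= size:
                raise RuntimeError(
                    f"size is inconsistent with indices: for dim {dim}, "
                    f"size is {size} but found index {idx}"
                )

# In-bounds indices pass silently:
check_sparse_indices([[0], [0]], [1, 1])
```

The exploit's index tensor `[[4702111234474983745], [0]]` against a size-1 dimension fails this check, producing the message above.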
albanD
left a comment
Thanks!
        "size is inconsistent with indices: for dim 0, size is 1 but found index 4702111234474983745"
    ):
        x = torch.sparse.FloatTensor(
            torch.tensor([[4702111234474983745], [0]]),
nit: you can use 3 as the index here to avoid the weirdly large value. It should fail the same way!
We actually can't do this due to the reason I explained re `legacy_load`; closing this PR and will reopen another one.
…constructor to `_sparse_tensors_to_validate` (#147759)

This is a redo of #147408, which added validation at the end of the legacy constructor calls. The reason I didn't land that was that in `legacy_load`, the constructor would be called before the storages of indices/values are set, so the tensor would not actually be validated.

Technically, `torch.sparse.{Foo}Tensor` should not even be called by our rebuild process: afaict #27062 was the first PR that added support for sparse tensor serialization, and it already uses `_rebuild_sparse_tensor` (which adds the rebuilt tensor to the list to validate). However, `torch.sparse.FooTensor` is allowlisted, so this PR adds tensors constructed that way to the list validated at the end of `torch.load`.

Pull Request resolved: #147759
Approved by: https://github.com/albanD
EDIT: this is not an encompassing fix because of `legacy_load`; will redo.

The provided exploit now errors with
`RuntimeError: size is inconsistent with indices: for dim 0, size is 1 but found index 4702111234474983745`
during `torch.load`.