
Conversation

Contributor

@soulitzer soulitzer commented Dec 28, 2023

Stack from ghstack (oldest at bottom):

Instead of printing the tensor's data, print the dtype and shape metadata of the tensor:

```
Executing: <VarMeanBackward0 object at 0x1352d0e20> with grad_outputs: [None,f32[]]
```

This is important for avoiding a CUDA sync, and it also helps reduce verbosity.
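
As a rough illustration of the idea (a minimal sketch, not the PR's actual logging code; `fmt` is a hypothetical helper), grad_output metadata can be logged from a backward-node prehook without ever reading tensor data:

```
import torch

# Hypothetical helper: describe a tensor by dtype and shape only, so the log
# line never reads tensor data (and therefore never forces a CUDA sync).
def fmt(t):
    if t is None:
        return "None"
    return f"{t.dtype}[{', '.join(map(str, t.shape))}]"

x = torch.randn(4, requires_grad=True)
y = (x * 2).sum()

node = y.grad_fn  # SumBackward0
node.register_prehook(
    lambda grad_outputs: print(
        f"Executing: {node} with grad_outputs: "
        f"[{', '.join(fmt(g) for g in grad_outputs)}]"
    )
)
y.backward()
# Prints something like:
# Executing: <SumBackward0 object at 0x...> with grad_outputs: [torch.float32[]]
```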

@soulitzer soulitzer requested a review from albanD as a code owner December 28, 2023 23:58

pytorch-bot bot commented Dec 28, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/116523

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 5257e6b with merge base e5f2ac1:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

soulitzer added a commit that referenced this pull request Dec 28, 2023
Collaborator

@albanD albanD left a comment

SGTM

```
 def prehook(grad_output):
     node = torch._C._current_autograd_node()
-    log_str = f"Executing: {node} with grad_output: {grad_output}"
+    log_str = f"Executing: {node}"
```
Collaborator

If this is too verbose, feel free to only print size to share some information.

soulitzer added a commit that referenced this pull request Jan 2, 2024

```
if t is None:
    return "None"
return f"{dtype_abbrs[t.dtype]}[{', '.join(map(str, t.shape))}]"
```
Collaborator

Is that a usual format used in other places?

Contributor Author

Yeah, fx and logging tensor also use this format
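
For illustration, here is a minimal self-contained sketch of that `f32[...]`-style formatter; the abbreviation table and the `tensor_meta_str` name below are assumptions (PyTorch's real `dtype_abbrs` mapping covers all dtypes):

```
import torch

# Assumed subset of the dtype abbreviation table used for the f32[...] format.
dtype_abbrs = {
    torch.float32: "f32",
    torch.float64: "f64",
    torch.float16: "f16",
    torch.int64: "i64",
    torch.bool: "b8",
}

def tensor_meta_str(t):
    # Format a tensor as "<dtype abbrev>[dim0, dim1, ...]" without touching its data.
    if t is None:
        return "None"
    return f"{dtype_abbrs.get(t.dtype, str(t.dtype))}[{', '.join(map(str, t.shape))}]"

print(tensor_meta_str(torch.zeros(3, 4)))  # f32[3, 4]
print(tensor_meta_str(torch.zeros(())))    # f32[] (a scalar tensor)
print(tensor_meta_str(None))               # None
```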

Collaborator

@albanD albanD left a comment

Change sounds good but needs rebase!

…output"


Instead of printing the tensor's data, print the dtype and shape metadata of the tensor.
```
Executing: <VarMeanBackward0 object at 0x1352d0e20> with grad_outputs: [None,f32[]]
```
This is important for avoiding a CUDA sync, and it also helps reduce verbosity.


[ghstack-poisoned]
…output"


Instead of printing the tensor's data print the dtype and shape metadata of the tensor.
```
Executing: <VarMeanBackward0 object at 0x1352d0e20> with grad_outputs: [None,f32[]]
```
This is important in order to avoid doing a cuda sync and also useful to reduce verbosity.


[ghstack-poisoned]
@soulitzer soulitzer added the release notes: autograd release notes category label Jan 5, 2024
…output"


Instead of printing the tensor's data print the dtype and shape metadata of the tensor.
```
Executing: <VarMeanBackward0 object at 0x1352d0e20> with grad_outputs: [None,f32[]]
```
This is important in order to avoid doing a cuda sync and also useful to reduce verbosity.


[ghstack-poisoned]
@soulitzer
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Jan 9, 2024
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

xadupre pushed a commit to xadupre/pytorch that referenced this pull request Jan 10, 2024
…orch#116523)

Instead of printing the tensor's data, print the dtype and shape metadata of the tensor.
```
Executing: <VarMeanBackward0 object at 0x1352d0e20> with grad_outputs: [None,f32[]]
```
This is important for avoiding a CUDA sync, and it also helps reduce verbosity.

Pull Request resolved: pytorch#116523
Approved by: https://github.com/albanD
xadupre pushed a commit to xadupre/pytorch that referenced this pull request Jan 10, 2024
@facebook-github-bot facebook-github-bot deleted the gh/soulitzer/262/head branch January 13, 2024 15:23