Conversation

@anijain2305 (Contributor) commented Apr 1, 2025

@pytorch-bot bot commented Apr 1, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/150450

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (3 Unrelated Failures)

As of commit 513c1a2 with merge base 15dbad2 (image):

BROKEN TRUNK - The following jobs failed but were also failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

UNSTABLE - The following jobs are marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

```python
# AOTDispatcher first pass does not run make_fx on
# dynamo graphs. As a result, it can have non-OpOverload
# ops.
if not isinstance(op, torch._ops.OpOverload):
```
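The check in the diff above can be illustrated with a stdlib-only sketch. The `Node` and `OpOverload` classes below are hypothetical stand-ins, not the real `torch.fx` or `torch._ops` API: a validation pass walks the graph and rejects any `call_function` target that is a plain Python operator rather than an ATen `OpOverload` instance.

```python
import operator


class Node:
    """Minimal stand-in for a torch.fx Node (hypothetical, for illustration)."""
    def __init__(self, op, target):
        self.op = op          # node kind, e.g. "call_function" or "placeholder"
        self.target = target  # the callable (or name) the node invokes


class OpOverload:
    """Stand-in for torch._ops.OpOverload; real ATen ops would be instances."""
    def __init__(self, name):
        self.name = name


def validate_cache_key(nodes):
    """Collect call_function targets, raising on plain Python operators
    such as operator.add -- mirroring the isinstance check in the diff."""
    safe = []
    for node in nodes:
        if node.op != "call_function":
            continue
        if not isinstance(node.target, OpOverload):
            raise RuntimeError(f"non-OpOverload op in subgraph: {node.target!r}")
        safe.append(node.target)
    return safe


aten_add = OpOverload("aten.add.Tensor")
good = [Node("placeholder", "x"), Node("call_function", aten_add)]
bad = [Node("call_function", operator.add)]

print([t.name for t in validate_cache_key(good)])  # ['aten.add.Tensor']
```

Calling `validate_cache_key(bad)` raises, because `operator.add` is a bare Python callable, not an `OpOverload` stand-in.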
Contributor
This looks fine to me, but I'm not sure whether it causes any other problems, so I'll leave it to Richard for the stamp :3

@zou3519 (Contributor) left a comment

Seems better than the previous state. But I think we should be tracing through the inside of the HOP subgraph so that the operator.add call gets desugared into an aten.add call, instead of manually iterating through the nodes and calling _validate_cache_key on each node. Does that make sense?
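The desugaring described here (re-tracing the subgraph so a Python-level `operator.add` becomes an ATen-level call) can be sketched without torch as a rewrite pass over stand-in nodes. The `DESUGAR` table and `Node` class below are hypothetical illustrations; in PyTorch the substitution happens implicitly when `make_fx` re-traces the subgraph with tensor inputs rather than via an explicit table.

```python
import operator

# Hypothetical mapping from Python operators to ATen-style names;
# real tracing produces this effect by dispatching on tensor inputs.
DESUGAR = {
    operator.add: "aten.add.Tensor",
    operator.mul: "aten.mul.Tensor",
}


class Node:
    """Minimal stand-in for a torch.fx Node (illustrative only)."""
    def __init__(self, op, target):
        self.op, self.target = op, target


def desugar_subgraph(nodes):
    """Rewrite call_function nodes whose target is a plain Python
    operator into ATen-named equivalents, leaving other nodes alone --
    the effect that re-tracing the subgraph would have."""
    out = []
    for node in nodes:
        if node.op == "call_function" and node.target in DESUGAR:
            out.append(Node("call_function", DESUGAR[node.target]))
        else:
            out.append(node)
    return out


graph = [Node("placeholder", "x"), Node("call_function", operator.add)]
print([n.target for n in desugar_subgraph(graph)])  # ['x', 'aten.add.Tensor']
```

After such a pass, a per-node `_validate_cache_key` check would never see a bare Python operator, which is the point of the suggestion.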

@anijain2305 (Contributor, Author)

> Seems better than the previous state. But I think we should be tracing through the inside of the HOP subgraph so that the operator.add call gets desugared into an aten.add call, instead of manually iterating through the nodes and calling _validate_cache_key on each node. Does that make sense?

I plan to work on this. But until then, it would be good to land this to keep trunk healthy, since Lazos, Angela, and I have been working with invoke_subgraph recently.

@zou3519 (Contributor) left a comment

sgtm

@anijain2305 anijain2305 added the ciflow/trunk Trigger trunk jobs on your pull request label Apr 1, 2025
@anijain2305 (Contributor, Author)

@pytorchbot merge

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Apr 2, 2025
amathewc pushed a commit to amathewc/pytorch that referenced this pull request Apr 17, 2025
Divigroup-RAP pushed a commit to Divigroup-RAP/PYTORCH that referenced this pull request Apr 22, 2025

Labels

ciflow/inductor, ciflow/trunk (Trigger trunk jobs on your pull request), Merged, topic: not user facing


4 participants