
Make TRITON_INTERPRET=1 work with inductor generated kernels #123956


Description

@eellison

🚀 The feature, motivation and pitch

If you try to run with TRITON_INTERPRET=1 on inductor-generated kernels, you'll get an exception:

  File "/data/users/eellison/pytorch/torch/_inductor/triton_heuristics.py", line 365, in _precompile_config
    binary = triton.compile(*compile_args, **compile_kwargs)
  File "/home/eellison/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/triton/compiler/compiler.py", line 231, in compile
    key = f"{triton_key()}-{src.hash()}-{backend.hash()}-{options.hash()}-{str(sorted(get_env_vars().items()))}"
  File "/home/eellison/.conda/envs/pytorch-3.10/lib/python3.10/site-packages/triton/compiler/compiler.py", line 106, in hash
    key = f"{self.fn.cache_key}-{self.attrs.hash()}-{sorted_sig}-{sorted_constants}"

It would be nice to add compatibility so that TRITON_INTERPRET=1 can be used to debug inductor-generated kernels.
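
For context, here is a minimal sketch of the kind of repro that hits this path. The specific function is illustrative (any op that inductor lowers to a Triton kernel on the GPU should do), and it assumes a CUDA build with the environment variable set before Triton gets imported:

```python
# Hypothetical minimal repro: run an inductor-compiled function with the
# Triton interpreter enabled. The function below is illustrative; anything
# that inductor lowers to a Triton kernel on GPU should exercise the same path.
import os

# Triton reads this at import/compile time, so set it before importing torch.
os.environ["TRITON_INTERPRET"] = "1"

import torch


@torch.compile  # inductor generates Triton kernels for this on CUDA
def f(x):
    return torch.relu(x) + 1


# Triggers triton.compile on the generated kernel, which currently raises
# during hashing as shown in the traceback above.
f(torch.randn(1024, device="cuda"))
```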

Alternatives

No response

Additional context

No response

cc @ezyang @msaroufim @bdhirsh @anijain2305 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @amjames @desertfire

Metadata

Labels

feature, module: inductor, oncall: pt2, triaged
