Conversation
---
Thanks for your contribution! Could you sign the CLA following the instructions in the bot comment above, and fix the lint issues by running the linter?

---
Thanks for reviewing this, @justinchuby! I just signed the CLA and associated my last commits with the right email address. I have also passed the linter and fixed a problem with the unit tests.
justinchuby
left a comment
Looks like there are still errors in the CLA. Squashing the commits may help?

---
Apologies, I realized that the CLA should be signed as a "company contribution", since most of this work was done using my company's (Adobe) resources. This contribution was internally approved before I submitted this PR, so I might need a working day or two to have someone from the Open Source Office sign the CLA. Will report back asap.

---
Not sure if this is related to this PR or the underlying onnxruntime, but there seems to be an off-by-one error somewhere when using graph optimization with STFT.
The output shape matches, but the expected shape is different, so when we add nodes on top of that it causes shape mismatch errors.
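As context for shape mismatches like this: the STFT frame count depends on centering/padding, which is a classic place for off-by-one disagreements between implementations. A minimal plain-Python sketch (`stft_frames` is a hypothetical helper, following `torch.stft`'s documented behavior), not code from this PR:

```python
def stft_frames(n_samples: int, n_fft: int, hop: int, center: bool) -> int:
    # With center=True, torch.stft pads the signal by n_fft // 2 on both
    # sides before framing; forgetting this padding (or applying it twice)
    # is a common source of off-by-one frame-count mismatches.
    if center:
        n_samples = n_samples + 2 * (n_fft // 2)
    return 1 + (n_samples - n_fft) // hop

# Example: 16000 samples, n_fft=1024, hop=256
print(stft_frames(16000, 1024, 256, center=False))  # 59
print(stft_frames(16000, 1024, 256, center=True))   # 63
```

Two implementations that disagree on the padding convention will differ by a few frames, which then surfaces as a shape mismatch in downstream nodes.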
torch/onnx/symbolic_opset17.py
Outdated
`return_complex=False` is deprecated, so I don't like the idea of forcing people to use it. I see that complex64 and complex128 are mentioned as types in this document, so would it be possible to implement `view_as_real`/`view_as_complex` in ONNX?
I like the idea of having view_as_real and view_as_complex ops in onnx. Would you be willing to open an issue for a new operator? https://github.com/onnx/onnx/issues/new?assignees=&labels=operator&template=operator.md&title=
@peterbell10 I also don't like the idea of forcing `return_complex=False`, but it is an easy workaround if we don't want to wait for complex conversion support in ONNX, which seems like it might take quite some time (see below).
@justinchuby Seems like `view_as_complex` was already requested here: #49793. Unfortunately, there was not enough interest when it was reported. I just commented on the issue to try to bring it back.
Am I right in saying ONNX `Cast` doesn't support complex types? If it did, then you could do something like:
```python
def _as_real(z):
    return float(z), float(-1j * z)

def _as_complex(real, imag):
    return complex(real) + 1j * complex(imag)
```
That is correct: ONNX doesn't currently support casting for complex types (https://github.com/onnx/onnx/blob/main/docs/Operators.md#Cast).
If it did, we could simply use Cast on the result of STFT to make sure the result is returned as complex if return_complex=True.
That being said, would it be possible to take the output of STFT, put it in a PyTorch tensor, convert it to complex using the `_as_complex` function above, and then put it back into the ONNX graph?
Sorry, just realized that that's not gonna work, because Cast from/to complex is not supported, which is exactly what @peterbell10 said (😅).
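As a plain-Python sketch (not ONNX, and not this PR's code), the round-trip that `view_as_real`/`view_as_complex` (or a complex-capable `Cast`) would provide is just splitting a complex value into a (real, imag) pair and reassembling it:

```python
def as_real(z: complex) -> tuple:
    # Split a complex value into its (real, imaginary) pair, which is
    # the layout torch.stft uses when return_complex=False.
    return (z.real, z.imag)

def as_complex(pair: tuple) -> complex:
    # Reassemble the (real, imaginary) pair into a complex value.
    return complex(pair[0], pair[1])

z = 3.0 - 4.0j
print(as_real(z))                    # (3.0, -4.0)
print(as_complex(as_real(z)) == z)   # True
```

Since the round-trip is lossless, an exporter that emits the real-pair layout loses no information; the missing piece is only the graph-level operator that reinterprets the pair as a complex tensor.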
Yeah, otherwise there will be a graph break in onnx.
@milesial I noticed that as well. The fact that it actually returns the correct shape in all the tests I wrote makes me think it might be a bug in the onnxruntime API (or the ONNX STFT definition?). I couldn't find an existing issue for it. I was thinking of reporting it once this PR is merged, so that it might be easier to reproduce. Or if you want to report it now, by all means do :)

---
Got it, good you caught it too. I opened microsoft/onnxruntime#14316, since I don't know if this PR is going to be merged soon.

---
The CLA is finally signed; apologies it took so long. I'll work on the comments and feedback asap.

---
I noticed the CLA bot is still reporting errors. Perhaps squash or rebase some commits?

---
There are some build errors that I believe happened because this branch is too old. Could you rebase onto master?
justinchuby
left a comment
Thanks so much for creating this!
---
@pytorchbot merge -g

---
@pytorchbot merge -g

---
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

---
@pytorchbot merge -f "unrelated cuda and other failures"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
This PR addresses issue [#81075](pytorch/pytorch#81075), making `torch.stft` compatible with ONNX Opset 17's STFT operator.

The conversion works for _most_ of `torch.stft`'s functionality:

- Batched or unbatched inputs
- Normalization
- Pre-computed windows
- Rectangular windows
- One-sided returns
- Window centering (implicitly supported)

What is currently _not_ supported is **complex types**, due to the lack of conversion functionality between PyTorch and ONNX (pytorch/pytorch#86746). Regardless, this is easy to bypass by setting `return_complex=False` when using `torch.stft`.

Note that there is already a draft PR to address this (pytorch/pytorch#83944), but it is currently closed and only partially addresses the conversion (i.e., most of `torch.stft`'s functionality is lacking, and unit tests are missing).

Pull Request resolved: pytorch/pytorch#92087
Approved by: https://github.com/justinchuby
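To illustrate the layout that `return_complex=False` produces (real and imaginary parts stacked in a trailing dimension of size 2), here is a dependency-free sketch of a single DFT frame returned as (real, imag) pairs. This is illustrative only; the helper name `dft_frame_as_real` is made up and this is not the exporter's actual code:

```python
import cmath

def dft_frame_as_real(frame):
    # Naive DFT of one frame; each bin is returned as a (real, imag) pair,
    # mirroring the trailing dimension of 2 that torch.stft returns when
    # return_complex=False.
    n = len(frame)
    out = []
    for k in range(n):
        acc = sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        out.append((acc.real, acc.imag))
    return out

# A unit impulse has a flat spectrum: every bin is (1.0, 0.0).
bins = dft_frame_as_real([1.0, 0.0, 0.0, 0.0])
print(bins[0])
```

Stacking such per-frame results gives the `(..., n_freq, n_frames, 2)`-style real tensor that can flow through an ONNX graph without complex dtypes.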
---
I am trying to export a PyTorch model with STFT into ONNX, but it seems like the main torch branch does not support this (any more? yet?). Can anyone explain what happened to this PR? I basically had to re-apply this PR as a patch to make the export work again. /cc @justinchuby

---
It should be merged properly? Was the code gone? |
---
This PR should be able to export ONNX models with STFT-related methods like the following:

```python
import torch
import torchaudio

ONNX_MODEL = "out_melspec.onnx"
N_SAMPLES = 16000
N_FFT = 1024
OPSET_VERSION = 17


class CustomMelSpec(torch.nn.Module):
    def forward(self, x):
        return torchaudio.transforms.MelSpectrogram(n_fft=N_FFT)(x)


print("PyTorch Version:", torch.__version__)
print("Torchaudio Version:", torchaudio.__version__)

x = torch.randn(1, 1, N_SAMPLES)
with open(ONNX_MODEL, "wb") as f:
    torch.onnx.export(CustomMelSpec(), (x,), f, opset_version=OPSET_VERSION)
```

But running this script returns the following error (see the pytorch and torchaudio versions used below). After applying this PR as a patch again (to torch v1.13.0), the code runs fine and I can successfully export the model. So I have no idea what happened to the merged code 🤔 Maybe the way to export ONNX models has changed?

---
Any updates on this, @justinchuby?

---
I see the same code from this PR in the current codebase. I am wondering why reapplying it as a patch would change anything 🤔

---
Yeah, I have no idea why this is not working right off the bat with the latest torch version (or basically any version after this patch was merged). For whatever reason, the STFT models are not converted to ONNX any more :/

---
The error message is saying the inputs cannot be complex; we can only run them in real-valued mode. Was this what the model was doing?

---
After some more digging, it turns out the torch code is totally fine; the issue was with torchaudio! Everything works after applying this patch to the latest version of torchaudio (basically forcing the torchaudio spectrogram to use float values instead of complex types):

```diff
diff --git a/src/torchaudio/functional/functional.py b/src/torchaudio/functional/functional.py
index af34e707..4eb842c7 100644
--- a/src/torchaudio/functional/functional.py
+++ b/src/torchaudio/functional/functional.py
@@ -133,9 +134,12 @@ def spectrogram(
         pad_mode=pad_mode,
         normalized=frame_length_norm,
         onesided=onesided,
-        return_complex=True,
+        return_complex=False,
     )

+    # From imaginary and real values to absolute value
+    spec_f = torch.sqrt(torch.pow(spec_f[:, :, :, 0], 2.0) + torch.pow(spec_f[:, :, :, 1], 2.0))
+
     # unpack batch
     spec_f = spec_f.reshape(shape[:-1] + spec_f.shape[-2:])
```

Apologies for the confusion, and thanks for helping me figure this out! Note: the patch above will make non-power spectrograms (i.e.
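For reference, the line the patch adds computes the magnitude from the (real, imag) pair. A tiny plain-Python check (with a hypothetical `magnitude` helper, not torchaudio code) that this matches `abs` of the complex value:

```python
import math

def magnitude(real: float, imag: float) -> float:
    # sqrt(re^2 + im^2): the same computation the torchaudio patch adds
    # on the trailing (real, imag) dimension of the real-valued STFT.
    return math.sqrt(real ** 2 + imag ** 2)

z = 3.0 - 4.0j
print(magnitude(z.real, z.imag))  # 5.0
print(abs(z))                     # 5.0
```

So replacing the complex spectrogram with `sqrt(re^2 + im^2)` reproduces the magnitude exactly; only the phase information is discarded.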
