add functions in nn.utils.rnn to get the last step of a PackedSequence object #1375
Conversation
apaszke left a comment
Hey, sorry for the late review!
Can you please add some tests for these functions?
torch/nn/utils/rnn.py (outdated)

    for i, length in enumerate(lengths):
        index = sum([min(length - 1, l) for l in lengths]) + i
        indices.append(index)
    return indices
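The index arithmetic in the snippet above can be checked without PyTorch. A PackedSequence stores the batch time-step-major, with sequences sorted by decreasing length, so at time t one row is emitted per sequence still alive (length > t). Sequence i's last step (time length - 1) therefore sits at the number of rows emitted at earlier steps, sum over j of min(length_i - 1, length_j), plus i for its position within its own step. A minimal pure-Python sketch (the lengths and (sequence, time) tuples are illustrative, not taken from the PR):

```python
def last_step_indices(lengths):
    # lengths must be sorted in decreasing order, as in a PackedSequence.
    indices = []
    for i, length in enumerate(lengths):
        # Rows emitted at time steps 0 .. length-2, plus i rows that precede
        # sequence i within its final time step.
        index = sum(min(length - 1, l) for l in lengths) + i
        indices.append(index)
    return indices

# Build the packed layout by hand: at time t, one row per sequence
# that is still alive (length > t), in batch order.
lengths = [4, 3, 1]
packed = [(i, t) for t in range(max(lengths))
          for i, l in enumerate(lengths) if l > t]

last = [packed[idx] for idx in last_step_indices(lengths)]
print(last)  # [(0, 3), (1, 2), (2, 0)]
```

Each recovered tuple is (sequence index, final time step), confirming the formula picks out exactly the last emitted row of every sequence.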
torch/nn/utils/rnn.py (outdated)

    def get_last_step_tensor(packed_sequence, lengths):
        """Extract the last step of each sequence of a :class:`PackedSequence` object.

        Useful for an RNN's output.
Hi @WarBean, any update on this? Can I continue your PR?
I'll accept updated versions of this PR too. Just start a new branch with this commit and put your updates in a new one.
Sorry for the delay. I would like to continue the PR, but I am not familiar with GitHub's PR workflow. Should I close this thread and open a new one with the amended commit based on the master branch?
Just push a new commit, or force-push an amended version of this commit to your branch. GitHub will see this and automatically update this PR with a new version. No need to close anything.
Hi, this is just a suggestion, but isn't the … Also, you could define … What do you think?
@reiinakano Excellent suggestion! Using the …
I don't know how to add tests inside the project, but I have written an extra test script. Output: … Nothing seems wrong.
@WarBean This is the unit test for … I believe you should write a unit test for each new function you introduce, to make sure its behavior stays consistent with every new commit.
Any suggestion on how to pass the checks? I'm not sure what "no environment variables set" means in Travis CI.
I can't see the error you mentioned. The tests failed because of a bug somewhere in this PR.
test/test_nn.py (outdated)

    expected_data = list(itertools.chain.from_iterable(expected_data))
    expected_data = torch.stack(expected_data, dim=0)

    expected_last_step_data = torch.stack([padded.data[length - 1, i] for i, length in enumerate(lenghts)])
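The test snippet compares the packed extraction against indexing the padded data at [length - 1, i] for each sequence. That equivalence can be sketched without torch, using nested lists in place of tensors (the lengths and values below are illustrative, not from the PR):

```python
lengths = [4, 3, 1]          # sorted descending, as pack_padded_sequence requires
max_len = max(lengths)

# padded[t][i] holds sequence i's value at time t (None where padding would be).
padded = [[(i, t) if lengths[i] > t else None for i in range(len(lengths))]
          for t in range(max_len)]

# Packed layout: concatenate the non-padding rows of each time step.
packed = [row for step in padded for row in step if row is not None]

# The packed-index formula must land on the same element as padded[length-1][i].
for i, length in enumerate(lengths):
    idx = sum(min(length - 1, l) for l in lengths) + i
    assert packed[idx] == padded[length - 1][i]
print("packed and padded last-step extraction agree")
```

This is the same invariant the unit test asserts with real tensors: stacking the packed last-step rows must reproduce the rows read directly from the padded array.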
@WarBean You have a transposed character in your test: "lenghts". If you click on the details of the Travis build and then on one of the jobs (there are separate ones for different Python versions, etc.), you will see a log of the test output.
@aron-bordin @reiinakano @WarBean @apaszke Have any of you added this functionality elsewhere?
I don't think so.
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as stale.
Hi @WarBean! Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged accordingly.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!