
Conversation


WarBean commented Apr 27, 2017

No description provided.

apaszke (Contributor) left a comment

Hey, sorry for the late review!

Can you please add some tests for these functions?

for i, length in enumerate(lengths):
    index = sum([min(length - 1, l) for l in lengths]) + i
    indices.append(index)
return indices

This comment was marked as off-topic.

def get_last_step_tensor(packed_sequence, lengths):
    """Extract the last step of each sequence in a :class:`PackedSequence` object.
    This is useful for an RNN's output.

This comment was marked as off-topic.
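
For context, a rough sketch of how the two excerpts above could fit together; this is not the PR's actual code, and it assumes the sequences are sorted by decreasing length, as pack_padded_sequence requires. The index computation here is quadratic in the number of sequences, which is what later comments improve to O(n).

import torch

def get_last_step_tensor(packed_sequence, lengths):
    # In packed data, step t stores one row per sequence still active at step t,
    # ordered by batch index. The last row of sequence i is therefore preceded by
    # min(length_i - 1, length_j) rows from every sequence j, plus the i rows
    # before it within its own time step.
    indices = []
    for i, length in enumerate(lengths):
        indices.append(sum(min(length - 1, l) for l in lengths) + i)
    index_tensor = torch.tensor(indices, dtype=torch.long,
                                device=packed_sequence.data.device)
    return packed_sequence.data.index_select(0, index_tensor)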


nrbrd commented Jul 18, 2017

Hi @WarBean, any update on this? Can I continue your PR?

@reiinakano

Hi @WarBean and @apaszke, I don't know how things work in this situation, but would it be okay if someone else continued this PR? Can I just create my own based off of this? This is a very useful function that I'd like to see added to PyTorch soon.

apaszke (Contributor) commented Aug 16, 2017

I'll accept updated versions of this PR too. Just start a new branch with this commit and put your updates in a new one.

WarBean (Author) commented Aug 16, 2017

Sorry for the delay. I would like to continue the PR, but I am not familiar with GitHub's PR workflow. Should I close this thread and open a new one with the amended commit based on the master branch?

apaszke (Contributor) commented Aug 16, 2017

Just push a new commit, or force-push an amended version of this commit to your branch. GitHub will see this and automatically update this PR with the new version. No need to close anything.

@reiinakano

Hi, this is just a suggestion but isn't the lengths parameter unnecessary? The PackedSequence object already stores the lengths. Why not just use that?

Also, you could define get_last_step_tensor as an instance method of PackedSequence. Personally, I think that makes more sense.

What do you think?

WarBean (Author) commented Aug 18, 2017

@reiinakano Excellent suggestion! Using the batch sizes stored inside PackedSequence, I can easily compute the indices in O(n) time.
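
A rough sketch of that O(n) idea, assuming the per-step batch sizes are available as a sequence of ints (batch_sizes in current PyTorch); this is not the PR's actual code:

def last_step_indices(batch_sizes):
    # batch_sizes[t] is the number of sequences still active at time step t.
    # Walking the steps once, we track where each step's block starts (offset);
    # sequence i ends at step t exactly when it is active at t but not at t + 1.
    batch_sizes = [int(b) for b in batch_sizes]
    indices = [0] * batch_sizes[0]
    offset = 0
    for t, size in enumerate(batch_sizes):
        next_size = batch_sizes[t + 1] if t + 1 < len(batch_sizes) else 0
        for i in range(next_size, size):
            indices[i] = offset + i  # row i of this step's block is sequence i's last
        offset += size
    return indices

The last steps themselves would then be packed.data[indices], or an equivalent index_select.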

WarBean (Author) commented Aug 18, 2017

I don't know how to add tests inside the project, but I have written an extra test script:

import torch
from rnn import *  # the rnn utilities modified in this PR
import numpy as np
from torch.autograd import Variable

# random lengths, sorted in decreasing order as pack_padded_sequence requires
lengths = np.sort(np.random.randint(1, 100, 128)).tolist()[::-1]
a = Variable(torch.zeros(max(lengths), len(lengths), 13))
for i, length in enumerate(lengths):
    a.data[length - 1, i] = 1  # mark the last valid step of each sequence with ones
sequence = pack_padded_sequence(a, lengths)
last_step = sequence.last_step_tensor()
print(last_step)
assert torch.sum(last_step.data - 1) == 0  # every extracted entry should be 1

Output:

Variable containing:
    1     1     1  ...      1     1     1
    1     1     1  ...      1     1     1
    1     1     1  ...      1     1     1
       ...          ⋱          ...
    1     1     1  ...      1     1     1
    1     1     1  ...      1     1     1
    1     1     1  ...      1     1     1
[torch.FloatTensor of size 128x13]

Nothing seems wrong.


reiinakano commented Aug 18, 2017

@WarBean This is the unit test for pack_padded_sequence https://github.com/pytorch/pytorch/blob/master/test/test_nn.py#L1769. The unit tests are run every time someone makes a change to the codebase.

I believe you should add a unit test for each new function you introduce, to make sure its behavior stays consistent with every new commit.
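
For concreteness, a minimal self-contained sketch of such a test, using plain unittest rather than test_nn.py's harness; last_step_tensor is the method proposed in this PR, so this only runs against the PR branch:

import unittest

import torch
from torch.nn.utils.rnn import pack_padded_sequence

class TestLastStepTensor(unittest.TestCase):
    def test_matches_padded_input(self):
        lengths = [5, 3, 3, 1]  # sorted in decreasing order
        padded = torch.randn(max(lengths), len(lengths), 7)
        packed = pack_padded_sequence(padded, lengths)
        # Expected: the last valid time step of every sequence in the padded input.
        expected = torch.stack(
            [padded[length - 1, i] for i, length in enumerate(lengths)])
        # last_step_tensor() is the method this PR proposes, not an existing API.
        self.assertTrue(torch.equal(packed.last_step_tensor(), expected))

if __name__ == "__main__":
    unittest.main()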

WarBean (Author) commented Aug 18, 2017

Any suggestions on how to pass the checks? I'm not sure what "no environment variables set" means in Travis CI.

apaszke (Contributor) commented Aug 19, 2017

I can't see the error you mentioned. The tests failed because of a bug somewhere in this PR.

test/test_nn.py Outdated
expected_data = list(itertools.chain.from_iterable(expected_data))
expected_data = torch.stack(expected_data, dim=0)

expected_last_step_data = torch.stack([padded.data[length - 1, i] for i, length in enumerate(lenghts)])

This comment was marked as off-topic.

@ccarter-cs

@WarBean You have transposed characters in your test: "lenghts". If you click through to the details of the Travis build and then into one of the individual builds (there are separate ones for different Python versions, etc.), you will see a log of the test output.

@keishinkickback

@aron-bordin @reiinakano @WarBean @apaszke Have any of you added this functionality elsewhere?

apaszke (Contributor) commented Dec 30, 2017

I don't think so

jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Jan 26, 2022
github-actions bot commented Mar 1, 2022

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
Stale pull requests will automatically be closed 30 days after being marked Stale.

github-actions bot added the Stale label Mar 1, 2022
github-actions bot removed the Stale label Mar 23, 2022
@facebook-github-bot

Hi @WarBean!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@github-actions

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

github-actions bot added the Stale label May 28, 2022
github-actions bot closed this Jun 27, 2022
petrex pushed a commit to petrex/pytorch that referenced this pull request Sep 23, 2024
jagadish-amd pushed a commit to jagadish-amd/pytorch that referenced this pull request Jan 29, 2025
jagadish-amd pushed a commit to jagadish-amd/pytorch that referenced this pull request May 15, 2025