Conversation

@rraminen
Contributor

Relaxing the tolerance values to enable the unit test below with the FP16 data type on ROCm:

`unit/runtime/half_precision/test_fp8.py::TestFp8ComposabilityAcrossZero::test[fp16]`

```
        # Relax tolerance only for ROCm + FP16
        if is_rocm_pytorch() and model_dtype == torch.float16:
            rtol, atol = 3e-07, 3e-05
```

cc: @jithunnair-amd
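
For context, here is a minimal, self-contained sketch of how relaxed tolerances like these would typically be applied in a tensor comparison. The `assert_model_outputs_close` helper, the default tolerances, and the `is_rocm_pytorch()` implementation below are illustrative assumptions, not the actual test code from `test_fp8.py`.

```python
import torch


def is_rocm_pytorch() -> bool:
    # Hypothetical helper: ROCm builds of PyTorch expose a non-None torch.version.hip.
    return getattr(torch.version, "hip", None) is not None


def assert_model_outputs_close(actual: torch.Tensor, expected: torch.Tensor,
                               model_dtype: torch.dtype) -> None:
    # Illustrative default tolerances for the comparison.
    rtol, atol = 1e-07, 1e-05
    # Relax tolerance only for ROCm + FP16, mirroring the change in this PR.
    if is_rocm_pytorch() and model_dtype == torch.float16:
        rtol, atol = 3e-07, 3e-05
    torch.testing.assert_close(actual, expected, rtol=rtol, atol=atol)
```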

@loadams loadams enabled auto-merge (squash) June 20, 2025 15:03
@loadams loadams merged commit d33baf0 into deepspeedai:master Jun 20, 2025
9 checks passed
Antlera pushed a commit to Antlera/DeepSpeed that referenced this pull request Jun 27, 2025
lpnpcs pushed a commit to lpnpcs/DeepSpeed that referenced this pull request Jul 30, 2025
mauryaavinash95 pushed a commit to DataStates/DeepSpeed that referenced this pull request Oct 4, 2025