🐛 Describe the bug
In #99272 (comment), @awmartin wrote:
I get the following warning for autocast on MPS using pytorch 2.6.0 nightly:
```
sam2/.env/lib/python3.11/site-packages/torch/amp/autocast_mode.py:332: UserWarning: In MPS autocast, but the target dtype is not supported. Disabling autocast. MPS Autocast only supports dtype of torch.bfloat16 currently.
```

My offending code (taken from Segment Anything 2.1 sample code) appears to reference the proper dtype:

```python
with torch.inference_mode(), torch.autocast("mps", dtype=torch.bfloat16):
```

However, it seems the actual supported type is torch.float16, as per this line. Is this expected? Or is the warning message on line 330 incorrect?
There had been some back and forth regarding which type to support - fp16 or bf16 (ref #99272 (comment), #99272 (comment)). A decision was made to go with fp16. I think this was an oversight when going from bf16 to fp16.
The error message should be updated to "MPS Autocast only supports dtype of torch.float16 currently." Consider also adding a regression test to prevent a similar bug in the future.
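One possible shape for such a test is sketched below. The test name, the use of pytest, and the assumption that torch.bfloat16 remains an unsupported MPS autocast dtype (as in the repro further down) are mine; only public API is used:

```python
import re

import pytest
import torch


def test_mps_autocast_warning_names_supported_dtype():
    # Assumption: torch.bfloat16 is not a supported MPS autocast dtype, so
    # entering autocast with it emits the "target dtype is not supported"
    # warning. The regression check is that the warning names the dtype MPS
    # autocast actually supports (torch.float16), not torch.bfloat16.
    if not torch.backends.mps.is_available():
        pytest.skip("MPS not available")
    with pytest.warns(UserWarning, match=re.escape("torch.float16")):
        with torch.autocast("mps", dtype=torch.bfloat16):
            torch.tensor(1)
```

In PyTorch's own test suite this would presumably live alongside the existing MPS tests and use the repo's TestCase helpers; the standalone pytest form here is just for illustration.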
Minimal repro:

```python
import torch

with torch.autocast("mps", dtype=torch.bfloat16):
    x = torch.tensor(1)
# /Users/hvaara/dev/pytorch/pytorch/torch/amp/autocast_mode.py:332: UserWarning: In MPS autocast, but the target dtype is not supported. Disabling autocast.
# MPS Autocast only supports dtype of torch.bfloat16 currently.
#   warnings.warn(error_message)
```

I was able to repro on main and v2.5.0.
Fix incoming.
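For illustration only, here is a standalone, hypothetical version of the check this issue is about, with the corrected sentence. The helper name and return convention are mine; the real logic lives inside torch.amp.autocast.__init__ in torch/amp/autocast_mode.py:

```python
import warnings

import torch


def _check_mps_autocast_dtype(fast_dtype: torch.dtype) -> bool:
    # Hypothetical standalone sketch of the MPS dtype check; only the
    # message text needs to change for this bug.
    supported_dtype = [torch.float16]
    if fast_dtype not in supported_dtype:
        warnings.warn(
            "In MPS autocast, but the target dtype is not supported. Disabling autocast.\n"
            # The bug: this sentence previously said torch.bfloat16, which
            # contradicted supported_dtype above.
            "MPS Autocast only supports dtype of torch.float16 currently."
        )
        return False  # autocast gets disabled
    return True
```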
Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gitb9618c9
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: version 3.30.1
Libc version: N/A
Python version: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:13:44) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] onnx==1.16.2
[pip3] optree==0.12.1
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] torch==2.6.0a0+gitb9618c9
[pip3] torch_geometric==2.4.0
[pip3] torch-tb-profiler==0.4.3
[pip3] torchao==0.5.0
[pip3] torchaudio==2.5.0a0+ba696ea
[pip3] torchmultimodal==0.1.0b0
[pip3] torchvision==0.20.0a0+0d80848
[pip3] triton==3.0.0
[conda] bert-pytorch 0.0.1a4 dev_0
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] numpy 1.26.0 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi
[conda] torch 2.6.0a0+gitb9618c9 dev_0
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torchao 0.5.0 pypi_0 pypi
[conda] torchaudio 2.5.0a0+ba696ea dev_0
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchvision 0.20.0a0+0d80848 dev_0
[conda] triton 3.0.0 pypi_0 pypi